How to BACKUP LOG WITH TRUNCATE_ONLY in SQL Server 2008, R2, 2012, 2014, 2016, 2017, 2019

BACKUP LOG WITH TRUNCATE_ONLY is a dangerous command: it empties out the contents of your SQL Server’s transaction log without really backing it up.  Database administrators sometimes run this command right before shrinking their log file with a DBCC SHRINKFILE command, thereby freeing up drive space.
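On SQL Server 2005 and earlier, that pair of commands looked something like this (MyDatabase and the log file name here are placeholders):

```sql
-- SQL Server 2005 and earlier only; this breaks the log backup chain.
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY;
DBCC SHRINKFILE (N'MyDatabase_log', 1024);  -- shrink the log file; target size is in MB
```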

Why ‘truncate_only’ is not a recognized backup option.

When you truncate transaction logs, you lose the ability to recover to a specific point in time. You shouldn’t be running this command except during extreme emergencies. Unfortunately, administrators started running it on a regularly scheduled basis, and then they got surprised when they couldn’t restore the way they wanted.

Microsoft recommends that instead of truncating logs, you switch to the simple recovery model.  That way you don’t generate logs you won’t be using, and you won’t incur performance impacts from repeatedly filling and truncating the logs.  You also remove the need to regularly back up the transaction log.  This approach has plenty of drawbacks, though – if something goes wrong with your database, your only option will be to restore the previous full backup.  You could lose hours – maybe even days – of data.

To stop people from shooting themselves in the foot, Microsoft removed this capability completely from SQL Server 2008 onward.  If you try to use the old command, you just get an error.
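In SQL Server 2008 and later, the old syntax fails outright (MyDatabase is a placeholder name):

```sql
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY;
```

```
Msg 155, Level 15, State 1, Line 1
'TRUNCATE_ONLY' is not a recognized BACKUP option.
```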

The only official workaround in SQL Server 2008 and newer is to switch the database’s recovery model to simple as shown in Books Online.  This empties out the transaction log, thereby letting the DBA run a DBCC SHRINKFILE afterwards, then switch the recovery model back to full.
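The whole sequence, sketched with placeholder names:

```sql
-- Only in an emergency: this breaks the log backup chain.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
DBCC SHRINKFILE (N'MyDatabase_log', 1024);   -- target size in MB
ALTER DATABASE MyDatabase SET RECOVERY FULL;
-- Take a full backup right away so log backups can work again.
```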

That solution still suffers from most of the same problems as using TRUNCATE_ONLY – the database’s recoverability is compromised.  It’s just as bad a solution, but unfortunately Microsoft can’t remove that workaround, since we do need to put databases into simple recovery mode for other reasons.


Don’t try what you’re about to see at home. We’re what you call experts.  We’ve got years of experience that keeps us safe.  Just like TRUNCATE_ONLY, this solution has a ton of drawbacks and compromises your recoverability.  This should only be done in cases where a log has grown out of control and must be erased or else the system may crash.  In any other situation, you should consider backing up the log with conventional means.

We can fake it by not writing our backup to a real device.  SQL Server lets us use the NUL: location as a backup target, so the following will do a log backup without actually saving the contents anywhere:
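The command itself is a one-liner (MyDatabase is a placeholder):

```sql
-- WARNING: nothing is actually backed up; the log records are simply discarded.
BACKUP LOG MyDatabase TO DISK = 'NUL:';
```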

Remember, we’re still not fixing anything here: whatever caused the log file to grow in the first place can happen again and put us right back where we started.

Your data is not actually backed up. This is a giant problem.  Don’t leave this script lying around where a junior DBA might see it and reuse it for regular backups.  This should only be used to impress your friends with your useless knowledge of SQL Server, much like I’m doing here.

Other Common Questions About Transaction Log Backups

Q: Why shouldn’t I shrink log files?
A: When SQL Server needs to grow the log file back out, it’s a blocking operation. Everything in the database is put on hold while the log file grows. We can avoid this for data files by using Instant File Initialization, but that doesn’t apply to log files.

Q: But I had a one-time log file growth and I swear it’ll never need to grow that big again.
A: Okay, cool – go ahead and shrink the log file this once, but make sure you leave enough log file space for normal operations.
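For a one-time shrink that leaves room to spare, DBCC SHRINKFILE takes a target size in megabytes (names and sizes here are placeholders):

```sql
USE MyDatabase;
DBCC SHRINKFILE (N'MyDatabase_log', 10240);  -- shrink the log to 10GB, not to zero
```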

Q: How big should the transaction log be for normal operations?
A: I generally start at 25% of the data file size. If you plan on rebuilding your indexes, the log needs to be large enough to hold the size of your largest object, plus space for transactions that are happening during your index rebuilds. If your database is dominated by a single large object, then it might need to be bigger than 25%. Plus, if you’re using things like replication, mirroring, or AlwaysOn Availability Groups, you’ll need enough log space to hang on to transactions during replica downtime – until that replica comes back up and can download the rest of the transactions it missed during the outage.

Q: How can I find out if Virtual Log Files (VLFs) are a problem for me?
A: Run our free sp_Blitz®, a health check stored procedure that catches databases with abnormal VLFs, bad growth configurations, and much more.
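If you’d rather check VLF counts by hand, SQL Server exposes them directly (sys.dm_db_log_info requires SQL Server 2016 SP2 or later):

```sql
-- One row per VLF on SQL Server 2016 SP2 and later:
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());

-- On older versions, DBCC LOGINFO returns one row per VLF instead:
DBCC LOGINFO;
```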


66 Comments

  • Another reason for litespeed and the benefits to having chosen the product for my own installations.
    Thanks, this is pretty cool! (as scary as the topic is 😉 )

  • +1 for the Mythbusters Reference. 🙂

  • Brent – I’m a big fan and love reading your blog but this post is odd to me, for several reasons.

1) You say that “Microsoft even recognized the problems with [TRUNCATE_ONLY] and removed this capability completely from SQL Server 2008” – is that really why they removed TRUNCATE_ONLY? My understanding was that it was removed because it is redundant now that we have the SIMPLE recovery model. I haven’t heard – I honestly don’t know – but I know of no “problems” with TRUNCATE_ONLY that led to its demise; only that switching to the SIMPLE recovery model instantly has the desired effect (thus rendering TRUNCATE_ONLY surplus to requirements). If someone from MSFT would chime in and set the record straight, that would be kinda cool.

2) There’s no need to use LiteSpeed (though a nice plug) – you can use the NUL: drive: BACKUP LOG MyDb TO DISK='NUL:'. That has the same effect – it tricks SQL into thinking that a log backup has taken place when, in fact, it hasn’t.

    3) You imply that switching to SIMPLE recovery model is “just as bad of a solution [as using TRUNCATE_ONLY]” yet I would imply that your suggested solution (or my use of the NUL: drive) is a much, much worse solution. By switching to SIMPLE, you are doing it the recommended way. Using the NUL: drive or LiteSpeed is just making life far more complicated than it needs to be. Can you make a case in which it is actually better to use LiteSpeed with -nowrite than to actually go “by the book”? I cannot think of a single scenario in which using LiteSpeed is preferable. In fact, I can’t think of a single “pro”, only “cons”. If the log has “grown out of control” then switching to SIMPLE will clear out the inactive entries nearly instantly (and generally faster than a faux log backup).

    In fact, I’ve covered doing this in a training video in my SQL Server 2008 DBA training video course:

    * “Help! My Transaction Log Has Filled Up!” –

    Like I said, I do enjoy reading your blog and tweets but I think that the SQL community should be aware of (a) the alternatives, and (b) that this goes against the “recommended” way from MSFT.

    • Hi, Scott! Great questions.

      1. I couldn’t tell you what the reasoning was inside Microsoft, but it just appears that way to me and to the folks I’ve talked to.

      2. That’s a great point! That works too.

      3. Absolutely, that’s why I point out repeatedly in the article that you shouldn’t use this trick. I’m just showing people that it can be done. You mentioned that doing it my way is “far more complicated” – keep in mind that my method only requires one statement. Your method requires multiple statements.

      You’re absolutely, completely correct that this method is dangerous and is not recommended, and I go way, way out of my way throughout this article to point that out. If there’s anything I can say or do in here to make it more clear, I’d love to hear it. Thanks!

      • Brent –

        Kudos for replying so quickly – I’m generally more of a lurker than not so please don’t take my only responding with something “negative” to say to imply anything other than this post inspired me to stop lurking! Hopefully we’ll get the chance to meet up at PASS or DevConn this Fall.

I know that there is a lot in this post to let people know *not* to do it this way, but here’s the thing: the page title and the H3 tags on the page contradict what you’re saying. The page title is “How to Do BACKUP LOG WITH TRUNCATE_ONLY in SQL Server 2008” but this post is not about how to do that; it’s about how to use an undocumented workaround in a third party software tool. Another example is that you have the section on how to use LiteSpeed titled “How to Do TRUNCATE_ONLY in SQL Server 2008” yet it’s really about using LiteSpeed’s -nowrite switch, not about how to quickly mark the inactive entries in the log as free space.

        Now sure – the second sentence of that section says, “Don’t try what you’re about to see at home” and talks about the “cons” but remember: it’s in a section called “How to Do TRUNCATE_ONLY in SQL Server 2008” on a page titled “How to Do BACKUP LOG WITH TRUNCATE_ONLY in SQL Server 2008”. What are people supposed to think who come here from a search for those phrases? What would *you* think if you were a junior DBA coming from SQL 2000 to 2008 trying to figure out what to do and you saw that headline and this page title? You’d probably think, “Well, this guy’s really well known and super smart so I’ll do it with LiteSpeed. Besides, he said something about SIMPLE recovery model but he didn’t show any examples so that must not be the ‘real’ way to do it.”

        So you said in the comment follow-up, “If there’s anything I can say or do in here to make it more clear, I’d love to hear it.” It’s your blog, of course, so I’m only making suggestions but, to me, I’d prefer to see the page title and the title of that section made more clear, something akin to “How to Use LiteSpeed to Simulate BACKUP LOG WITH TRUNCATE_ONLY in SQL Server 2008”. Or keep the page title but add a paragraph that fleshes out using the SIMPLE recovery model and taking a backup (pros/cons/code).

        Thanks for taking the time to listen and keep up the great work!

        • Scott – I love it! Your questions segue perfectly into an article in my five-part series this week on how to blog better. One of the posts talks about how to use page titles and H3 tags. I’ll give you a hint: those tools are for what people are searching for, not to summarize your content. If I put an H3 with “How to Use LiteSpeed to Simulate…” nobody will ever find this post, because DBAs aren’t sitting around asking themselves, “Gee, how could I use LiteSpeed to simulate a truncate_only?” Instead, you have to write H3’s in a way that matches what people are searching for.

I wish I had time to do a full writeup on the pros/cons/code of switching to simple mode, but unfortunately I can’t cover everything here in full, so what I’ve done is added a link to Books Online to cover how to do that.

          I applaud your participation! I’m glad you popped up out of the shadows, and you’re making the post better. If I just holler with a bullhorn and nobody responds, then my work doesn’t get any better. (Believe me, hahaha.) Look forward to meeting you!

          • You’ve given me a blog idea, this isn’t the first time I’ve seen the backup to nul trick (of course the others didn’t have 16 warnings on them) 😉

Think I’ll tackle backup to nul vs simple recovery – effects, side effects, and warnings.

      • I have tried the “new and improved” method from Microsoft…and it doesn’t work! I inherited a database whose log has grown enormous. I have tried the “set to simple”, “back to full”, “back to simple”…and the log is still gigantic.

        Like you mentioned earlier, I too am a professional and I know when to use TRUNCATE_ONLY and when not to use it.

        You would think that Microsoft would trust professionals to know when to use something and when not to! If that’s their reasoning they better take away TRUNCATE TABLE before somebody hurts themselves!!!!

  • Nice plug for Scott too BTW.

  • Interesting post… good info from Scott, too.

    This is one example of why I get frustrated with SQL Server. With each new version, it seems like formal training is required (or lots of research online) to keep up with all of the changes. Regardless, this seems like a good change to make for 2008.

  • Brent,

    I followed your recommendation, thanks for the info. Now, how do I restore from the transaction log backup I just performed?


    • You have to open the computer’s cupholder, and put in your recovery CD.

      • Hi Brent,

        I opened the cupholder as you suggested and found a cupcake. Would you like the cupcake? I can send it to you. It has blueberries. Not sure if you are allergic.

        I placed my recovery CD into the cupholder but nothing happened.

        I need to recover my data quickly. What should I do next?


Just a note – we put the -nowrite into LiteSpeed for internal performance testing only; that’s why it isn’t documented.

While I haven’t tested the -nowrite, my gut feel is it and the backup to nul are both way, way worse than switching to simple recovery or (on SQL 2005) backup with truncate only.

    If, on SQL 2005, someone executes a backup log with truncate, all log backups after that fail with an error message (this unfortunately was not true on SQL 2000). Hence it’s pretty apparent that the log chain is broken.

BACKUP DATABASE [Testing] TO DISK = N'D:\Develop\Databases\Backups\Testing_2005.bak'
BACKUP LOG [Testing] WITH TRUNCATE_ONLY
BACKUP LOG [Testing] TO DISK = N'D:\Develop\Databases\Backups\Testing_2005.trn'

    The last one returns the error
    Msg 4214, Level 16, State 1, Line 1
    BACKUP LOG cannot be performed because there is no current database backup.

    A log backup without writing or made to the nul device does not tell SQL that the backup chain is broken. Because it thinks that the log backup was written to disk, subsequent log backups succeed and create files.

BACKUP DATABASE [Testing] TO DISK = N'D:\Develop\Databases\Backups\Testing_2005.bak'
BACKUP LOG [Testing] TO DISK = 'NUL:'
BACKUP LOG [Testing] TO DISK = N'D:\Develop\Databases\Backups\Testing_2005.trn'

    The last log backup succeeds.

    If one tries to restore a DB (with logs) where someone has done a backup to nul, no surprise, it fails
Restore Database Testing From Disk = N'D:\Develop\Databases\Backups\Testing_2005.bak' WITH NORECOVERY, REPLACE
RESTORE LOG Testing FROM Disk = N'D:\Develop\Databases\Backups\Testing_2005.trn' WITH RECOVERY
    Msg 4305, Level 16, State 1, Line 1
    The log in this backup set begins at LSN 1611000000015500001, which is too recent to apply to the database. An earlier log backup that includes LSN 1611000000015000001 can be restored.

    I think it needs to be made crystal clear (anywhere that the ‘trick’ of backing up to nul is mentioned) that it does not have the same effect as Backup Log with truncate only or a switch to simple recovery.

    Hence, if the log really does need truncating, please, please, please, rather switch to simple and back to full rather than playing tricks like the backup to nul and leaving a log chain that looks intact but is totally useless because of a file that was written nowhere.

    • Gail – I totally agree that it’s dangerous, and that’s why I put in bold print in the last paragraph, “Your data is not actually backed up.”

      At the same time, whether you switch to simple mode or if you do this trick, you can’t recover until you do another full backup. Whether you do log backups and they succeed or fail, they’re equally worthless in this scenario. It’s certainly dangerous to think that your log backups worked successfully, and that’s why I wanted to emphasize that your data is not backed up.

      Personally, I like this solution (instead of the simple mode switch) because I could fire this off with a single one-line command, then restart my backup jobs as if nothing ever happened. If I know I don’t have recoverability anyway and that I can’t do another full backup until my normal nightly backup window, this is a way of fixing it right away without having job failure alarms go off all over the place.

      I don’t want to make it sound like I think this whole scenario is a good idea. No matter what you do, no matter which method you take, when you dump the log, you’re screwed and your log data is gone forever.

      • The difference is that it’s far less obvious afterwards that something’s wrong if it’s done this way.

        • Yes, I agree with you there. If you work in a shop with multiple DBAs, you should definitely use the recovery model route just so everyone knows what you did.

      • I know this is an old post, but I want to verify something. You say that there is no way to recover without a new full backup. My understanding is you can use even a differential backup to restart the chain without the need for a full. Is this not correct?

        • Robert Carnegie
          March 14, 2014 5:47 am

          Here’s why: Transaction log backups are useless with simple recovery. In simple mode, transactions aren’t retained in the transaction log, but are kept only until the transaction is executed. (Any alteration to data is a transaction.) So there is nothing in the transaction log that you can back up.

          If you have time to make as many full backups as you want to, and accept that any data not in that backup is lost in your disaster scenario, then do that. Incremental or differential backups are only if you need more, or to pass some Database Administrator exam.

          With full recovery, you can get back all your data, even if I steal the hard drive with your data on it, but leave you the disk with the transaction log on. And even if you only have your full backup made a month ago.

          Brent is appalled by the data we lose by simple recovery, but it’s not his data, so what does he care. More an issue for me is that “simple” didn’t work before SQL Server 2008. Your log file just grew and grew anyway.

          • Robert – thanks for the comment, but that’s actually not correct.

            “With full recovery, you can get back all your data, even if I steal the hard drive with your data on it, but leave you the disk with the transaction log on. And even if you only have your full backup made a month ago.”

            Actually, no. Let’s imagine a very simple scenario with a database that has exactly one table, and exactly ten records. For easy math, let’s say that each record is exactly 1MB in size. Our database size is 10MB, and just for pretend sake, let’s say our log is 3MB in size.

            Step 1 – Take a full backup.
            Step 2 – Insert three rows. That three rows worth of data is inside my log file.
            Step 3 – Back up the transaction log, thereby marking all 3MB of log file space as available for reuse.
            Step 4 – Insert another three rows. That three rows of data is now inside my log file, having overwritten the previous 3 log records.
            Step 5 – You steal the hard drive with the data, leaving the log file drive behind.

            How am I going to retrieve the records from step 2? That data isn’t in the log file.

            I’m using an overly simplistic scenario here to illustrate a point – the log file contents don’t stick around forever.

            Also, I’m not sure what you meant by “simple didn’t work before SQL Server 2008” – I’ve worked with plenty of SQL 2005 databases where simple works just fine, and the log file doesn’t grow forever. Can you point to any resources or repro scripts that support what you’re suggesting?


          • I concur with Brent. I have been using Simple recovery model since SQL 2000 (been using this DBMS into the darker past than that…) and have not had the issues described. 2005 was absolutely fine in TLog management….
            As for only using diff recovery to pass an exam, I have had several examples where simple recovery is all that was needed, but a nightly full backup of a huge database was untenable. Therefore weekly full and daily diffs did the trick!

          • Robert Carnegie
            March 16, 2014 10:21 am

Mine seems to be the last post with a “Reply” link. Huh.

            I guess I’m out of date about simple recovery, although people can still have trouble with it, such as if there’s a transaction not committed, or while backing up the database. And then there was this
            although apparently that’s a problem in SQL Server 2012 that was fixed in Service Pack 1. I don’t know if that problem existed before 2012.

            Way back – so maybe in SQL Server 7 or 2000 instead – I was told, and believed at the time, that simple recovery just wasn’t reliable at all for avoiding growth of the transaction log. I may have missed someone saying “Yay, it works now!”

            I count myself as not fully understanding good backup (including restore of course) and transaction log policy. I’m not in control of our servers on 2005 and we don’t even have enough disk space for one day’s transaction logs, so we perform several log truncate operations while loading data. I have protested about that. But, is there not a way to use the data backed up in Step 3, plus the log? No? Well, never mind. Also, I stole the disk to make a point, but hard disks do just break.

You do need to know about differential and incremental backup to pass the exam, and they have a place when you don’t have time or resources for full backups taken as often as you need backups, but you are going to have to take a full backup sometime, and you can do it on a live server, and another backup strategy makes your backups more complicated and more time-consuming to use. And you may be depending on someone who is much less invested in the process than you are, to put in the right tape for a backup at the right time. You’re making it more complicated for that person, too. So it’s a lot harder to do in real life than in the exam. You need to understand it very well before you undertake it, and I’m confident that a lot of the people reading here do not. I myself assumed that differential backup can only be done by backing up a full-recovery transaction log. So maybe I wouldn’t pass the exam.

Can’t reply to Brent directly. No link. Sorry. (and yes, my new hobby is “replying to ancient internet comment threads.” so sue me.)

Brent, in your 5-step example, you ask “How am I going to retrieve the records from step 2? That data isn’t in the log file.”

            The answer is, you’re going to restore it from the Transaction Log backup that you made in step three. Your recovery path is:

            a. Restore the full backup from step 1
            b. Restore the T-Log Backup from step 3
            c. Restore the T-Log

            You should now have all 16 records.

            I also have to quibble with step 4, though.

            >> Insert another three rows. That three rows of data is now inside my log file, having overwritten the previous 3 log records.

            Not really overwritten. Microsoft, in their infinite wisdom, has made sure that diskspace used for the first three records is still allocated to your T-Log file. Thus, after step 4 the T-Log file is twice as “full” as it was after step 2. More will be added to it when you add more records tomorrow. When all of the space allocated is used, SQL will request more disk space from the o/s.

The T-Log will thus continue to grow until it runs out of disk space. (Yes, it has happened. It has happened to me.) At this point, with nowhere left for your T-Log to grow, your SQL Server simply stops serving SQL (and your users start making urgent phone calls).

            This is why the SQL backup plan must include the occasional T-Log Backup, Truncate and Shrink.

          • Tom – that is simply incorrect. The space in the log is marked as available for reuse when the transaction is complete and has been backed up.

            In any situation where you don’t believe that’s the case, read this:


  • Gail,

You are 100% right. The -nowrite is the same as backup to NUL: – SQL Server thinks there was a valid backup taken, whereas a truncate_only or flip to simple is a known quantity and changes the recovery point for the database.

I’m still sitting here laughing at this whole thing. Everybody’s missing the point of the blog post. Ordinarily, I never would have even posted this technique in public, but when I saw SQL Server Magazine publish an article on how to shrink your database files without so much as mentioning that there might be drawbacks, I couldn’t sit idly by. Editors have a responsibility to filter dangerous information and make sure that it either doesn’t get published or that it comes with plenty of warnings about the dangers involved.

    Go through this post and count the number of times I warn the reader that there’s drawbacks and dangers. I even put in bold print, “Don’t try what you’re about to see at home.” How much more specific can I be? Do you want me to use the blink tag? Have an audio soundtrack on the page with screams of terror?

    The point of the post is to show how to provide dangerous information. If you absolutely have to provide dangerous information, do it with tons of warnings. Let’s not get sidetracked by the fact that it’s dangerous information – we all agree it’s a crazy trick.

    • I got your point, just adding additional warnings about a facet of this that I thought wasn’t quite clear enough.

The point is muddled – let’s just call it muddled since I’m about to go muddle a nice mojito. Your page title and the H2/H3 tags say one thing while your paragraph text and comments say something else. And since we all know that search engines l-o-v-e page titles and H2/H3 tags, this post is bound to rank for anyone searching for “TRUNCATE_ONLY in SQL Server 2008” – and if they aren’t careful, they could pick up some of that “dangerous information” you mentioned!

  • And my point was you are using an undocumented feature of a third party product in a way it was never intended for and why it is in the product.

    After that I was just confirming Gail’s assumption about the –nowrite.

    I almost put the same thing in my first comment but didn’t think it was completely needed since you mention several times that your database will be FUBAR’ed if you do this.

    Oh, one last thing.

    This trick is extremely dangerous…. 😉


  • Craig O'Brien
    March 12, 2010 10:00 am

    Just wanted you to know, I found this stream of argument to be hilarious and petty, like an episode of Seinfeld.

My company is small and we have no DBA; I handle the tasks a real DBA probably would at a larger company. Even I knew enough not to use truncate_only in production. I did once, years ago, in an emergency, but immediately did a full backup.

We did use truncate only regularly on dev databases to save space (usually after restoring from production). We don’t need those backed up at all.

Thanks for the BACKUP LOG @dbName TO DISK='NUL:' – it is perfect for my need, and I wouldn’t have found it if not for the heading and if not for the reply, so thanks to both Brent AND Scott.

It’s been a few months and we’ve all had a chance to simmer down…

I for one am happy to have found the code tidbit TO DISK = 'NUL:' to add to my error-generating BACKUP LOG MyDatabase – I am testing TDE and having to backup, restore, encrypt, unencrypt.

    I can tell you all that many DBA’s that inhabit corporations of America have broken the chain of backups with the “truncate_only” and caused jobs to fail…some may have never known…

    This tidbit is useful in a testing environment with backup jobs/maintenance plans, when backups are made not to recover data from disaster, but to test encrypted backups, encryption options, compression, etc.

    So I would add to all the warnings and other thoughts that the LiteSpeed trick and/or the NUL disk trick are good to know for the well-rounded DBA, kind of like shooting a snake with a six-gun.

I want a single-table backup in a particular database in SQL Server 2005 – is it possible or not?

    • Usman – not easily, no. You can put the table on its own filegroup, and then back up that filegroup, but it’s not necessarily easy to restore just that one table by itself.
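A sketch of the filegroup approach Brent describes (all names and paths here are placeholders):

```sql
-- Assumes the table was created on its own filegroup, e.g. SingleTableFG.
BACKUP DATABASE MyDatabase
    FILEGROUP = 'SingleTableFG'
    TO DISK = N'D:\Backups\MyDatabase_SingleTableFG.bak';
```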

  • Hi

I have only a client operating system (Windows XP), but I have SQL Server 2005 on that system. How do I configure log shipping from my system’s SQL Server to another system that is also a client running SQL Server? Please give the step-by-step process if you have screenshots.

  • Hi,

Simple mode in SQL Server 2008 SP1 cumulative update 9 does not work.
    As you said:

    Microsoft recognized the problems with TRUNCATE_ONLY and removed this capability completely from SQL Server 2008.

and it is a big deal to shrink/truncate a transaction log file with GBs of initial size

For SQL 2008:

Alter Database [YourDatabase] Set Recovery Simple

Alter Database [YourDatabase] Set Recovery Full


  • So I backup transaction logs several times daily, do a nightly backup, then on my weekly maintenance after the weekly complete backup and reindexing I truncate the log because the maintenance makes it so large before I start the weekly routine again. Is there a better option? The maintenance creates a log that consumes my disk and the boys in IT hate to allocate more SAN…

    • Brent – if you keep using the log space every week, why do you keep shrinking it? You’re going to need it again next week for the next maintenance window, and growing the log file is a dead-slow operation in SQL Server because Instant File Initialization doesn’t work for the log file.
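One way to avoid the slow autogrowth Brent mentions is to pre-size the log once for the maintenance window (names and sizes here are placeholders):

```sql
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 50GB, FILEGROWTH = 1GB);
```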

  • Mohammed Muddassir
    April 14, 2011 2:18 am

    Hi Brent,

So, the conclusion or actual (as assumed) solution would be to set the recovery to simple, back to full, and DBCC SHRINKFILE?
I’ve got an MSS 2008 R2 db hosting SharePoint, OCS, and a few other infrastructure services. The log disk is 102 GB and it’s almost full. Fortunately, our server admin gave us 2 more GB of space. One of my friends suggested I detach the data file and log file and move the log file to a new drive or disk, followed by a full restore after the db backups. What is your take on it?


  • Brent, thank you for the blog, you make great points, often…

I have to say, since our transition from SS2005 to SS2008R2, I sorely needed the DISK='NUL:' parameter to fake out SS (as was also recommended by Scott W much earlier) for my data movements to work. Without it I often found my db becoming overwhelmed by the large data moves we deal with.

    I have been working through a data warehouse staging system where we load, cleanse and transform data before moving to warehouse and analytic facts and dimensional objects.

    One of the great features of the TRUNCATE_ONLY option was for me to keep the staging logs fit and trim, since of course the data is coming from someplace and going to someplace else.

    In my experience the big no-no’s of lost logs has everything to do with OLTP or finalized OLAP db’s and little with transitional data bases fed by a consistent source and feeding data to yet a different consistent target.

    Is the kind of work I’m doing that much of the exception or am I thinking this wrong…. thoughts and recommendations?

    • JB – the work you’re doing is usually called ETL or ELT, and that work is not usually done in the same database as production. When you need to load staging data, use a separate staging database and leave it in simple recovery mode.

  • Let me give an example of what has really got me peeved!! I restored a copy of our production database onto a development server. When restoring the backup, the log file is 68GB in size. I don’t want nor do I need a log file – especially one that large on a development box.

    I have changed the mode from FULL to SIMPLE. I then did a DBCC SHRINKFILE of the log file. The log file is still 68GB in size. I have tried this method about ten times – the file is not shrinking.

    I’m so glad they got rid of TRUNCATE ONLY…
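For reference, the switch-and-shrink sequence being attempted looks like this (MyDB and MyDB_log are placeholder names). If the file refuses to shrink, the active portion of the log may sit at the end of the file; issuing a CHECKPOINT and retrying DBCC SHRINKFILE sometimes helps:

```sql
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
USE MyDB;
CHECKPOINT;                          -- in simple recovery, marks log space reusable
DBCC SHRINKFILE('MyDB_log', 1024);   -- target size in MB
```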

  • Thanks for all of the above. However, would anyone have an idea of how to shrink tempdb log files? I ask because one cannot run a backup on tempdb, full or simple. A current problem we are having is that the tempdb log fills up and crashes one of our import procedures when moving data from staging to live. Running on SQL Server 2008 R2.

  • Just so it’s to hand, we use the following script to ‘mimic’ truncate_only:

    BACKUP LOG [MyDB] TO DISK = 'NUL:'   -- MyDB, MyDB_log: placeholder names
    DBCC SHRINKFILE('MyDB_log')

    (We have copies of production systems used by our customers for training, test-and-trial, etc – which don’t need the log, but which we keep as mimics of the live environment – hence at least one ‘legitimate’ reason for this recovery-but-no-recovery situation.)

  • When bulk loading data into my SQL database I like to clear the log after each table is loaded.

    Thank you for the "BACKUP LOG MyDB TO DISK='NUL:'" command line.

  • The commands below work fine.


  • Hi there everyone,

    Should I understand it so that when MSSQL Server does a backup of the log files, it truncates the files as part of the backup?
    If truncate_only is bad for recovery, how should we address log files that grow huge over a month or so?
    We run backup software on the server, backing up log files every hour and taking a full backup every night, but still the log files grow very big (86 GB+).
    We need to somehow truncate/shrink these files regularly.

    I would appreciate any feedback on this.
    Thank you.

  • Nguyen Thanh Quan
    October 12, 2013 2:21 am

    I enjoy reading your blog. Thanks.

  • AlwaysOn issues…
    I have been mulling over several approaches to an issue I have, and wondered if you could give me some of your insight… having seen you at several SQLBits etc. I am sure you will have many comments 🙂
    I inherited an AlwaysOn system that will be using an ETL system to pull a large amount of data (relative to the disk capacity…) into a staging DB before moving it on to a transactional DB (I know – long story – some transactional data needs to be available for other people to update etc etc…)
    The Migration DB trans log just keeps growing! I can back it up on the primary, but it makes no difference… even though the AlwaysOn is in sync.
    How do you manage the trans log on an AlwaysOn server to ensure it checkpoints and, if not shrink, at least reuse the existing log space?

    • Tony – sorry about the delay on this! I somehow missed your comment. I’d start with a SELECT * FROM sys.databases, and check the LOG_REUSE_WAIT_DESC column to find out why the log isn’t available for reuse.
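That check, spelled out with only the relevant columns:

```sql
-- Why is each log not being reused? Common values: NOTHING, LOG_BACKUP,
-- ACTIVE_TRANSACTION, AVAILABILITY_REPLICA, DATABASE_MIRRORING, REPLICATION.
SELECT name, log_reuse_wait_desc
FROM sys.databases;
```

On an AlwaysOn system, AVAILABILITY_REPLICA would point at a secondary that has not yet hardened the log records.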

  • Brent,

    Quite a while back in this discussion, there was debate over the use case for the NUL: method. At my site, we use it to control the trans log on our testing/dev servers where mirroring is enabled for Production parity.

    Essentially, our test/dev databases must be in full mode to support the mirroring configuration, but we have no need for backups. Left unchecked, the trans log growth can (and often does) exhaust available space. A simple scheduled "BACKUP LOG [database] TO DISK='NUL:'" avoids the trouble and allows us to keep test/dev environments as close to Production as is practical.

  • Well, our synchronous mirroring affects transaction hardening times (redo log, etc…), so keeping it in place in testing, we reduce unexpected performance quirks (think algorithm time/blocking). Plus we run reporting and auxiliary applications from a 24hr and an hourly snap created from the mirror. We maintain this paradigm in the testing environments so that data timing is consistent. Also, we use the environment to sort the operational procedures in our UAT (system/sql updates/upgrades).

    Finally, I’ve had long philosophical debates with our very talented DB arch/dev which eventually led to me arduously retarding the commit times on the mirror which backed up the primary. So I’ve been toeing the line on keeping it in place.

    Enjoy the site, keep up the great work!

    • John – while synchronous mirroring affects transaction times, so does load from other queries. I’d argue that without the load, the mirroring alone isn’t going to replicate production – and in fact, you might find that adding transaction log backups gets you closer to production load.

  • Stephane Maquet
    June 15, 2016 8:51 am

    The option TO DISK='NUL:' is a very good solution for my specific usage.
    But I can't make it work within the sp master.dbo.xp_sqlmaint.
    Do you know any trick to have the same result using sqlmaint?
    I'm running SQL2008R2 and will soon migrate to 2016.

  • Using SQL mirroring, my DB servers are in recovery model full… (mandatory for mirrored databases).
    Thus they can't be in simple mode, and we have to back up the tlog in order to avoid a growing file size.
    Actually, this is what we are doing. But the tlog backup files are not needed (for us).
    In 8 years, we never used the tlog file to restore a database. The full backup file is enough for our usage.
    So, these tlog backup files are using disk space for nothing. And I searched for a way to have the benefit of the backup (reducing the tlog size after a checkpoint) without writing to disk.
    The option TO DISK='NUL:' was a very good solution for us… But our maintenance plans are based on xp_sqlmaint (for other technical reasons, mainly because of the deletion and naming options).
    So, I'm just curious to know if this trick TO DISK='NUL:' can be used with xp_sqlmaint.
    If not, we'll keep the tlog backup files for nothing.

  • Thanks for the workaround! I'm fully aware of "why not to truncate the log" or (my favorite) don't delete the ldf file. But sometimes ya-gotta-do-it. My situation was for testing purposes. We backed up & restored the Prod system so that we could experiment with purging a metric s-ton of old data (100s of GBs from a TB+ of data). So we needed to track log growth and timings (including load/locking on interactive users). Two questions I sought to answer: how many rows can we delete without causing the log to grow? Which led to: how many extra TX log backup operations will need to occur during the purge process?

    The restore process brought across pending logs that I needed to "backup with truncate." Plus, after trying different options, I needed to "purge" the log and try different methods, thus collecting alternate facts.

    When we’re done with this db it will be destroyed. So I don’t need the TX logs. Nor the backups. It was a place to play.

  • Hi –

    I apologize for not reading the full back and forth of the “should you have or shouldn’t you have” but came across this article while researching the following situation.

    On a Dev Server, a database log file grew to the maximum size of two terabytes and also basically filled up the drive (less than 6 GB free). The person who initially responded, dealt with it by adding another log file to the database on another drive.

    Since then we freed up space on the drive and tried to shrink the log file itself to reclaim the 2 terabytes of space, but neither changing the database to simple mode, nor backing up to NUL, nor detaching the database and re-attaching it has allowed us to shrink the log file. We are in the process of dealing with the issue by copying the data to another database, whereupon we'll drop the problem DB and rename. But I would like to know if anyone has any ideas of why we can't shrink the log file at this point, and what else could have been done.

  • The "NUL" or "NUL:" file is rejected in the backup wizard in SQL Server 2019 (OS error 0x80070057 SvlPathDoesPathExist, SQL Server error 3634). Is it possible to avoid that?

