I had sworn to myself that if I saw one more helpful article about how to shrink your SQL Server database files with DBCC SHRINKFILE or how to back up your log with TRUNCATE_ONLY, I was going to write a rant about it.

Epic Advice Fail

SQL Server Magazine just tweeted about their latest article, a reader-submitted solution on how to shrink your database files with ease.


To make matters worse, this particular article is by a Microsoft DBA and doesn’t include a single word about the problems involved with shrinking your database files.


Don’t shrink your database files just to free up drive space.  Stop.  It’s an unbelievably, disgustingly, repulsively bad idea.  Your disk drive space is for files, not for ornamentation.  You don’t get bonused based on the amount of free space on your drives.  Empty files don’t take longer to back up.  And so help me, if you find yourself shrinking databases so often that you have to automate it, you need to cut up your DBA card and reconsider your choice of career.

I’m not going to reinvent the wheel by telling you why.  Instead, I’m going to point to half a dozen posts explaining why this advice is just a flat-out epic fail:

I feel bad going nuclear on this article, but I’m not just venting about the author.  This kind of advice shouldn’t clear any kind of SQL Server editorial team either.

Coming next month: “How to Reduce Your Backup Times with the Truncate Table Command!”

Learn More About Why Your SQL Server is Slow

sp_Blitz®: Free SQL Server Health Check – You’ve inherited a SQL Server from somebody, and you have no idea why it’s slow. sp_Blitz® gives you a prioritized list of health and performance issues, plus gives you URLs for more details about each issue.

Our Free 6-Month DBA Training Plan – Every Wednesday, you get an email with our favorite free SQL Server training resources. We start at backups and work our way up to performance tuning.

SQL Critical Care® – Don’t have time to learn the hard way? We’re here to help with our quick, easy 3-4 day process that gets to the root cause of your database health and performance pains. Contact us for a free 30-minute sales consultation.

  1. You forgot this one, where I complained about the exact same thing you do: people who show people how to shrink their files but conveniently forget to tell them why it’s bad and why it should be reserved for emergencies only:

    Oh, the horror! Please stop telling people they should shrink their log files!

  2. Hey it could be worse. The article could be telling people how to free up space by deleting the transaction log.

  3. Sigh. That’s all I can say. Time to make my new moniker a reality, you all know my ‘involuntary DBA’ – now I give you ‘incompetent DBA’. Sigh.

  4. Unbelievable that editorial control wasn’t exercised there.

  5. These articles make me shrink in terror! Maybe in the next edition Microsoft can deprecate the shrinkfile capability.

  6. Dude – I couldn’t even deprecate auto-shrink, and I used to own the Storage Engine before I left MS. Don’t hold your breath for that one.

  7. The real problem is not the *shrink itself*, but HOW the SQL shrink command works (move the very last single page to the very first available open slot, repeat), causing table and index fragmentation.

    I still think someone needs to write a “smart shrink” utility that attempts to free space from the end of a SQL data file while *keeping it in the proper page order*, thus avoiding fragmentation (as much as possible). Not sure if this could be a third-party utility to have access to that level of internal data file detail, or if it would have to come from the MS SQL team.

    Sure, still might have some potential for mis-use, or over-use, but at least if we do run into that occasional unexpected disk space crunch, we’d have an option that doesn’t obliterate our performance. (I know, proper planning, blah blah blah. Sometimes stuff happens.)

    • Brad – no, that doesn’t completely solve the problem either. When you shrink a file, it’s usually because the database NEEDED that space at some point. When the database needs it again, you incur a serious performance penalty while the OS reallocates the disk space. Instant File Initialization mitigates that for data files when it’s set up correctly, but not for log files. Even with IFI, the file’s newly grown portion is rarely contiguous with the rest of the file. Presto, fragmentation. Combine this with SQL Server’s default growth settings of 1MB for data files, and you’ve got a fragmentation disaster.

      Since there’s no way to fix fragmented data files while the database is online (only with backup/restore or disk defrag utilities while SQL is offline, and those options still suck), the DBA shot himself in the foot.

      • Point taken, Brent, and I’m aware of all those issues.

        Still doesn’t mean that we don’t encounter this need all the time: the application team that finally purges the 10-year-old data we’ve been bugging them about; the rogue developer that tries to copy a 300M-row table to “see if he can reduce blocking”; the 700GB database that needs to be chopped down to 50GB after restoring to the DEV environment; the pager going off at 2:30 in the morning because there is only 9.9MB free on your data drive; the analysis server that alternates huge jobs between two huge databases, but with only enough “working space” to grow one at a time; the “archive only” database that has 250GB of internal free space; and the business unit that gets charged by the GB for storage, etc…

        None of these are “good”, certainly, but the fact is that these are daily realities, at least in my environment. If we had a “better” shrink utility, we *at least* wouldn’t have to deal with the shrink-increases-fragmentation issue, although the other items you mention are still something to remain aware of.

        I guess I’m really advocating (mostly) the same thing you are: DBA knowledge and awareness of the problems/issues with shrinking, and a careful and selective application of it only when needed. Absolutely, the article you link does *not* contain the level of awareness and warning that should be included.

        As much as I respect Paul R, his “never, ever, ever shrink data files, no matter what, ever” philosophy, while laudable, is simply not practical. What are you going to do, tell your paying client, “Sorry, I know you are running out of space for your database BigData, but I can’t run a shrink on BigData_Archive to keep your server up and running. The drive will fill up at 2pm, and your website will be down. Thank you for your business, your invoice is in the mail.”

        • Brad – again, if you’re doing daily shrinks, I would argue that you need to address the problem, not the symptom.

          • Sure, I’ll agree to that. How about, “Use data shrinks only in highly exceptional circumstances, only after considering other better options, and only with full awareness of the problems it can cause.”

  8. on editorial control… I just took a look at the masthead and of about 50 names, I count 26 names alongside titles that include some form of “editor” (one of those names appears as an author of one of the posts you list).

    I don’t know much about the journalism industry but it’s hard to figure out who dropped the ball.

  9. Brad

    In your Scenario:

    “the application team that finally purges the 10-year old data we’ve been bugging them about; the rogue developer that tries to copy a 300m row table to “see if he can reduce blocking”, the 700GB database that needs to be chopped down to 50GB after restoring to the DEV environment…”

    In (most) situations like this something “major” has happened to reduce the used database space so much.

    If you have a database that goes from 700 GB of data down to 50 GB of data, wouldn’t you rather create a shiny new database (sized appropriately) and just transfer the 50 GB of data there rather than delete/remove 650 GB of data and shrink (and fragment) your database?

  10. I don’t say never, ever to do a shrink – I say don’t do *autoshrink* ever and try as hard as you can to avoid doing a shrink, unless there’s an overriding reason to do it (e.g. deleting 500GB from a 1TB database and then making it read-only forever). You can do a shrink by rebuilding indexes into a new filegroup, and then dropping the old one – or if you have to do a shrink, make sure to tidy up afterwards using alter index reorganize.

    My blog post clearly says all of this – I think you maybe misread autoshrink for shrink.

    It’s just not possible to write a smart-shrink outside of the Engine. I had a design for a smart shrink for 2005 but it was canned as not being important enough. It won’t ever happen IMHO.
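    Paul’s rebuild-into-a-new-filegroup approach could be sketched roughly like this – all database, file, table, and index names here are made up for illustration:

    ```sql
    -- Sketch only: move a table's clustered index to a new filegroup,
    -- which rewrites it contiguously there instead of shrinking in place.
    ALTER DATABASE BigData ADD FILEGROUP FG_New;
    ALTER DATABASE BigData ADD FILE
        (NAME = BigData_New, FILENAME = 'D:\Data\BigData_New.ndf', SIZE = 50GB)
        TO FILEGROUP FG_New;

    -- Recreate the existing clustered index on the new filegroup.
    CREATE UNIQUE CLUSTERED INDEX PK_BigTable
        ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON)
        ON FG_New;

    -- Once every object has been moved, the old filegroup's files
    -- can be emptied (DBCC SHRINKFILE ... EMPTYFILE) and then removed.
    ```

    The payoff is that the data ends up defragmented in the new files, instead of shrunk-and-scrambled in the old ones.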


  11. I’m glad that you posted this. Being a relatively new DBA, I was unaware of what I was doing. I perform reorganize index operations on one of my production servers every week and my largest database always experiences a lot of transaction log growth. To mitigate the problem, I had to constantly shrink the log file and back up with truncate only to save disk space. This was necessary because my company didn’t have any other place to store my SQL backups, so they were being stored locally and I risked consuming all of the disk space. Luckily, we just bought a SAN and I’ve got all the disk space I need. So, I’ll let my log file grow from now on and I’ll stop truncating.

  12. As an Oracle DBA, resizing tablespaces is something that is normal to do as an administrator. And I think managing the allocated space for a database is something that should be done.

    I’m not suggesting that this should be a normal, automatic process… I would just expect there to be a better tool for resizing allocation for a data file in case it had to be managed. For example, I began to support SQL Server at a company I worked for in the past. A database was in Full recovery model but was not being backed up. The transaction log file grew to be about 100 times the size of the database. What would you do in this case, if shrinking the file is such a bad practice?

    • Pepe – the problem there isn’t the shrink, it’s NOT BACKING UP YOUR DATABASE, hahaha. First things first – start backing up the database regularly, then shrink the log file once and be done with it. You don’t need a fancypants automated tool to fix the shrinking problem there. You need to be more aware of the very basics, like making sure the databases are being backed up. Concentrating on the shrink part is the symptom – you need to fix the underlying disease.

    • Even if you are running proper daily “full backups”, when a database is in “full recovery mode,” it needs frequent transaction log backups performed. That, or switch it to Simple recovery mode.

      • Agreed. And if you’re doing regular transaction log backups, don’t shrink your transaction log file just to free up space on the drive. Your LDF files are growing because the SQL Server needs the space, and it’ll just keep regrowing the files again and incurring a nasty performance penalty each time.

        • Especially if the log growth is set to the horrible default (10%). That means each time it grows, it costs a little bit more. And all transactions have to wait for it to finish. Enter Jeopardy! theme music.

    • Agent_Pepe, nobody is saying that there aren’t anomalies and that you should never, ever, shrink any file. Your case was definitely an issue, but once the issue is corrected (e.g. start backing up the database and the log correctly, and then adjust the file to an appropriate size), the problem goes away. The main problems highlighted are:

      1. shrinking is often a knee-jerk reaction to something that is actually normally expected of the database (to grow!), and trying to shrink it to gain disk space back temporarily does more harm than good. If your data file is outgrowing the disk you’ve allocated, the solution is simple: get more disk. Obviously there are situations where it may be necessary (including those mentioned by Brad), but shrinking a data file should be an exception and not the rule.

      2. databases should NEVER be set to auto-shrink (I am not sure why that capability even still exists today; too bad Paul didn’t have more pull back in the day)… this also goes for your own home-grown jobs that automate file shrinking, not just the stock checkbox on the database properties dialog.
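      Instead of the 10% default growth mentioned above, a fixed-size increment can be set once so that every growth costs the same, predictable amount. Something along these lines, with a hypothetical database and file name:

      ```sql
      -- Replace percentage-based autogrowth with a fixed increment.
      -- Names are illustrative; check sys.database_files for the real logical name.
      ALTER DATABASE MyDB
          MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);
      ```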

  13. Wow, usually I enjoy reading your perspective on SQL, but it sounds like you need to take a breather over this.

    In an ideal world, all databases should be backed up normally and this shouldn’t happen. But haven’t you ever been pulled into a situation where a DBA that was previously responsible for a database hasn’t been doing their job? And you told me to back up the database, then shrink the log file. I thought this whole article was about how shrinking files was such a terrible idea.

    I think we agree on the idea that this should not be the solution to a problem. But this is something that has to be done in certain situations. And it doesn’t always work. That is why I think that shrinking data files in SQL Server should go much more smoothly than it currently does.

    • Again, as has been stated several times, NOBODY IS SAYING “NEVER SHRINK FILES!” – they are saying, don’t do this automatically, and only do it when you have an emergency of some kind. The SQLMag article that spurred Brent’s post in the first place suggested implementing a stored procedure that went through all of your databases and ran shrinks against the ones that his logic deemed “could use shrinking.” This is just ludicrous.

      • Understood. Then maybe the title of the article and most of the contents should be rephrased.

        I just haven’t always had luck with the Shrink utility when it HAS to be done, leading me to believe that it doesn’t work like it should. I’m not trying to sound like a jerk here… I always appreciate the material on this blog. I just think that Microsoft should make this work a little better than it does.

    • Right, I’m with Aaron. There’s nothing wrong with shrinking files once. *Once*, when you take over the database if it’s been horribly mismanaged.

      But building an article about how to automate shrinks more often and more regularly, that’s a problem, especially without any mention of the aftereffects. It’s like saying, “Here’s how to cut all the trees down in your yard as fast as possible” without talking about the basics of making sure the tree doesn’t fall on your house, or talking about when you should and shouldn’t cut trees down to begin with.

      • In fact – I would even go as far as to say that in the situation Pepe describes it is better to shrink that log file… (ONCE).. Get into a backup strategy that actually gives you the benefit of being in full recovery mode, perform your log backups, get the used size of the transaction log down.

        Then shrink the log file to the smallest size possible and then grow it ONCE to the size you believe it needs to be based on your recovery strategy, data volumes and log backup frequency.

        Doing this will help you control your Virtual Log Files (VLFs), which could have a performance impact.

        A log file that is grown in small chunks frequently or that spent its life in a shrink-grow-shrink-grow relationship with an absent minded DBA will tend to have a lot of VLFs and this may impact performance. I guess less so in more recent versions but it is still a practice I follow.

        98% of the time when I see shrink being used or suggested it is for dead wrong scenarios. All too often it is after someone says backup your log with truncate_only to clean up the log file from a Full Recovery mode database with no log backups… The right answer there is to pick a recovery model that matches your recovery needs and then do backups that match.

        Funny timing, Brent… I just got done banging my head on the desk here due to a new instance I inherited from our sysadmin, involuntary DBA…
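        The shrink-once-then-regrow sequence described above might look something like this – a one-time fix, not a scheduled job, and the names, paths, and target size are all hypothetical:

        ```sql
        -- One-time log right-sizing, after the log backup chain is in place.
        BACKUP LOG MyDB TO DISK = N'E:\Backups\MyDB_log.trn';  -- empty the active log
        DBCC SHRINKFILE (MyDB_log, 1);                         -- shrink to the minimum, once
        ALTER DATABASE MyDB
            MODIFY FILE (NAME = MyDB_log, SIZE = 8GB);         -- regrow to the planned size in one step
        ```

        Growing back in one step keeps the VLF count low instead of accumulating thousands of tiny VLFs from repeated small growths.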

  14. Pingback: How to Do BACKUP LOG WITH TRUNCATE_ONLY in SQL Server 2008 | Brent Ozar - SQL Server DBA

  15. Just bumped into this. Love it!! You forgot to mention the fragmentation that is introduced by shrinking the files. That shrink command isn’t moving extents around, it is moving pages. Try running your style of a fragmentation report before and after a file shrink and see what you get.

  16. What would webmasters do if database size is limited in hosting services? For example: 120MB before shrinking and 18MB after.

    • A-K – great question! Typically hosting companies will set the growth and max size numbers for a database so that it can’t outgrow your hosting plan limits. Your best bet is to pre-grow the data file to your full size limit and then you’ll never have to worry about growing or shrinking.
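      Pre-growing to the plan limit is a single statement; assuming the 120MB cap from the question and hypothetical names, it could look like:

      ```sql
      -- Grow the data file once to the hosting plan's limit so it never
      -- has to autogrow (or be shrunk) again. Names are illustrative.
      ALTER DATABASE MyDB
          MODIFY FILE (NAME = MyDB_data, SIZE = 120MB);
      ```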

  17. Let’s not forget, however, that shrinkdb is a useful tool despite the fact that it is often abused. I use it whenever I migrate databases from production to QA and dev when they don’t fit. I’ve also used it after partitioning to reduce the size of backups.

    Also, it seems that this whole concept of auto-growing files until you fill the disk and run out of room is strange to Oracle. The Oracle guys at work don’t understand it, since best practices there seem to be to allocate all the space you could possibly want for the file and lock it down from any autogrowth whatsoever. Obviously this would mean you wouldn’t shrink either. I’ve used this strategy often; it’s ideal for tempdb.

    • Sean – I agree with you on most of it, but I’m curious about this one:

      “I’ve also used it after partitioning to reduce the size of backups.”

      I’m curious – do you use any backup compression products like LiteSpeed? If so, they don’t back up empty space in the database, so it won’t matter whether the database is shrunk or not – the backup will still be the same size. I haven’t done a native backup of a large database in quite a while, so I’m not even sure if this is true with native backups either.

      • Brent, take a look at my response to GilaMonster’s post below. In this case we are talking about optimizing a database for network copying and restoring remotely. After some tables were ‘partitioned’ (i.e. relocated) there was space in the database left on mixed data pages. Shrinkdb does fix this and will make for a smaller backup.

        I can’t get my infrastructure guy to spring for LiteSpeed, and 2008 is still a year away for us, so no, I don’t get the benefit of streaming compression during the backups. Instead there are ugly scripts that gzip them after the backups are complete. =(

    • How does shrinking reduce the size of backups? A backup only backs up used space, free space within the data file is not included in the backups.

      • I knew this question was next. =) In my case, I had migrated some tables to a new database, replaced them with a view that points to the new db, and then ran shrinkdb. It was useful in this case to rearrange the data pages after large deletes. This turned into a sort of custom partitioning scheme on a 500 GB db that is backed up and copied to a test environment periodically. This provided a smaller db footprint, and yes, a smaller backup that was easier to copy over the network.

        You are correct in that a backup does not include the empty space at the end of the file. That is true. But after large deletes where free space is now scattered over mixed data pages, only a shrinkdb will free this space. And without it, yes, your backup is larger.

        • Mixed data pages or mixed extents?

          It’s not just space at the end of the file that will be left out, it’s unused extents anywhere in the file. Only way you’ll get space saving in a backup after a shrink is if you have lots and lots and lots of partially allocated extents (blocks of 8 pages with < 8 pages used by the table).

          Between the shrink and the index rebuild that you need to do after, you're probably spending more time doing the backup than if you just backed up normally.

  18. Sean – what do you mean by ‘mixed data pages’? Shrink does not *ever* change the contents of pages that it’s moving – except in the single case in 2005 onwards where it will compact LOB column storage space potentially. It will not reclaim space from data pages that have free space on them. Trust me on this – I wrote the DBCC code.

    Do you mean that it reclaims empty pages from extents, thus compacting the table/index? Yes, it will do this, but at the expense of causing horrible index fragmentation.

    To address another point – a backup not only does not include empty space at the end of the file, it also does not include unallocated extents spread throughout the file. An extent is only included in a backup if one or more pages in the extent are allocated.


  19. Hi Brent

    OK, I’m convinced. I’ve removed all the Shrink database statements from all my maintenance plans. And when I check the DMV, I see that the indexes are indeed horribly fragmented.

    Now I have a problem. My senior DBA says I should leave well alone – no-one is complaining, and if it ain’t broke, I shouldn’t try to fix it. Plus, unless Microsoft says that Shrink Database is a bad idea, he doesn’t believe it. Can you point me to an MSDN source to back yourself up?
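    For anyone wanting to reproduce the check described above, the DMV in question is sys.dm_db_index_physical_stats; a quick fragmentation report for the current database might look like:

    ```sql
    -- Report average fragmentation per index in the current database,
    -- worst offenders first. LIMITED mode is the cheapest scan level.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```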

  20. Pingback: Resolving Very Large MSDB –

  21. Pingback: Making the best use out of Mainenance Plans on SQL Server « Raymund Macaalay's Dev Blog

  22. Pingback: Minimize your TFS Backup size « FreeToDev

  23. I just had another vendor that I work with suggest that I really should have a sunday maintenance plan that will go out there and shrink the database.

    That being said, I am in the process of shrinking the database. However, it’s for the only valid reason I know: to reclaim space that will never be used again. I have enabled data compression on the tables and have freed up 84 GB… 120 GB down to 40 GB being used. However, now I’m wondering how long it’s going to take to finish the shrink process, currently at the 3.5-hour mark. Thank goodness this is a test system. I saw a similar post out there that took someone else 1 day, 20 hours, and some odd minutes.

  24. Pingback: SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance « Journey to SQL Authority with Pinal Dave

  25. Brent, when are you going to write the “How to Reduce Your Backup Times with the Truncate Table Command!”. I bet a lot of people will love to read that one :D

  26. I routinely have to shrink my database in order to sign-in to a specific program [i.e. Oracle Primavera P6]. So, I was scouring the internet to find ways to automate this process when I ran across your article. If I don’t shrink my database every two to three days, I literally can’t sign in to the program. Now, I’m very confused.

    • Carter – what’s the problem that happens when you can’t sign in? I’m guessing you’ve got the database in full recovery mode and you’re not doing transaction log backups, so the log file fills up and the database stops. In that case, you’d either want to start doing transaction log backups or put the database in simple mode.
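      Either fix is a one-liner. Assuming a hypothetical database name of P6DB, the two options described above look like:

      ```sql
      -- Option 1: keep full recovery, but actually back up the log regularly.
      BACKUP LOG P6DB TO DISK = N'E:\Backups\P6DB_log.trn';

      -- Option 2: if point-in-time recovery isn't needed, switch to simple
      -- recovery so the log space is reused automatically.
      ALTER DATABASE P6DB SET RECOVERY SIMPLE;
      ```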

  27. My knowledge of SQL Server is scant. However, I am working on a data warehouse where all the data will be replaced every night (long story, not relevant to this topic).

    I asked our dba if this will cause the database to bloat, like MS Access would. He suggested that it would, in which case shrinking the database would have merit.

    What is your opinion on this matter?

    • Hi, Dan. That’s a totally valid question, but the data churn rate won’t affect this issue. If you need X amount of space every night, then you want to pre-grow the data and log files to the correct size, then have them grow in appropriately sized increments, and leave ‘em there. The bigger problem for SQL Server is the repeated growth and shrink.

  28. It seems to me the SQL db structure should be re-engineered to grow and shrink itself properly and efficiently. Perhaps in another couple decades Microsoft will figure that out.

  29. Here’s a thought: if shrinking database files is such a bad idea, then Microsoft should get rid of the shrinkfile command.
    There are best practices, and then there is the real world.


    • Kim – that’s like saying since people keep sniffing glue, we should get rid of glue. Glue’s a good tool if you use it for the right things, just like shrinking files is.

  30. I respectfully disagree.
    Valid scenarios exist where it is desirable to reclaim disk space – perhaps when a database has been purged of a large amount of data, it is silly to still use all that expensive enterprise storage under an empty database file.

    • Juel – absolutely, if it’s a one-time thing like purging data once, that makes sense. But if it’s an annual purge (say a data warehouse that only keeps 5 years of data, and you delete the 6th once per year) then why shrink it every year – when you know the data will just grow back?

  31. How about databases stored on a big SAN? Fragmentation is not a real problem… am I right?

  32. OK, the reason I say AM I RIGHT is because I am not sure… but as far as I know, on a SAN data is spread all over the disks and we don’t really care where the data blocks are. Sometimes we find blocks of data files and blocks of log files on the same spindle. Brent… this is all I know about data on SANs.

  33. Hi Brent… I agree the shrinkfile is abused and inappropriately recommended.

    I am currently resizing my log files to reduce internal log fragmentation. During the process I do use shrinkfile to “purge” the fragmented VLFs and then grow the file back to an appropriate size. Would you consider this an appropriate use? If not are there other options to this sort of maintenance?

    • Nick – yep, that’s a great one-time use for shrinking. Of course, you don’t want to regularly automate this – if you find yourself doing it repeatedly, size the logs correctly from the start, and leave ‘em there. Hope that helps!
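      A quick way to see whether a log actually needs this treatment is to count its VLFs; on the SQL Server versions current at the time, that’s:

      ```sql
      -- Each row returned is one virtual log file (VLF).
      -- A log that returns thousands of rows is badly fragmented internally.
      DBCC LOGINFO ('MyDB');  -- database name is illustrative
      ```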

  34. Dear Mr Brent,
    When I shrink my database, my computer runs out of memory and the shrink errors out; after that, I shrink my DB again and it runs successfully. But then, when my application connects to this DB, it runs very slowly, and my DB grows back very fast.
    Can I rebuild the indexes on this DB?

    I have another DB where I compressed both the tables and the indexes. So, can I rebuild the indexes on that DB too? Will my DB grow too big again after I rebuild the indexes?


  35. If you are strictly talking production, I agree. However, we have roughly six non-production replicas of production and there is a shit-ton of logging data that accounts for about 80% of the size of the database. Therefore I do find shrinking a fact of life in order to conserve some space and perhaps improve the performance of those environments on lesser hardware.

  36. I completely agree; I am not a fan of auto shrink. That being said, I’m currently shrinking an entire database right now. The situation is that the database is 1.9TB, and when converting from 2005 to 2008 we are using page compression for the majority of the database. We will shrink after the database has been compressed, and then we will reindex everything that we break in the same stint. This is a situation I find shrink useful for. Overall, the biggest fight I’ve had is developer-pushed shrinks and then complaints about indexes (unique situation). This has been resolved and the healing can begin.

  37. Hello, I’m new to Sql Server and am very intrigued by all I’ve read here. I’m building an app that uses Sql Server Compact 3.5 as its db engine. Because of the 4GB size limitation, I had planned to periodically purge older data, followed by the shrink operation described here:

    Excuse my ignorance, but does everything I’ve read here (particularly the index fragmentation effect of shrinking) also apply to SqlServerCE? If so, since there is no ‘defrag’ method available for SqlServerCE, what practical options exist to manage the database size?

    • Vic – hmmm, I’ll be honest – I haven’t done any SQL CE work in almost a decade. Is there a reason you’re not using Express Edition instead?

      • Thanks for your reply, Brent.
        Our application is designed to interface with and collect data from an external USB device and may be run on mobile platforms. Because of this, SQL CE seemed appropriate, being lightweight and supporting a simple private deployment. I mentioned the 4GB DB limit, but in reality we’re looking to keep it well under 200MB by purging old data as it becomes stale.

        Since Sql CE is a file-based DB that doesn’t support clustered indexes, I’m in doubt as to whether the above discussion really applies here. At the same time, performance degradation is a top concern. Hoping maybe Paul or others could shed some light?

  38. I just had a visit from a supplier that created three user databases on my server. It’s really just small change – each db is hardly a GB – but because of wrong settings that I should have kept an eye on, but didn’t, the log blew up to almost 1 GB (only 14 MB used). Three times the size of the data file. I do have enough room on the disk for the transaction log (the disk is 3 GB), but I like to keep my databases properly sized.

    I’ll probably leave it this way, but would this be a situation where you could use shrinking? Or is it still better to setup new datafiles that are properly sized and copy the data? Btw, I never shrink databases, and I do have weekly maintenance tasks to rebuild and/or reorganize indexes for all user databases.

  39. I have an interesting situation regarding shrinking a data file. I ran into a situation where we hit the maximum value on our primary key field, which was an auto-incrementing value of the int datatype. To address the issue, I created a second table with the same basic definition, except now the primary key field is a bigint. I then issued an insert into … select command to move the data from the old table to the new table, using with tablock to avoid growing the transaction log (the database is in simple recovery mode). Once I got the data in the new table, I was able to drop the old table.

    The size of the old table was 37GB, so in the process of temporarily duplicating it, I added an extra 37GB to the data file. Once the new table was in production, I dropped the old table, but this, of course, did not release the 37GB back to the OS. While I realize the bigger mdf file doesn’t affect backups, I would like to release the 37GB back to the OS, as I don’t necessarily have an infinite amount of space on disk and I have other databases using that disk that are also growing.

    What’s the best practice here?

    • Dennis – obviously the best practice would have been to avoid the problem initially, but it’s too late for that now. ;-) When repairing something, we sometimes have to make a one time exception.
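      For a one-time exception like Dennis’s, the usual pattern is shrink once to a sensible target, then repair the fragmentation the shrink causes. The names and target size below are hypothetical:

      ```sql
      -- One-time reclaim of the space left by the dropped table.
      -- Target size is in MB; leave working headroom rather than
      -- shrinking down to zero free space.
      DBCC SHRINKFILE (MyDB_data, 60000);

      -- Shrink fragments indexes as it moves pages, so rebuild afterward.
      ALTER INDEX ALL ON dbo.NewBigTable REBUILD;
      ```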

  40. Pingback: Stop Shrinking Your Database Files. Seriously. Now. | Brent Ozar Unlimited | TechnoSwimmer | Jesus Christ | Loong | IT

  41. Pingback: Reduce the size of the SharePoint content database without shrinking it. | Sharepointkaos

  42. Pingback: Building The Redneck Treadmill Desk - RealSQLGuy

  43. Pingback: Building The Redneck Treadmill Desk - SQL Server - SQL Server - Toad World

  44. In the article covered today in a WebEx seminar, it states that shrinking the database will resolve index fragmentation.
    In this article, I believe I am getting a different message.

    Is it the version of SQL Server that makes the difference, or am I missing this altogether?

  45. Pingback: Sometimes It’s A Dirty Job – #TSQL2sDAY #50

  46. Hi Brent,

    I have a large database that has been ignored for a long time and one log table has grown to consume 95% of the space used by the datafile which is 120GB. After asking I have found that we only need to keep the data for 3 months. Removing the old data has given me almost 100GB of free space. I have implemented a process in my maintenance plan to trim this log file every week.

    Looking at my backups I see they are only 10GB (Compressed backups). Would I be correct in assuming the cleanest and quickest (least amount of downtime) course of action would be to drop the database and then restore the backup file and resize appropriately?

    • James – this is kinda beyond the scope of something I can answer quickly in a blog post comment, but no, restoring from backup doesn’t sound like a good idea. It sounds like you’ve got a few different problems in there (data file too big, log file too big, possible internal & external fragmentation, etc) and I can’t really do justice to that in a blog comment.

      You’re better off posting this in a Q&A forum, with as many specifics as you can, and the problem that you’re trying to solve. Thanks!

    • I often restore production into non-production environments for our developers, and I truncate a very large log table there as well. A shrink on the data file should do the trick just fine for you – if you always keep a rolling 3-month volume of logging data, you should not have to do this more than once, since the size of that table should stay relatively static. I would personally do the purging in smaller, more frequent chunks, however – that ensures you don’t bloat your transaction log during a very short window, as probably happens now.

      Note: if you have Enterprise, you might look into getting an estimate on compression to further reduce the size of your table (indexes) and database – but that’s a different topic and has some caveats.
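[Editor’s note: the compression estimate Allen mentions can be had without actually compressing anything – sp_estimate_data_compression_savings samples the table into tempdb and reports current vs. estimated sizes. Table name below is hypothetical.]

```sql
-- Estimate how much smaller the table's indexes would be with page compression
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'AppLog',  -- hypothetical table name
    @index_id         = NULL,       -- NULL = all indexes
    @partition_number = NULL,       -- NULL = all partitions
    @data_compression = N'PAGE';    -- or N'ROW', N'NONE'
```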

  47. “I have implemented a process in my maintenance plan to trim this log file every week.”

    By log file I mean log table. I do not shrink my log files.

  48. Ok, so I now know that I should avoid shrinking databases and files if at all possible. It’s not possible. I have a SQL server supporting a SharePoint environment and we’ve implemented BLOB Storage. Upon updating our EBS provider software we had to import over 1TB of data into SQL and then externalize it back to disk. So I am trying to free up over 1TB of disk so I can shrink my Windows disk volumes and return the disk space to our VMWare admins.
    So what is the best procedure to safeguard against the potential ills and enable a server admin to reclaim a bunch of disk space?

    • Tom – in that case, I’d check SharePoint’s best practices since that’s really more of a SharePoint question. Generally speaking, though, you can’t shrink Windows disk volumes online, so you’re talking about an ugly outage.

  49. Pingback: Tipps & Tricks: Datenbanken richtig verkleinern | Robert Panther on SQL Server

  50. Pingback: T-SQL Tuesday #52 – Arguments | Voice of the DBA

  51. Pingback: Reducing database size for your development environments |

  52. Hello Brent,

    I am supposed to restore production databases onto development servers. Those dev servers have about 30% of the disk space of their production counterparts. The only way I’ve found to restore the production databases is to purge everything older than three months and shrink the database files. That, of course, takes forever and a day. Any suggestions on how to avoid shrinkfile and still restore just three months’ worth of data? Keep in mind that there are over 10 instances and about 7 databases per instance, weighing anywhere from 50GB up to 1TB.


    • I do a lot of production restores into non-production environments too Craig – the options are rather limited. I end up purging out some unnecessary data after the restore, then shrink it down to reclaim some space. There’s really no way to reclaim the space unless you want to pipe out the data into a fresh database (minus what you would otherwise purge) but that could be quite cumbersome. I also use compression in my development environment – that gets me a TON of space back (reclaimed 1.5TB last weekend alone compressing some AX databases).

    • Craig – buy more database space. Seriously, space isn’t that expensive these days. (And if you’re using enterprise storage, then it’s time to get into SAN snapshots, which do this whole process with limited to no additional space.)

      • I’ll put you in touch with my SAN Admin, Brent – and he’ll likely invite you to a subsequent board meeting ;-) He’s doling out space ALL the time, so when I CAN, I try not to bother him. I see absolutely no reason to not remove some data after the restore and shrink to preserve space. After all, I only restore our Dev/QA/Staging environments once every few months.

  53. Hello Allen, thanks for the suggestion. It just seems odd that MSSQL requires you to restore empty space. When restoring databases in MySQL and PostgreSQL, the indexes and heaps are reorganized and empty space can be reclaimed. Perhaps the shrink would be faster on the target server after restore?

  54. I restore from backup (on the target) and do the shrink on the target, so as not to impact production. I then RDP to the target and kick off the shrink – that way my laptop doesn’t come into play. I do enjoy going home after work (with my laptop) and not watching a shrink run overnight ;-)

    My shrink on development resources takes maybe 10-12 hours and that’s a 500GB database roughly.

    It is a shame that the empty space isn’t somehow cleared out during the restore – it’s something I tried various ways to overcome, but had no such luck. Restore/shrink it is.

  55. “Craig – buy more database space. Seriously, space isn’t that expensive these days.”

    You’d think SAN space/disk was some sort of Unobtainium when discussing it with Storage/SAN Admins. ;o)

    • I agree Scott – we just got a new EMC last year and I’ve already been given 4TB of disk for my development server (quite a bit more than the 500GB I had when hired three years ago). Unforeseen acquisitions and projects have gobbled that right up (we went from three 500GB non-prod AX environments to about 12 now), so we have to do what we can as DBAs to make the most of our resources. I use compression and shrinking as my primary methods to make the most of what I have at this time – in non-production, of course. Never in production.

  56. Pingback: Database Maintenance Plans: A Primer - SQL Server - SQL Server - Toad World

  57. I would agree that in normal everyday operations, you don’t need to shrink. But not all operations are everyday or normal. A lot of houses move their data files around without checking for things like consistency. Clone machines and move databases over. The log file connected with a database may no longer be relevant to the database’s current state. The inactive area might be related to bulk transactions that are no longer relevant to that database.

    And consider: If there was NEVER a reason to do it, why is it available?

    Are we supposed to engage in that other “Don’t Ever Do It” behavior, switching recovery models?

    • Mike – you can also buy unfiltered cigarettes. They must be a good idea, right? If there was never a reason to do them, why would they be available?

      Keep shrinkin’ them databases. I hear it’s great for your T-Zone.

    • Mike – I think I understand what you are trying to say.

      To add to Brent’s great analogy, just because something is available – doesn’t mean it’s a good idea for our production databases (auto-shrink, auto-update stats off, DBCC DROPCLEANBUFFERS, etc.). These are tools we have for very particular situations. Shrinking a log file might be appropriate for a development environment, where the 80GB trans log is overkill (especially when you have dozens of such non-production environments). I also think it’s appropriate for “abnormal” situations, like a one-time data load/archival, for example (you alluded to that, so I concur). Consistency checks and/or index maintenance are by no means “abnormal”. But under no circumstance would I condone a routine shrink of my production trans logs.

      • Yes, that was the point I was trying to get across. If you’re looking for disk space, dumb move. If you’re moving the database to a new instance, anyway, or have requirements that fit the profile, only an absolutist would sneer at the options. Or somebody who’s never actually worked with the nuts and bolts of recreating environments on an everyday basis. Oh, and by the way, I too advocate for people just buying the disk space. But if you’ve ever worked in government, you’d know it’s not cheap everywhere. We get charged by the Gigabyte, both for the virtual machine and disk space allotted.

  58. Brent,

    I’ve been creating a process to go through all our databases and analyze the log structure, backup, shrink, and resize appropriately taking operational requirements and VLF sizing/amount into account. My code is working beautifully, but I encountered something in testing today I wasn’t expecting.

    I have a good-sized DB with a 104GB log and around 900 VLFs. I backed up the log and shrank it down to next to nothing during a period of low activity. I then began to grow it back out to its original size in 8GB iterations, keeping VLFs around 512MB. During regrowth, I was curious about DB availability and started running random SELECTs against different DBs. Results returned without issue. I then created a table and looped a bunch of inserts.

    I was surprised when the log continued to grow and the table creation and inserts completed without issue or delay. I was under the impression the DB would somehow be locked down, blocking DDL and DML, until regrowth completed. Of course, considering the circular nature of the log, I can see how this would be a false assumption. But I can’t find anything on the web one way or the other.

    Could you possibly shed some light or point me in the right direction on what is going on? Thanks!

    • Steve – this is beyond the scope of something I can answer fast in a comment. Sounds like a great question to ask on a Q&A site. Thanks!

    • As Brent alluded to, that’s not a small topic, but if you haven’t read it yet, Kimberly Tripp’s post on the subject is one of the key posts I’ve seen.

      I personally have not had contention issues re-sizing my trans log when I’ve implemented the aforementioned advice.

    • @Allen, I’ve read Kimberly’s blog several times, which is what sent me on my quest to make all our logs healthy and happy. I’m just trying to find out what happens during the growth process with regard to DB availability to end users. I have a question on Stack Exchange as Brent suggested, but I’m not getting answers there either, only more questions about how I can prove what I observed. While the questions may be legitimate, they don’t really help me answer my question – is a SQL Server database able to be written to during log growth?

      • Steve – you can set up an experiment to prove that really easily. Start a database with a 1mb data file and 1mb log file. Start one window growing the log by, say, 50GB. In another window, try to do inserts/updates/deletes in that same database.

        • Brent, that is exactly what I did, monitoring the log and time-stamping everything. I grew the log out in roughly 8GB chunks, and in between each growth iteration, wrote to the same test table I was doing inserts on. That way I could see the time stamps and any delays. Each 8GB chunk took roughly 20 seconds to zero out.

          What I saw was log growth did block writes…for like 46 ms! You can see the lock occur and then the writes started right back up again before the new portion of the log was initialized. That part was very interesting to see – the writes occurring in the active portion of the log while the new inactive portion was being zeroed out! Definitely not what I expected.
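[Editor’s note: a sketch of the regrow-in-chunks approach Steve describes, with hypothetical database and file names. On the SQL Server versions current when this thread was written, each log growth larger than 1GB was split into 16 VLFs, so 8GB steps yield roughly 512MB VLFs; SQL Server 2014 changed the VLF creation algorithm, so check your version.]

```sql
-- Shrink the log during a quiet period, then grow it back in 8GB steps
DBCC SHRINKFILE (N'MyDb_log', 1024);  -- target size is in MB, so ~1GB

ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_log', SIZE = 8GB);
ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_log', SIZE = 16GB);
ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_log', SIZE = 24GB);
-- ...continue in 8GB increments up to the target size

-- Check the resulting VLF layout (one row per VLF)
DBCC LOGINFO;
```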

  59. Pingback: clear SQL Server transaction log | Coding and Programing

  60. Pingback: How to: How do you clear the SQL Server transaction log? | SevenNet

  61. Pingback: Fixed How do you clear the SQL Server transaction log? #dev #it #asnwer | Good Answer

  62. Brent,

    I have an issue and have read all of the competing views on the subject and am hoping you can provide your take on the problem.

    Earlier this year I inherited a large database (500+ GB), of which 350GB are base64-encoded binary files (PDFs, images, Word docs, etc.) stored in, of all things, nvarchar(max) fields.

    The rearchitecture of the entire platform has me moving the software to Azure and I will be leveraging encrypted table and blob storage for that 350GB worth of files.

    Part of my migration strategy, once the files are moved off to Azure and all of the software is leveraging the web service to store/retrieve them, was to wipe out those columns in the 3 tables and then shrink the database, reclaiming a significant amount of disk space that will make the rest of the project much smoother (database migrations, portability, backups, etc.).

    However, I have read all of the pros and cons and am not sure of the best way to tackle this. I cannot make a clean port of this (too much downtime) if I do not get some or most of the space back, but downtime in the production environment must be kept to a minimum.

    Is there anything short of using shrink to retrieve the space and give me the flexibility I need?



    • Hi JD,

      I’m not Brent, but I was in the comments and this caught my eye.

      I actually don’t have any problem with one-time shrinks of a database if you have a lot of space to reclaim, but your situation is a little special. Dropping a column is a metadata-only activity, but reclaiming the space is a bit more complex. You can get an idea of what’s involved by reading the Books Online page for DBCC CLEANTABLE.

      In your situation it might be faster and simpler to just create new tables, do a minimally logged insert into those new tables of just the columns you want, and then do a final backup and drop the other tables. (Or insert into a new database.) I just mean this from a speed/efficiency standpoint. The whole process of DBCC CLEANTABLE, then shrinking the files, then rebuilding indexes can be painful and slow.

      Test it all out and just see which process works better for you.


      • Kendra,

        Thank you for the thorough reply. Let me start by saying that while I have spent many years designing large-scale software systems, the inner workings of true database administration are still not in my expert toolkit :).

        I have just spent some time reading up on DBCC CLEANTABLE, and the part that is still not clear to me is whether it will simply free up space that can be reused, OR actually shrink the size on disk. I suspect it will not shrink on disk, but I wanted to confirm. One of my biggest concerns is the current size of the database – not from a cost perspective, but in the difficulty of working with it, from the backups (using Red Gate today) to the multiple development environments. If I can get the database down to the size it should be, even padding it significantly, the rest of the project will be much more manageable.
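[Editor’s note: Kendra’s two options can be sketched as follows, with hypothetical database, table, and column names. To JD’s follow-up: DBCC CLEANTABLE reclaims space from dropped variable-length columns so it can be reused inside the file, but it does not shrink the file on disk – that still takes a shrink afterward.]

```sql
-- Option A: reclaim the dropped-column space in place
ALTER TABLE dbo.Documents DROP COLUMN FileData;     -- metadata-only change
DBCC CLEANTABLE ('MyDb', 'dbo.Documents', 100000);  -- work in batches of 100,000 rows
-- ...followed by DBCC SHRINKFILE and index rebuilds (can be slow)

-- Option B: minimally logged copy of only the columns you keep
INSERT INTO dbo.Documents_Slim WITH (TABLOCK) (DocId, Title, UploadedAt)
SELECT DocId, Title, UploadedAt
FROM dbo.Documents;
-- ...then drop dbo.Documents and rename dbo.Documents_Slim
```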



  63. What do you think about suggestion #8 in the following URL? It says to reduce internal log fragmentation by (1) doing a manual checkpoint if in simple recovery, (2) doing a SHRINKFILE with TRUNCATEONLY, and (3) issuing an ALTER DATABASE giving the log file a specific size.

    • Hi Brenda, That post is about doing a one time operation to resize the transaction log– they aren’t advocating doing regular shrinks of your files.

      • Sorry, I was not clear. It says that the three steps will reduce internal log fragmentation. Do you think it would reduce fragmentation?

        I am one of those shops that does a DBCC SHRINKFILE(file, 1) for the data and log every night. Looking to stop that – obviously. But I’m wondering what the next step is to then defrag the data, log, and indexes for more efficient performance moving forward.
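[Editor’s note: for reference, the three one-time steps Brenda describes look something like the sketch below – note that the resize is an ALTER DATABASE, and the database and file names are hypothetical. This is a one-time log right-sizing, not something to schedule nightly.]

```sql
-- 1. In the simple recovery model, a manual checkpoint lets the log clear
CHECKPOINT;

-- 2. Shrink the log file, releasing unused space at the end of the file
DBCC SHRINKFILE (N'MyDb_log', TRUNCATEONLY);

-- 3. Grow the log back to a deliberate size in one operation
ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_log', SIZE = 8GB);
```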

  64. OK, so this article and nearly every other article I can find say the same thing over and over: “don’t shrink, don’t shrink, don’t shrink”. The obvious next question is, why not? Everyone seems to agree that shrinking a database results in a heavily fragmented index, which is bad. So my currently unanswered question is this: is it still abhorrent to shrink a database if you follow the shrink with a defrag of the index? It is fairly trivial to write a SQL job to defrag an index, as far as I’ve seen anyway. So instead of telling everyone that shrinking a database is the absolute worst possible action you could ever take in SQL Server, wouldn’t it be more informative to say that yes, shrinking is bad for indexes, but if you defrag the index after the shrink then you avoid the world-on-fire scenario which results from shrinking without defragmenting?

    • BH – let’s take a step back and ask, “What are you doing that requires the shrink/defrag activities?”

    • To add to Brent’s question – if you are shrinking after doing maintenance every time, then why shrink to begin with, only to see it grow back to where it was? Just right-size it to support your operations (including maintenance – keeping VLF’s in mind if you so desire) and leave it be.

  65. One operation that I had to do that requires a shrink was removing an ndf file from a database. When our dev environment was created, every user database had 1 mdf and 7 ndfs. A decision was made to remove one ndf file so that the remaining 7 database files could be spread across 7 disk volumes. Removing the now-extra database file requires a shrink to empty it first.
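[Editor’s note: the file-removal case described above is one of the few legitimate uses of shrink. The EMPTYFILE option migrates every page out of a data file into the other files in the same filegroup, after which the file can be dropped. Database and file names below are hypothetical.]

```sql
USE MyDb;
GO
-- Move every page out of the extra ndf into the remaining files
DBCC SHRINKFILE (N'MyDb_Data8', EMPTYFILE);
GO
-- Once empty, the file can be removed from the database
ALTER DATABASE MyDb REMOVE FILE MyDb_Data8;
```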

  66. I think the misconception is that the shrink operation is sooo bad IT SHOULD NEVER BE DONE. Otherwise, to quote Dr. Venkman, “Human sacrifice, dogs and cats living together… mass hysteria!” Ok, not really, just understand what you are doing and why. There will be the exceptions. But shrinking is the result of something else not set up correctly – don’t just treat the symptoms by coding around them.

  67. I need to perform a massive delete of rows on a table, and I need to reclaim that free space. What’s the best way to do it: shrink the database or shrink file(s)? I have one .mdf and three .ndf files.

    • If you must shrink, shrink file. It gives you more granular control.

    • Be careful how many rows you delete in one shot. I had a developer run a plain DELETE FROM on a huge table not long ago (versus TRUNCATE – doh!). That resulted in a 150GB transaction log, and since the process was blocking and would not complete, I had to kill his SPID. Then what? Rollback time… and the rollback would not budge – it was stuck at 0%. So I had to restart the entire instance, which got the rollback rolling – no pun intended. Four hours later the database finally recovered.

      So, when I hear “delete massive rows in table” – it makes me nervous. Be sure to do them in batches.

      • What’s the best way to do it? Would you mind writing a script here?

        • Dan, you want to identify something unique in the record and then delete a reasonable range – 1 to 10,000 rows or something similar. You can create a loop or use a – gasp! – cursor to iterate through each batch. The trick is to commit each batch so the log can be reused.

          Of course, if you’re going to be deleting the majority or all of the records, consider using TRUNCATE. That will reseed your IDENTITY to its starting point (if you have one), so be careful of that.

          As for a script…you should be able to figure it out based on the info here.
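[Editor’s note: a minimal sketch of the batched-delete pattern Allen describes, with hypothetical table and column names. Each batch commits as its own transaction, so the log can be truncated between batches once it has been backed up – or automatically, in the simple recovery model.]

```sql
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    -- Delete a modest batch; TOP keeps each transaction (and its log use) small
    DELETE TOP (10000)
    FROM dbo.EventLog
    WHERE LoggedAt < DATEADD(MONTH, -3, SYSDATETIME());

    SET @rows = @@ROWCOUNT;  -- loop ends when nothing is left to delete
END;
```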

  68. So if you have a 50GB log file that’s using < 1% of the space, created because someone ran a huge transaction that has since committed, and the logs have been backed up – you are saying that you should not shrink that file? I respectfully disagree.

    I would say the same thing if a large one-time delete was done and a database file now has a huge amount of unused space and you know that database is never going to grow back to the original size again.

    In both cases that space might be needed by another file in the future, and it won’t be available.

    Am I wrong?

    • Chuck – I think anomalies lend themselves to special cases where you want to first identify the root cause, mitigate it as best you can, and right-size your log based on historical metrics. I’m pretty sure we all do this under such conditions, but I can only speak for myself. When right-sizing, keep in mind the growth rates and be cognizant of your VLF structure/count within the transaction log. Kimberly Tripp has some great articles on how to best shrink and subsequently grow your log out to a good size (Google it).

      As for one-time deletes, when you need to do so, you should do it in iterations so your transaction log doesn’t bloat – not to mention such operations in a single statement are not kind when it comes to concurrency.

    • Chuck – it’s impossible to say whether you are right or wrong; I don’t know your business, your infrastructure, or your resource availability. But so long as you can guarantee no one will run a similar transaction in the future, or that the space created in the data file due to a purge won’t be needed in the next 6 months, then go ahead.

      Again, the point of this article is to address the automated or regular shrinking of files. It creates a lot of unnecessary maintenance overhead that is not great for your databases. If you really need that 50GB back, by all means reclaim it. Just be aware of what you are doing, why you are doing it, and how you are preventing the need to shrink again.
