Stop Shrinking Your Database Files. Seriously. Now.

I had sworn to myself that if I saw one more helpful article about how to shrink your SQL Server database files with DBCC SHRINKFILE or how to back up your log with TRUNCATE_ONLY, I was going to write a rant about it.

Epic Advice Fail

SQL Server Magazine just tweeted about their latest article, a reader-submitted solution on how to shrink your database files with ease.

AAAAAAAAAAARGH.

To make matters worse, this particular article is by a Microsoft DBA and doesn’t include a single word about the problems involved with shrinking your database files.

AAAAAAAAAAARGH.

Don’t shrink your database files just to free up drive space.  Stop.  It’s an unbelievably, disgustingly, repulsively bad idea.  Your disk drive space is for files, not for ornamentation.  You don’t get bonused based on the amount of free space on your drives.  Empty files don’t take longer to back up.  And so help me, if you find yourself shrinking databases so often that you have to automate it, you need to cut up your DBA card and reconsider your choice of career.
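If you're tempted anyway, at least measure how much free space actually exists inside each file before touching anything. A minimal sketch, run in the database in question (size and SpaceUsed are reported in 8KB pages, so divide by 128 to get MB):

    SELECT name,
           size / 128.0 AS size_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
    FROM sys.database_files;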

I’m not going to reinvent the wheel by telling you why.  Instead, I’m going to point to half a dozen posts explaining why this advice is just a flat out epic fail:

I feel bad going nuclear on this article, but I’m not just venting about the author.  This kind of advice shouldn’t clear any kind of SQL Server editorial team either.

Coming next month: “How to Reduce Your Backup Times with the Truncate Table Command!”

If you still think you want to shrink a database, check out how to shrink a database in 4 easy steps.



300 Comments

  • You forgot this one, where I complained about the exact same thing: articles that show people how to shrink their files but conveniently forget to tell them why it’s bad and why it should be reserved for emergencies only:

    Oh, the horror! Please stop telling people they should shrink their log files!

    http://sqlblog.com/blogs/aaron_bertrand/archive/2009/07/27/oh-the-horror-please-stop-telling-people-they-should-shrink-their-log-files.aspx

    • There is nothing wrong with a shrink provided you know what it does and mitigate the effects. If you shrink, rebuild the indexes afterwards. Fragmentation problem solved. This isn’t rocket science…
      People who say don’t shrink because it fragments the table are really making something out of nothing. It’s not a big deal.

      • Zack – so if you shrink the database and remove all free space, and then you rebuild the indexes afterwards, do you understand that that re-grows the data file right back out to make space for the newly created indexes?
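        If you want to see the effect Brent describes, here’s a rough sketch of the cycle; the database, file, and table names are hypothetical. The shrink squeezes out the free space, the rebuild needs free workspace roughly the size of the index, and the file grows right back out – now with extra I/O, log activity, and fragmentation to show for it.

            USE YourDatabase;
            DBCC SHRINKFILE (N'YourDatabase_Data', 1);     -- target size in MB; stops at the smallest size the data allows
            ALTER INDEX ALL ON dbo.YourBigTable REBUILD;   -- needs workspace in the same filegroup, so the file regrows
            SELECT name, size / 128.0 AS size_mb
            FROM sys.database_files;                       -- back to roughly where you started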

        • I agree with Zack. Say you’ve got a file that’s incredibly oversized and has 100GB of free space. You shrink it and rebuild indexes, which causes it to grow by another 5GB. Who cares? I just reclaimed 95GB and possibly prevented a wake-up call in the middle of the night due to low free space on the disk.

          • Chuck – I’d suggest looking at the root cause of why the file was oversized. Perhaps there’s a staging process, for example, that regularly loads and transforms large amounts of data. Temporarily freeing up the space won’t fix it – and in fact, can make things worse if you think the space problem has been fixed when it’s not.

          • If you know your application, and know that the file will never grow anywhere near that large again, it doesn’t make any difference why it was that large in the first place. The point is it’s oversized, wasting space, and that’s a bad thing.

          • Chuck – if you knew your application and controlled the database, then it wouldn’t get 100GB too large in the first place. ZING.

          • In a perfect world every data file would consume every spare % of disk and usage would be monitored completely by DBAs, but this is rarely the case, at least in my experience. Management wants failsafes: they want that 20% free, they want the infrastructure guys to watch it, and obviously they want the DBAs to capacity plan up to that point.

            I spent the first 3 years of my DBA career with this “never ever shrink” philosophy, and if I hadn’t had it drilled into my head so much I would perhaps have had a rethink – I think it’s a little too much of a catch-all phrase amongst the DBA community. My two cents.

          • Agree completely Darrell. I think this is one of those rules that is a good rule of thumb, but has exceptions.

          • I am seeing discussion of the risks of a SHRINKDATABASE, and I see recent comments. Have there been developments in SS2014 that mitigate the risks of a shrink? I have a database that is being split in two pieces, and the original is 500GB. I will have to prune some old pieces out, and move a lot to another. It seems to me that is a legitimate reason for a shrink.

          • I meant, physical disks, so you CAN’T expand them – my bad.

          • Thanks Brent … but I didn’t get why Zack and Chuck’s suggestion of shrinking then rebuilding is wrong – what’s wrong with that?
            And continuing on Chuck’s example, isn’t it better to get back 95GB by shrinking and then use 5GB for the rebuild, than to lose a whole 100GB forever?
            If SQL Server is going to write to those 100GB at any time in the future, then I can understand you, but if it’s not, would you please explain more of the reason for your disagreement … thanks again 🙂

        • How do you reduce the transaction log file size without shrinking it in SQL Server?

          • Allan – why do you need to reduce the transaction log size? How big is the database, how big is the log file, and what’s the size of the largest table in the database?

          • The database size is 250GB and the Log file is 60GB. The largest table size is 8GB.

          • Allan – I’d leave it. It’s not unusual for a log file to be 20-25% the size of the database.

          • But I don’t have much disk space left because of that. I need the space in that disk. I have a disk space issue on that particular server and there is not much I can do about that. I need to free at least 30GB in that specific drive. Any thoughts?

          • Get more disk space. Disk space is not that expensive, and if you have even just one out-of-control transaction on that server, you’ll fill up that 60GB anyway.

          • I think you have bigger fish to fry if you are tight on disk. I’m assuming these are physical disks, so you can expand them? If the trans log grew to that size, it will likely do so again. Perhaps it did so due to maintenance, etc.

            As an aside, being 60GB you might be well suited to check the VLF count to see how fragmented the trans log is. That can have all sorts of adverse impacts (due to the default growth rates, which suck). I have had 60GB trans logs with over 1,000 VLFs; after properly adjusting them (8GB VLFs) and reducing their size to, say, 32GB/40GB – they seemed to stabilize. Google Kimberly Tripp’s article on the topic of VLFs.
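            For reference, a quick way to count VLFs – DBCC LOGINFO is undocumented but widely used, and newer versions (2016 SP2 / 2017 and later) expose the supported sys.dm_db_log_info DMV:

                DBCC LOGINFO;                                -- one row per VLF in the current database

                SELECT COUNT(*) AS vlf_count                 -- supported alternative on newer versions
                FROM sys.dm_db_log_info(DB_ID());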

          • Brent : “Allan – I’d leave it. It’s not unusual for a log file to be 20-25% the size of the database.”

            And if the log file is 110% of the database?

          • Then you officially suck as a database administrator. Turn in your DBA card, and report to the nearest authorities.

            No, seriously, that’s the time where you fix the root cause first, and then shrink the log file once – and exactly once.
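            Once the root cause is fixed, the one-time cleanup itself is short. A hedged sketch – database, file, and path names are hypothetical, and the final size depends on your own workload:

                BACKUP LOG YourDatabase TO DISK = N'D:\Backups\YourDatabase_log.trn';  -- clear the active log first
                USE YourDatabase;
                DBCC SHRINKFILE (N'YourDatabase_Log', 1024);   -- one-time shrink; target size in MB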

      • Steve Mangiameli
        June 1, 2015 10:25 am

        Zack, the point of this entire post is that database shrinkage should be the exception – the rare exception – and not something you are doing frequently or automating. If you are shrinking database files frequently, then either you have bigger problems than your shrink operations, or your applications truly need the space and you need to get the bean counters to release a few dollars so you can grow out your disk accordingly. You are literally wasting time cleaning up the symptoms rather than tackling the problem.

        It would be like constantly bailing out a toilet tank because it’s always overflowing, completely ignoring the fact that the fill valve is shot and all you need to do is just fix it or get a bigger tank because you actually need the extra water.

        You have bigger, more important things to do than reclaim space.

        • Simon Holzman
          August 2, 2015 3:01 am

          So, I have a scenario in which I need to find a way to automate SHRINKFILE, or at least to automate the effect.

          I have a 1.2TB database occupying roughly 1.5TB of disk space on Production across 8 Data and 8 Index files. I also have three non-Production databases: Training, Development and Test, which reside together on a separate DB server.

          We copy from Production to all three other databases roughly once every two months to limit drift and allow us to test upgrades. The Test database is left functionally unchanged but the other two databases have a subset of their data “anonymized” and have most of the rest deleted using an automated script I wrote that copies the data to be kept to a temporary table, truncates the original, and then copies the data back again.

          The problem is that we have limited disk space on that server (roughly enough for 2.5 copies of Production) and we cannot get more.

          Therefore, we restore the Test and Training databases and then I run my anonymizer script against Training. Finally, I have to shrink the 8 data files and 8 index files assigned to the Training database so that there is space for the Development database. The index files are each going from 70GB to 12GB and take roughly 30 minutes and the data files are each going from 148GB to 24GB (roughly 70-80% filled) and take something over 6 hours each (I have no idea why it takes so much longer). This is sucking up my weekends and I am not even trying to fix any fragmentation caused since these are very underused normally.

          Is there a way to automate this? Perhaps moving the data and indices to new data and index files that are the right size in the first place.

          I had thought that doing a backup and restoring the backup would work but our hosting service says that the backup is storing the physical layout of the data and will recreate that verbatim.

          However, I found a bug a little while ago when a sub-query was pulling a single record but the Production and Test versions were pulling different records. I worked out that the Test version had to be storing the data in a different physical order on the disk from Production. (FWIW, I fixed the bug by building an index on the column being returned, since the query plan automatically used the index rather than the heap and so this imposed a consistent sort order – we had no control over the application code and so I couldn’t just add an ORDER BY clause.)

          This seems to suggest that the data and/or indexes may actually be restored into logical order, although presumably allocating the full specified size. If this were true, the DBCC SHRINKFILE(xxx, 12000, TRUNCATEONLY) command would reduce the allocated size, probably in under 5 minutes.
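          For what it’s worth, TRUNCATEONLY only releases space beyond the last allocated extent and doesn’t move any pages, so it is fast and doesn’t fragment anything; the trade-off is that it may reclaim little or nothing if allocated pages sit near the end of the file. A sketch with a hypothetical logical file name:

              USE Training;
              DBCC SHRINKFILE (N'Training_Data1', TRUNCATEONLY);  -- releases only unused space at the end of the file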

          Unfortunately, I cannot test this at the moment and, if this will not resolve the problem, I’ll need to work out how to add another filegroup, move the remaining data from the old one to the new one, and then remove the original filegroup, which will hopefully be faster than the current SHRINKFILE I am running.

          Any thoughts (other than tearing up my DBA card)?

          • Simon – if you can’t test things, you’ve got a bigger problem. You need space for development and testing. It’s time to have an honest talk with management about the space and capacity you need in order to do your job.

          • Simon Holzman
            August 3, 2015 6:49 pm

            I couldn’t test the backup/restore when I wrote it because I was waiting for the first (of the 8) Data Files to shrink from 148GB down to 24GB… it finally finished a couple of hours later and I tried doing the backup/restore.

            Sure enough, a nice small backup file – but, sadly, the restore still had the full disk space allocated, and the next SHRINK took just as long as the previous one.

            I think the only choice is to copy just the data that I want to keep from Production (or, more likely, the Report Server, which is a straight copy of Production each day and is very underused) to a secondary database on the same Server and then I will be able to back that up and restore it as I like. Even on Production, doing this with copious (nolock) hints outside production hours should be fast enough to let me get some sleep on weekends !

            Thanks,

          • Production databases should have large tempdbs across multiple drives, and usually have extra room for accordion tables, and logs which are never pruned.
            So your first copy of production should be restored on a server with enough disk space so you can truncate, shrink, remove the extra files from your tables, tempdb and logs.
            Then make new backups for your lower tiers which need a lot less space and will run much faster on restore…

          • NOTE: you cannot regain your disk space without doing at least one of the following:
            1. DBCC SHRINKFILE (N’log file name’, EMPTYFILE);
            2. ALTER DATABASE [database name] REMOVE FILE [log file name];
            3. Perform LOG backup to reset your pointers

      • EXACTLY! In my case, while developing I changed a process that uploads images to our database. In doing so I removed the line of code that compressed the image. Images went from a couple hundred KB each to several MB each. This caused, over about 2 months, the database to BALLOON in size. I wrote a procedure to compress the images, which left the database with more than 60% free space, and the drive on the server nearly full. We’re talking about a process that took up, in 2 months, as much data as the database had in over 3 years. Why WOULDN’T I shrink it to get that space back, when it’s going to take years for it to grow back to the size the file reached after my screwup?

        In all my research on this, all these people are screaming that it should NEVER be done, when that’s a complete steaming pile of BS.

        The RIGHT WAY to educate people about something like this, is NOT to tell them not to do it, but to tell them WHY they shouldn’t, and how to make it right if they do.

        This article makes me more angry than the article author is.

    • Don’t shoot me. I have deleted 1000s of tables, unused indexes and about 80% of the data in the remaining tables. Therefore I am going to shrink the database down before I archive it to tape and cloud.

    • I just updated a database server from SQL 2008 SP3 CU something to CU17. The reason: the CU we were on had a bug where ghost records weren’t being deleted. Now, after the update, I have 170 GB of freed-up space in the database. Is that a good reason to shrink the database?

    • Michael Spurlock
      January 11, 2017 11:17 am

      So if you delete or redact massive amounts of data from a column in your lower environments, you’re just supposed to be happy that the inefficient SQL Server design continues to claim that disk space?

      When is Microsoft going to address this dog-ass design?

      “Drive space is cheap” does NOT address budgetary issues or government regulations.

      • If by “lower” you mean Dev, QA, Stage, etc. – those are exceptions. I have Debug and Learning environments as well as others, and after truncating useless data after a restore from production – shrinking is just fine in those cases.

      • No – you’re not supposed to be happy about it – no one is happy about it – but you ARE able to shrink your database files – no one is actively stopping you. However, the point of this article and other ones telling you why it is BAD is in good faith – it does pose problems! I’m with you though – I shrink database files because it works for me in SOME VERY specific situations – but in the majority of situations it doesn’t matter/make sense/etc., and there are better ways to address the issues. If you speak to the exact situation, I’m sure someone would either agree with you and tell you to shrink OR tell you a better way to attack the situation. I came to the company I’m at where a product used to be managed by the vendor, so to speak. Once I became the DBA, I investigated this product because it had a table with tens of millions of rows and had started slowing things down – well, it was a log table with useless data kept for historical purposes. I backed it up, and truncated the whole thing. Now I have a database that is HUGE in size, but the actual data sitting in it is like 20GB. So I shrunk the whole thing down to 50GB, and I have a job that truncates that data on a monthly basis, so I never have to shrink again. I’m not as experienced as many people here, but in my situation this worked for me. ONE time, and never again.

        • Same here Tim – I had a production phone system database (I3) that had a bunch of data from a previous upgrade, which changed the SiteID and rendered the old data useless. To top it off, the data purge job in the program only handled the current SiteID’s data, leaving a ton of – again – useless data in the database. We run many of our instances on SSDs and they aren’t cheap, so I did a one-time reclaim of the space after cleaning things up.

          Situations dictate.

      • That’s my point: when you inherit a FULL mess with no way to get more storage, you have no choice. If it hurts performance, too bad. The choice is stay running and shrink, or don’t shrink and run out of space. Assuming that there is always going to be space is not realistic in all cases.

  • Hey it could be worse. The article could be telling people how to free up space by deleting the transaction log.

  • Sigh. That’s all I can say. Time to make my new moniker a reality, you all know my ‘involuntary DBA’ – now I give you ‘incompetent DBA’. Sigh.

  • Unbelievable that editorial control wasn’t exercised there.

  • These articles make me shrink in terror! Maybe in the next edition Microsoft can deprecate the shrinkfile capability.

  • Dude – I couldn’t even deprecate auto-shrink, and I used to own the Storage Engine before I left MS. Don’t hold your breath for that one.

  • The real problem is not the *shrink itself*, but HOW the SQL shrink command works (move the very last single page to the very first available open slot, repeat), causing table and index fragmentation.

    I still think someone needs to write a “smart shrink” utility that attempts to free space from the end of a SQL data file while *keeping it in the proper page order*, thus avoiding fragmentation (as much as possible). Not sure if a third-party utility could have access to that level of internal data file detail, or if it would have to come from the MS SQL team.

    Sure, still might have some potential for mis-use, or over-use, but at least if we do run into that occasional unexpected disk space crunch, we’d have an option that doesn’t obliterate our performance. (I know, proper planning, blah blah blah. Sometimes stuff happens.)

    • Brad – no, that doesn’t completely solve the problem either. When you shrink a file, it’s usually because the database NEEDED that space at some point. When the database needs it again, you incur a serious performance penalty while the OS reallocates the disk space. Instant File Initialization mitigates that for data files when it’s set up correctly, but not for log files. Even with IFI, the file’s newly grown portion is rarely contiguous with the rest of the file. Presto, fragmentation. Combine this with SQL Server’s default growth settings of 1MB for data files, and you’ve got a fragmentation disaster.

      Since there’s no way to fix fragmented data files while the database is online (only with backup/restore or disk defrag utilities while SQL is offline, and those options still suck), the DBA shot himself in the foot.

      • Point taken, Brent, and I’m aware of all those issues.

        Still doesn’t mean that we don’t encounter this need all the time: the application team that finally purges the 10-year old data we’ve been bugging them about; the rogue developer that tries to copy a 300m row table to “see if he can reduce blocking”, the 700GB database that needs to be chopped down to 50GB after restoring to the DEV environment, the pager going off at 2:30 in the morning because there is only 9.9MB free on your data drive; the analysis server that alternates huge jobs between two huge databases, but with only enough “working space” to grow one at a time; the “archive only” database that has 250GB of internal free space, and the business unit that gets charged by the GB for storage, etc…

        None of these are “good”, certainly, but the fact is that these are daily realities, at least in my environment. If we had a “better” shrink utility, we *at least* wouldn’t have to deal with the shrink-increases-fragmentation issue, although the other items you mention are still something to remain aware of.

        I guess I’m really advocating (mostly) the same thing you are: DBA knowledge and awareness of the problems/issues with shrinking, and a careful and selective application of it only when needed. Absolutely, the article you link does *not* contain the level of awareness and warning that should be included.

        As much as I respect Paul R, his “never, ever, ever shrink data files, no matter what, ever” philosophy, while laudable, is simply not practical. What are you going to do, tell your paying client, “Sorry, I know you are running out of space for your database BigData, but I can’t run a shrink on BigData_Archive to keep your server up and running. The drive will fill up at 2pm, and your website will be down. Thank you for your business, your invoice is in the mail.”

        • Brad – again, if you’re doing daily shrinks, I would argue that you need to address the problem, not the symptom.

          • Sure, I’ll agree to that. How about, “Use data shrinks only in highly exceptional circumstances, only after considering other better options, and only with full awareness of the problems it can cause.”

          • Brent,
            It might be just me, but I do find the tone of your statements along the lines of ‘you need to address the problem’ and ‘you shouldn’t be a DBA’ to be a bit condescending.
            You are probably right (as the DBA DEMIGOD you obviously are), but not everyone is at the same stage of DBAdom as you are. Not everyone has the luxury of being able to ‘just add more disk space’.
            To add more disk space to our production server would currently cost circa £50k and take a month. Yes, we are moving to a more cost-effective platform. Yes, our current hosting company is terrible.

            We can’t fix these things overnight. Shrinking the file is the only way, short of deleting data that is actually required, that we can be sure the database isn’t going to go blam.

            BradC put it quite well.

            Perhaps you can suggest an alternative for those that can’t add more disk space and can’t delete data? What would you do? Maybe even a quick explanation for those that need help?

          • Yumster – if you can’t delete data, shrinking the database doesn’t fix the problem. Shrinking the database only helps when there’s free space. If you’re adding data and not allowed to delete it, there won’t be free space.

  • on editorial control… I just took a look at the masthead and of about 50 names, I count 26 names alongside titles that include some form of “editor” (one of those names appears as an author of one of the posts you list).

    I don’t know much about the journalism industry but it’s hard to figure out who dropped the ball.

  • Brad

    In your Scenario:

    “the application team that finally purges the 10-year old data we’ve been bugging them about; the rogue developer that tries to copy a 300m row table to “see if he can reduce blocking”, the 700GB database that needs to be chopped down to 50GB after restoring to the DEV environment…”

    In (most) situations like this something “major” has happened to reduce the used database space so much.

    If you have a database that goes from 700 GB of data down to 50 GB of data, wouldn’t you rather create a shiny new database (sized appropriately) and just transfer the 50 GB of data there, rather than delete/remove 650 GB of data and shrink (and fragment) your database?

  • I don’t say never, ever to do a shrink – I say don’t do *autoshrink* ever and try as hard as you can to avoid doing a shrink, unless there’s an overriding reason to do it (e.g. deleting 500GB from a 1TB database and then making it read-only forever). You can do a shrink by rebuilding indexes into a new filegroup, and then dropping the old one – or if you have to do a shrink, make sure to tidy up afterwards using alter index reorganize.
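    A rough sketch of the rebuild-into-a-new-filegroup approach; every name and size below is hypothetical:

        ALTER DATABASE YourDatabase ADD FILEGROUP FG_New;
        ALTER DATABASE YourDatabase ADD FILE
            (NAME = N'YourDatabase_Data2', FILENAME = N'D:\Data\YourDatabase_Data2.ndf', SIZE = 50GB)
            TO FILEGROUP FG_New;

        -- Assumes dbo.YourBigTable already has a clustered index named CIX_YourBigTable;
        -- rebuilding it onto the new filegroup moves the table's data with it.
        CREATE UNIQUE CLUSTERED INDEX CIX_YourBigTable
            ON dbo.YourBigTable (Id)
            WITH (DROP_EXISTING = ON)
            ON FG_New;
        -- Repeat for each index, then shrink or remove the files in the old filegroup.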

    My blog post at http://www.sqlskills.com/BLOGS/PAUL/post/Why-you-should-not-shrink-your-data-files.aspx clearly says all of this – I think you maybe misread autoshrink for shrink.

    It’s just not possible to write a smart-shrink outside of the Engine. I had a design for a smart shrink for 2005 but it was canned as not being important enough. It won’t ever happen IMHO.

    Thanks

  • I’m glad that you posted this. Being a relatively new DBA, I was unaware of what I was doing. I perform reorganize index operations on one of my production servers every week and my largest database always experiences a lot of transaction log growth. To mitigate the problem, I had to constantly shrink the log file and back up with truncate only to save disk space. This was necessary because my company didn’t have any other place to store my SQL backups, so they were being stored locally and I risked consuming all of the disk space. Luckily, we just bought a SAN and I’ve got all the disk space I need. So, I’ll let my log file grow from now on and I’ll stop truncating.

  • As an Oracle DBA, resizing tablespaces is something that is normal to do as an administrator. And I think managing the allocated space for a database is something that should be done.

    I’m not suggesting that this should be a normal, automatic process… I would just expect there to be a better tool for resizing allocation for a data file in case it had to be managed. For example, I began to support SQL Server at a company I worked for in the past. A database was in Full recovery model but was not being backed up. The transaction log file grew to be about 100 times the size of the database. What would you do in this case, if shrinking the file is such a bad practice?

    • Pepe – the problem there isn’t the shrink, it’s NOT BACKING UP YOUR DATABASE, hahaha. First things first – start backing up the database regularly, then shrink the log file once and be done with it. You don’t need a fancypants automated tool to fix the shrinking problem there. You need to be more aware of the very basics, like making sure the databases are being backed up. Concentrating on the shrink part is the symptom – you need to fix the underlying disease.

    • Even if you are running proper daily “full backups”, when a database is in “full recovery mode,” it needs frequent transaction log backups performed. That, or switch it to Simple recovery mode.
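      Either option is a one-liner; both below are sketches with a hypothetical database name and backup path:

          -- Option 1: stay in FULL recovery and take frequent log backups (typically every few minutes).
          BACKUP LOG YourDatabase TO DISK = N'D:\Backups\YourDatabase_log.trn';

          -- Option 2: if point-in-time recovery isn't required, let the log reuse its space on its own.
          ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;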

      • Agreed. And if you’re doing regular transaction log backups, don’t shrink your transaction log file just to free up space on the drive. Your LDF files are growing because the SQL Server needs the space, and it’ll just keep regrowing the files again and incurring a nasty performance penalty each time.

        • Especially if the log growth is set to the horrible default (10%). That means each time it grows, it costs a little bit more. And all transactions have to wait for it to finish. Enter Jeopardy! theme music.
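          Switching to a fixed growth increment is a single statement; the database, file name, and 512MB figure are only illustrative:

              ALTER DATABASE YourDatabase
                  MODIFY FILE (NAME = N'YourDatabase_Log', FILEGROWTH = 512MB);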

    • Agent_Pepe, nobody is saying that there aren’t anomalies and that you should never, ever, shrink any file. Your case was definitely an issue, but once the issue is corrected (e.g. start backing up the database and the log correctly, and then adjust the file to an appropriate size), the problem goes away. The main problems highlighted are:

      1. shrinking is often a knee-jerk reaction to something that is actually normally expected of the database (to grow!), and trying to shrink it to gain disk space back temporarily does more harm than good. If your data file is outgrowing the disk you’ve allocated, the solution is simple: get more disk. Obviously there are situations where it may be necessary (including those mentioned by Brad), but shrinking a data file should be an exception and not the rule.

      2. databases should NEVER be set to auto-shrink (I am not sure why that capability even still exists today; too bad Paul didn’t have more pull back in the day)… this also goes for your own home-grown jobs that automate file shrinking, not just the stock checkbox on the database properties dialog.
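      Checking for and disabling auto-shrink is quick; the database name below is hypothetical:

          SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;   -- find the offenders
          ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;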

  • Wow, usually I enjoy reading your perspective on SQL, but it sounds like you need to take a breather over this.

    In an ideal world, all databases should be backed up normally and this shouldn’t happen. But haven’t you ever been pulled into a situation where a DBA that was previously responsible for a database hasn’t been doing their job? And you told me to back up the database, then shrink the log file. I thought this whole article was about how shrinking files was such a terrible idea.

    I think we agree on the idea that this should not be the solution to a problem. But this is something that has to be done in certain situations. And it doesn’t always work. That is why I think that shrinking data files in SQL Server should go much more smoothly than it currently does.

    • Again, as has been stated several times, NOBODY IS SAYING “NEVER SHRINK FILES!” – they are saying, don’t do this automatically, and only do it when you have an emergency of some kind. The SQLMag article that spurred Brent’s post in the first place suggested implementing a stored procedure that went through all of your databases and ran shrinks against the ones that his logic deemed “could use shrinking.” This is just ludicrous.

      • Understood. Then maybe the title of the article and most of the contents should be rephrased.

        I just haven’t always had luck with the Shrink utility when it HAS to be done, leading me to believe that it doesn’t work like it should. I’m not trying to sound like a jerk here… I always appreciate the material on this blog. I just think that Microsoft should make this work a little better than it does.

    • Right, I’m with Aaron. There’s nothing wrong with shrinking files once. *Once*, when you take over the database if it’s been horribly mismanaged.

      But building an article about how to automate shrinks more often and more regularly, that’s a problem, especially without any mention of the aftereffects. It’s like saying, “Here’s how to cut all the trees down in your yard as fast as possible” without talking about the basics of making sure the tree doesn’t fall on your house, or talking about when you should and shouldn’t cut trees down to begin with.

      • In fact – I would even go as far as to say that in the situation Pepe describes, it is better to shrink that log file… (ONCE). Get into a backup strategy that actually gives you the benefit of being in full recovery mode, perform your log backups, and get the used size of the transaction log down.

        Then shrink the log file to the smallest size possible and then grow it ONCE to the size you believe it needs to be based on your recovery strategy, data volumes and log backup frequency.

        Doing this will help you control your Virtual Log Files which could have a performance impact.

        A log file that is grown in small chunks frequently, or that spent its life in a shrink-grow-shrink-grow relationship with an absent-minded DBA, will tend to have a lot of VLFs, and this may impact performance. I guess less so in more recent versions, but it is still a practice I follow.
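        Growing the log back out in fixed steps, rather than in one giant jump or via percentage autogrowth, keeps each batch of new VLFs reasonably sized (on older versions, each growth over 1GB produces 16 VLFs, so 8GB steps yield roughly 512MB VLFs). A sketch with hypothetical names and sizes:

            ALTER DATABASE YourDatabase MODIFY FILE (NAME = N'YourDatabase_Log', SIZE = 8192MB);
            ALTER DATABASE YourDatabase MODIFY FILE (NAME = N'YourDatabase_Log', SIZE = 16384MB);
            ALTER DATABASE YourDatabase MODIFY FILE (NAME = N'YourDatabase_Log', SIZE = 24576MB);  -- repeat until you reach the target size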

        98% of the time when I see shrink being used or suggested, it is for dead wrong scenarios. All too often it is after someone says to back up your log with truncate_only to clean up the log file on a Full Recovery mode database with no log backups… The right answer there is to pick a recovery model that matches your recovery needs and then do backups that match.

        Funny timing, Brent… I just got done banging my head on the desk here due to a new instance I inherited from our sysadmin, involuntary DBA…

      • I can’t help but reply to this old article and comment.
        You said there’s nothing wrong with shrinking the file once, and you gave an example about people posting articles saying, “Here’s how to cut all the trees down fast” without talking about the basics. Please take a look at your article title: “Stop Shrinking Your Database Files. Seriously. Now.” It’s like you are saying “Stop cutting your trees. Seriously. Now.” You have to put yourself in others’ shoes before you advise something. You don’t know why they need to cut down their trees. What you can do is advise on the basics, the effects, and better solutions before you tell them to stop what they are doing. After reading through your article and comments, it sounds like you are in a rage. Please, there are better ways to help others.
        Thank you for sharing.

  • Jeff Weisbecker
    September 25, 2009 11:09 pm

    Just bumped into this. Love it!! You forgot to mention the fragmentation that is introduced by shrinking the files. That shrink command isn’t moving extents around, it is moving pages. Try running your style of a fragmentation report before and after a file shrink and see what you get.

  • What should webmasters do if database size is limited by hosting services? For example: 120MB before shrinking and 18MB after.

    • A-K – great question! Typically hosting companies will set the growth and max size numbers for a database so that it can’t outgrow your hosting plan limits. Your best bet is to pre-grow the data file to your full size limit and then you’ll never have to worry about growing or shrinking.
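      A sketch of pre-growing a data file to a hosting plan’s limit and capping it there (the names and the 120MB figure are only illustrative):

          ALTER DATABASE YourDatabase MODIFY FILE (NAME = N'YourDatabase_Data', SIZE = 120MB);
          ALTER DATABASE YourDatabase MODIFY FILE (NAME = N'YourDatabase_Data', MAXSIZE = 120MB);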

  • Let’s not forget, however, that shrinkdb is a useful tool despite the fact that it is often abused. I use it whenever I migrate databases from production to QA and dev; otherwise they don’t fit. I’ve also used it after partitioning to reduce the size of backups.

    Also, it seems that this whole concept of auto-growing files until you fill the disk and run out of room is strange to Oracle. The Oracle guys at work don’t understand it, since best practice there seems to be to allocate all the space that you possibly want for the file and lock it down from any autogrowth whatsoever. Obviously this would mean you wouldn’t shrink either. I’ve used this strategy often; it’s ideal for tempdb.

    • Sean – I agree with you on most of it, but I’m curious about this one:

      “I’ve also used it after partitioning to reduce the size of backups.”

      I’m curious – do you use any backup compression products like LiteSpeed? If so, they don’t back up empty space in the database, so it won’t matter whether the database is shrunk or not – the backup will still be the same size. I haven’t done a native backup of a large database in quite a while, so I’m not even sure if this is true with native backups either.

      • Brent, take a look at my response to GilaMonster’s post below. In this case we are talking about optimizing a database for network copying and restoring remotely. After some tables were ‘partitioned’ (i.e. relocated) there was space in the database left on mixed data pages. Shrinkdb does fix this and will make for a smaller backup.

        I can’t get my infrastructure guy to spring for LiteSpeed and 2008 is still a year away for us, so no, I don’t get the benefit of streaming compression during the backups. Instead there are ugly scripts that gzip them after the backups are complete. =(

    • How does shrinking reduce the size of backups? A backup only backs up used space, free space within the data file is not included in the backups.

      • I knew this question was next. =) In my case, I had migrated some tables to a new database, replaced them with a view that points to the new db, and then ran shrinkdb. It was useful in this case to rearrange the data pages after large deletes. This turned into a sort of custom partitioning scheme on a 500 GB db that is backed up and copied to a test environment periodically. This provided a smaller db footprint, and yes, a smaller backup that was easier to copy over the network.

        You are correct in that a backup does not include the empty space at the end of the file. That is true. But after large deletes where free space is now scattered over mixed data pages, only a shrinkdb will free this space. And without it, yes, your backup is larger.

        • Mixed data pages or mixed extents?

          It’s not just space at the end of the file that will be left out, it’s unused extents anywhere in the file. Only way you’ll get space saving in a backup after a shrink is if you have lots and lots and lots of partially allocated extents (blocks of 8 pages with < 8 pages used by the table).

          Between the shrink and the index rebuild that you need to do after, you're probably spending more time doing the backup than if you just backed up normally.

  • Sean – what do you mean by ‘mixed data pages’? Shrink does not *ever* change the contents of pages that it’s moving – except in the single case in 2005 onwards where it will compact LOB column storage space potentially. It will not reclaim space from data pages that have free space on them. Trust me on this – I wrote the DBCC code.

    Do you mean that it reclaims empty pages from extents, thus compacting the table/index? Yes, it will do this, but at the expense of causing horrible index fragmentation.

    To address another point – a backup not only does not include empty space at the end of the file, it also does not include unallocated extents spread throughout the file. An extent is only included in a backup if one or more pages in the extent are allocated.

    Thanks

  • Hi Brent

    OK, I’m convinced. I’ve removed all the Shrink database statements from all my maintenance plans. And when I check the DMV, I see that the indexes are indeed horribly fragmented.

    Now I have a problem. My senior DBA says I should leave well alone – no-one is complaining and if it ain’t broke, I shouldn’t try to fix it. Plus unless Microsoft says that Shrink Database is a bad idea, he doesn’t believe it. Can you point me to a MSDN source to back yourself up?

  • Carlton B Ramsey
    November 3, 2010 12:39 pm

    I just had another vendor that I work with suggest that I really should have a Sunday maintenance plan that will go out there and shrink the database.

    That being said, I am in the process of shrinking the database. However, it’s for the only valid reason I know: to reclaim space that will never be used again. I have enabled data compression on the tables and have freed up 84 GB… 120 GB down to 40 GB being used. However, now I’m wondering how long it’s going to take to finish the shrink process, currently at the 3.5-hour mark. Thank goodness this is a test system. I saw a similar post out there that took someone else 1 day, 20 hours, and some odd minutes.

  • Brent, when are you going to write the “How to Reduce Your Backup Times with the Truncate Table Command!”. I bet a lot of people will love to read that one 😀

  • I routinely have to shrink my database in order to sign-in to a specific program [i.e. Oracle Primavera P6]. So, I was scouring the internet to find ways to automate this process when I ran across your article. If I don’t shrink my database every two to three days, I literally can’t sign in to the program. Now, I’m very confused.

    • Carter – what’s the problem that happens when you can’t sign in? I’m guessing you’ve got the database in full recovery mode and you’re not doing transaction log backups, so the log file fills up and the database stops. In that case, you’d either want to start doing transaction log backups or put the database in simple mode.

  • My knowledge of SQL Server is scant. However, I am working on a data warehouse where all the data will be replaced every night (long story, not relevant to this topic).

    I asked our dba if this will cause the database to bloat, like MS Access would. He suggested that it would, in which case shrinking the database would have merit.

    What is your opinion on this matter?

    • Hi, Dan. That’s a totally valid question, but the data churn rate won’t affect this issue. If you need X amount of space every night, then you want to pre-grow the data and log files to the correct size, then have them grow in appropriately sized increments, and leave ’em there. The bigger problem for SQL Server is the repeated growth and shrink.

  • Paul Berberich
    June 20, 2012 11:37 am

    It seems to me the SQL db structure should be re-engineered to grow and shrink itself properly and efficiently. Perhaps in another couple of decades Microsoft will figure that out.

  • Here’s a thought: if shrinking database files is such a bad idea, then Microsoft should get rid of the shrinkfile command.
    There are best practices, and then there is the real world.

    KPB

    • Kim – that’s like saying since people keep sniffing glue, we should get rid of glue. Glue’s a good tool if you use it for the right things, just like shrinking files is.

  • Juel von Wasgenstein
    August 9, 2012 7:31 am

    I respectfully disagree.
    Valid scenarios exist where it is desirable to reclaim disk space. When a database has been purged of a large amount of data, it is silly to still use all that expensive enterprise storage under an empty database file.

    • Juel – absolutely, if it’s a one-time thing like purging data once, that makes sense. But if it’s an annual purge (say a data warehouse that only keeps 5 years of data, and you delete the 6th once per year) then why shrink it every year – when you know the data will just grow back?

  • How about databases stored on a big SAN? Fragmentation is not a real problem… am I right?

  • OK, the reason I say AM I RIGHT is because I am not sure… but… as far as I know, on a SAN the data is spread all over the disks and we don’t really care where the data blocks are. Sometimes we find blocks of data files and blocks of log files on the same spindle. Brent… this is all I know about data on SANs.

  • Hi Brent… I agree the shrinkfile is abused and inappropriately recommended.

    I am currently resizing my log files to reduce internal log fragmentation. During the process I do use shrinkfile to “purge” the fragmented VLFs and then grow the file back to an appropriate size. Would you consider this an appropriate use? If not, are there other options for this sort of maintenance?

    • Nick – yep, that’s a great one-time use for shrinking. Of course, you don’t want to regularly automate this – if you find yourself doing it repeatedly, size the logs correctly from the start, and leave ’em there. Hope that helps!

  • Dear Mr Brent,
    When I shrink my database, my computer runs out of memory and the SQL shrink errors out; after that, I run the shrink again and it succeeds. But afterwards, when my application connects to this DB, it runs too slowly, and my DB grows very fast.
    Can I rebuild the indexes on this DB?

    I have another DB where I compress both the tables and the indexes. Can I rebuild the indexes on that DB too? Will it grow too big again after I rebuild the indexes?

    Thanks

  • Allen McGuire
    November 8, 2012 2:25 pm

    If you are strictly talking production, I agree. However, we have roughly six non-production replicas of production and there is a shit-ton of logging data that accounts for about 80% of the size of the database. Therefore I do find shrinking a fact of life in order to conserve some space and perhaps improve the performance of those environments on lesser hardware.

  • I completely agree – I am not a fan of auto-shrink. That being said, I’m shrinking an entire database right now. The situation is that the database is 1.9TB, and as part of converting from 2005 to 2008 we are using page compression for the majority of the database. We will shrink after the database has been compressed, and then we will reindex everything that we break in the same stint. This is a situation I find shrink useful for. Overall, the biggest fight I’ve had is developer-pushed shrinks and then complaints about indexes (unique situation). This has been resolved and the healing can begin.

  • Hello, I’m new to Sql Server and am very intrigued by all I’ve read here. I’m building an app that uses Sql Server Compact 3.5 as its db engine. Because of the 4GB size limitation, I had planned to periodically purge older data, followed by the shrink operation described here: http://msdn.microsoft.com/en-us/library/ms171752(v=sql.100).aspx.

    Excuse my ignorance, but does everything I’ve read here (particularly the index fragmentation effect of shrinking) also apply to SqlServerCE? If so, since there is no ‘defrag’ method available for SqlServerCE, what practical options exist to manage the database size?

    • Vic – hmmm, I’ll be honest – I haven’t done any SQL CE work in almost a decade. Is there a reason you’re not using Express Edition instead?

      • Thanks for your reply, Brent.
        Our application is designed to interface with and collect data from an external USB device and may be run on mobile platforms. Because of this, SQL CE seemed appropriate, being lightweight and supporting a simple private deployment. I mentioned the 4GB DB limit, but in reality we’re looking to keep it well under 200MB by purging old data as it becomes stale.

        Since SQL CE is a file-based DB that doesn’t support clustered indexes, I’m in doubt as to whether the above discussion really applies here. At the same time, performance degradation is a top concern. Hoping maybe Paul or others could shed some light?

  • I just had a visit from a supplier that created three user databases on my server. It’s really just small change – each db is hardly a GB – but because of wrong settings (that I should have kept an eye on, but didn’t) the log blew up to almost 1 GB (only 14 MB used). Three times the size of the data file. I do have enough room on the disk for the transaction log (the disk is 3 GB), but I like to keep my databases properly sized.

    I’ll probably leave it this way, but would this be a situation where you could use shrinking? Or is it still better to setup new datafiles that are properly sized and copy the data? Btw, I never shrink databases, and I do have weekly maintenance tasks to rebuild and/or reorganize indexes for all user databases.

  • I have an interesting situation regarding shrinking a data file. I ran into a situation where we hit the maximum value on our primary key field, which was an auto-incrementing value of the int datatype. To address the issue, I created a second table with the same basic definition except now the primary key field is a bigint. I then issued an insert into … select command to move the data from the old table to the new table, using with tablock to avoid growing the transaction log (the database is in simple recovery mode). Once I got the data in the new table, I was able to drop the old table. The size of the old table was 37GB. So, in the process of temporarily duplicating the old table, I added an extra 37GB to the data file. Once the new table was in production, I dropped the old table, but this, of course, did not release the 37GB back to the OS. While I realize the bigger mdf file doesn’t affect backups, I would like to release the 37GB back to the OS, as I don’t necessarily have an infinite amount of space on disk and I have other databases using that disk that are also growing.

    What’s the best practice here?

    • Dennis – obviously the best practice would have been to avoid the problem initially, but it’s too late for that now. 😉 When repairing something, we sometimes have to make a one time exception.

  • Karl von mecklenburg
    December 31, 2013 4:38 pm

    In this article it states that shrinking the database will resolve index fragmentation, which was covered today in a WebEx seminar:
    http://rusanu.com/2010/03/31/the-bizzaro-guide-to-sql-server-performance/
    In this article I believe I am getting a different message.

    Is it the version of SQL Server that makes the difference, or am I missing this altogether?

  • James Anderson
    January 21, 2014 6:16 am

    Hi Brent,

    I have a large database that has been ignored for a long time and one log table has grown to consume 95% of the space used by the datafile which is 120GB. After asking I have found that we only need to keep the data for 3 months. Removing the old data has given me almost 100GB of free space. I have implemented a process in my maintenance plan to trim this log file every week.

    Looking at my backups I see they are only 10GB (Compressed backups). Would I be correct in assuming the cleanest and quickest (least amount of downtime) course of action would be to drop the database and then restore the backup file and resize appropriately?

    • James – this is kinda beyond the scope of something I can answer quickly in a blog post comment, but no, restoring from backup doesn’t sound like a good idea. It sounds like you’ve got a few different problems in there (data file too big, log file too big, possible internal & external fragmentation, etc) and I can’t really do justice to that in a blog comment.

      You’re better off posting this in a Q&A forum like http://dba.stackexchange.com or http://sqlservercentral.com with as many specifics as you can, and the problem that you’re trying to solve. Thanks!

    • I often restore production into non-production environments for our developers, upon which I truncate a very large log table as well. A shrink on the data file should do the trick just fine for you – if you always have a rolling 3-month volume of logging data, you should not have to do this more than the first time, since the size of that table should be relatively static. I would personally do the purging in smaller, more frequent chunks, however – that will ensure you don’t bloat your transaction log during a very short window as probably happens now.
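      A common pattern for those smaller, more frequent purges is a batched delete; the table, column, and batch size below are hypothetical:

          DECLARE @rows INT = 1;
          WHILE @rows > 0
          BEGIN
              DELETE TOP (10000) FROM dbo.AppLog            -- small batches keep each transaction, and the log, manageable
              WHERE LoggedAt < DATEADD(MONTH, -3, GETDATE());
              SET @rows = @@ROWCOUNT;
          END;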

      Note: if you have Enterprise, you might look into getting an estimate on compression to further reduce the size of your table (indexes) and database – but that’s a different topic and has some caveats.

  • James Anderson
    January 21, 2014 6:17 am

    “I have implemented a process in my maintenance plan to trim this log file every week.”

    By log file I mean log table. I do not shrink my log files.

  • Ok, so I now know that I should avoid shrinking databases and files if at all possible. It’s not possible. I have a SQL server supporting a SharePoint environment and we’ve implemented BLOB Storage. Upon updating our EBS provider software we had to import over 1TB of data into SQL and then externalize it back to disk. So I am trying to free up over 1TB of disk so I can shrink my Windows disk volumes and return the disk space to our VMWare admins.
    So what is the best procedure to safeguard against the potential ills and enable a server admin to reclaim a bunch of disk space?

    • Tom – in that case, I’d check SharePoint’s best practices since that’s really more of a SharePoint question. Generally speaking, though, you can’t shrink Windows disk volumes online, so you’re talking about an ugly outage.

  • Hello Brent,

    I am supposed to restore production databases onto development servers. Those dev servers have about 30% of the disk space of their counterparts in production. The only way I’ve found to restore the production databases is to purge everything older than three months and shrink the database files. That of course takes forever and a day. Any suggestions on how to avoid shrinkfile and still restore just three months’ worth of data? Keep in mind that there are over 10 instances and about 7 databases per instance, weighing anywhere from 50GB up to 1TB.

    Sincerely,
    Craig

    • I do a lot of production restores into non-production environments too Craig – the options are rather limited. I end up purging out some unnecessary data after the restore, then shrink it down to reclaim some space. There’s really no way to reclaim the space unless you want to pipe out the data into a fresh database (minus what you would otherwise purge) but that could be quite cumbersome. I also use compression in my development environment – that gets me a TON of space back (reclaimed 1.5TB last weekend alone compressing some AX databases).

    • Craig – buy more database space. Seriously, space isn’t that expensive these days. (And if you’re using enterprise storage, then it’s time to get into SAN snapshots, which do this whole process with limited to no additional space.)

      • I’ll put you in touch with my SAN Admin, Brent – and he’ll likely invite you to a subsequent board meeting 😉 He’s doling out space ALL the time, so when I CAN, I try not to bother him. I see absolutely no reason to not remove some data after the restore and shrink to preserve space. After all, I only restore our Dev/QA/Staging environments once every few months.

  • Hello Allen, thanks for the suggestion. It just seems odd that MSSQL requires you to restore empty space. When restoring databases in MySQL and PostgreSQL, the indexes and heaps are reorganized and empty space can be reclaimed. Perhaps the shrink would be faster on the target server after restore?

  • I restore from backup (on the target) and do the shrink on the target, so as not to impact production. I then RDP to the target and kick off the shrink – that way my laptop doesn’t come into play. I do enjoy going home after work (with my laptop) and not watching a shrink run overnight 😉

    My shrink on development resources takes maybe 10-12 hours and that’s a 500GB database roughly.

    It is a shame that the empty space isn’t somehow cleared out during the restore – it’s something I tried various ways to overcome, but had no such luck. Restore/shrink it is.

  • “Craig – buy more database space. Seriously, space isn’t that expensive these days.”

    You’d think SAN space/disk was some sort of Unobtainium when discussing it with Storage/SAN Admins. ;o)

    Reply
      I agree Scott – we just got a new EMC last year and I’ve already been given 4TB of disk for my development server (quite a bit more than the 500GB I had when hired three years ago). Unforeseen acquisitions and projects have gobbled that right up (we went from three 500GB non-prod AX environments to about 12 now), so we have to do what we can as DBAs to make the most of our resources. I use compression and shrinking as my primary method to make the most of what I have at this time – in non-production, of course. Never in production.

      Reply
  • I would agree that in normal everyday operations, you don’t need to shrink. But not all operations are everyday or normal. A lot of houses move their data files around without checking for things like consistency. Clone machines and move databases over. The log file connected with a database may no longer be relevant to the database’s current state. The inactive area might be related to bulk transactions that are no longer relevant to that database.

    And consider: If there was NEVER a reason to do it, why is it available?

    Are we supposed to engage in that other “Don’t Ever Do It” behavior, switching recovery modes?

    Reply
    • Mike – you can also buy unfiltered cigarettes. They must be a good idea, right? If there was never a reason to do them, why would they be available?

      Keep shrinkin’ them databases. I hear it’s great for your T-Zone.

      Reply
    • Mike – I think I understand what you are trying to say.

      To add to Brent’s great analogy, just because something is available – doesn’t mean it’s a good idea for our production databases (auto-shrink, auto-update stats off, DBCC DROPCLEANBUFFERS, etc.). These are tools we have for very particular situations. Shrinking a log file might be appropriate for a development environment, where the 80GB trans log is overkill (especially when you have dozens of such non-production environments). I also think it’s appropriate for “abnormal” situations, like a one-time data load/archival, for example (you alluded to that, so I concur). Consistency checks and/or index maintenance are by no means “abnormal”. But under no circumstance would I condone a routine shrink of my production trans logs.

      Reply
      • Yes, that was the point I was trying to get across. If you’re looking for disk space, dumb move. If you’re moving the database to a new instance, anyway, or have requirements that fit the profile, only an absolutist would sneer at the options. Or somebody who’s never actually worked with the nuts and bolts of recreating environments on an everyday basis. Oh, and by the way, I too advocate for people just buying the disk space. But if you’ve ever worked in government, you’d know it’s not cheap everywhere. We get charged by the Gigabyte, both for the virtual machine and disk space allotted.

        Reply
  • Brent,

    I’ve been creating a process to go through all our databases and analyze the log structure, backup, shrink, and resize appropriately taking operational requirements and VLF sizing/amount into account. My code is working beautifully, but I encountered something in testing today I wasn’t expecting.

    I have a good sized DB with a 104GB log and around 900 VLFs. I backed up the log and shrank down to next to nothing during a period of low activity. I then began to grow it back out to its original size in 8GB iterations, keeping VLFs around 512MB. During regrowth, I was curious about DB availability and started running random SELECTS against different DBs. Results returned without issue. I then created a table and looped a bunch of inserts.

    I was surprised when the log continued to grow and the table creation and inserts completed without issue or delay. I was under the impression the DB would somehow be locked down, blocking DDL and DML, until regrowth completed. Of course, considering the circular nature of the log, I can see how this would be a false assumption. But I can’t find anything on the web one way or the other.

    Could you possibly shed some light or point me in the right direction on what is going on? Thanks!

    Reply
    • Steve – this is beyond the scope of something I can answer fast in a comment. Sounds like a great question to ask on http://dba.stackexchange.com. Thanks!

      Reply
      As Brent alluded to, that’s not a small topic, but if you haven’t yet, Kimberly Tripp’s post is one of the key posts I’ve seen on the subject.

      http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/

      I personally have not had contention issues re-sizing my trans log when I’ve implemented the aforementioned advice.

      Reply
    • @Allen, I’ve read Kimberly’s blog several times, which is what sent me on my quest to make all our logs healthy and happy. I’m just trying to find out what happens during the growth process with regards to DB availability to end users. I have a question on Stack Exchange as Brent suggested but I’m not getting answers there either, only more questions about how I can prove what I observed. While the questions may be legitimate, they don’t really help me answer my question – is a SQL Server database able to be written to during log growth?

      Reply
      • Steve – you can set up an experiment to prove that really easily. Start a database with a 1mb data file and 1mb log file. Start one window growing the log by, say, 50GB. In another window, try to do inserts/updates/deletes in that same database.
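
        For illustration, a minimal sketch of that experiment (database name, paths, and sizes here are hypothetical, and the initial sizes are padded above the model database minimums; run the two batches in separate SSMS windows):

        -- Window 1: tiny database, then kick off a large log growth
        CREATE DATABASE GrowthTest
            ON PRIMARY (NAME = GrowthTest_data, FILENAME = 'C:\Temp\GrowthTest.mdf', SIZE = 100MB)
            LOG ON (NAME = GrowthTest_log, FILENAME = 'C:\Temp\GrowthTest.ldf', SIZE = 100MB);
        GO
        ALTER DATABASE GrowthTest
            MODIFY FILE (NAME = GrowthTest_log, SIZE = 51200MB);  -- grow the log to ~50GB (zero-initialized, so it takes a while)
        GO

        -- Window 2: while the growth runs, see whether writes are blocked
        USE GrowthTest;
        CREATE TABLE dbo.Ping (i INT IDENTITY PRIMARY KEY, dt DATETIME2 DEFAULT SYSDATETIME());
        GO
        INSERT dbo.Ping DEFAULT VALUES;
        GO 1000   -- GO <n> repeats the batch in SSMS; check the dt timestamps for gaps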

        Reply
        • Brent, that is exactly what I did, monitoring the log and time-stamping everything. I grew the log out in roughly 8GB chunks and in between each growth iteration, wrote to the same test table I was doing inserts on. That way I could see the time stamps and any delays. Each 8GB chunk took roughly 20 seconds to zero out.

          What I saw was log growth did block writes…for like 46 ms! You can see the lock occur and then the writes started right back up again before the new portion of the log was initialized. That part was very interesting to see – the writes occurring in the active portion of the log while the new inactive portion was being zeroed out! Definitely not what I expected.

          Reply
  • Brent,

    I have an issue and have read all of the competing views on the subject and am hoping you can provide your take on the problem.

    Earlier this year I inherited a large database (500+ GB) of which 350GB are base64 encoded binary files (pdf’s, images, word docs, etc…) stored in of all things nvarchar(max) fields.

    The rearchitecture of the entire platform has me moving the software to Azure and I will be leveraging encrypted table and blob storage for that 350GB worth of files.

    Part of my migration strategy – after the files are moved off to Azure and all of the software is leveraging the web service to store/retrieve them – was to wipe out those columns in the 3 tables and then shrink the database, reclaiming a significant amount of disk space that will make the rest of the project much smoother (database migrations, portability, backups, etc…)

    However, I have read all of the pros and cons and am not sure of the best way to tackle this. I cannot make a clean port of this (too much downtime) if I do not get some or most of the space back, but downtime in the production environment must be kept to a minimum.

    Is there anything short of using shrink to retrieve the space and give me the flexibility I need?

    Thanks,

    JD

    Reply
    • Kendra Little
      January 2, 2015 2:57 pm

      Hi JD,

      I’m not Brent, but I was in the comments and this caught my eye.

      I actually don’t have any problem with one-time shrinks of a database if you have a lot of space to reclaim, but your situation is a little special. Dropping a column is a metadata only activity, but reclaiming the space is a bit more complex. You can get an idea of what’s involved by reading the Books Online Page for DBCC CLEANTABLE: http://msdn.microsoft.com/en-us/library/ms174418.aspx

      In your situation it might be faster and simpler to just create new tables and do a minimally logged insert into those new tables of just the columns you want, and then do a final backup and drop the other tables. (Or insert into a new database.) I just mean this from a speed/efficiency reason. The whole process of DBCC CLEANTABLE, then shrinking the files, then rebuilding indexes can just be painful and slow.

      Test it all out and just see which process works better for you.

      Kendra
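
      For illustration, a rough sketch of the new-table route Kendra describes above (table and column names are hypothetical; minimal logging for SELECT INTO also depends on the recovery model):

      -- Assumes SIMPLE or BULK_LOGGED recovery; copy everything except the dropped blob columns
      SELECT DocumentId, DocumentName, CreatedDate
      INTO   dbo.Documents_slim
      FROM   dbo.Documents;
      GO
      -- Recreate indexes/constraints on dbo.Documents_slim, verify, take a backup, then swap:
      EXEC sp_rename 'dbo.Documents', 'Documents_old';
      EXEC sp_rename 'dbo.Documents_slim', 'Documents';
      -- The in-place alternative is DBCC CLEANTABLE ('MyDB', 'dbo.Documents', 100000);
      -- followed by a file shrink and index rebuilds, which is the slower path described above.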

      Reply
      • Kendra,

        Thank you for the thorough reply. Let me start by saying that while I have spent many years designing large scale software systems, the inner workings of true database administration are still not in my expert toolkit :).

        I have just spent some time reading up on DBCC CLEANTABLE and the part that is still not clear to me is whether it will simply free up space that can be reused OR actually shrink the size on disk. I suspect not, but I wanted to confirm. One of my biggest concerns is the current size of the database – not from a cost perspective, but in the difficulty of working with it, from the backups (using RedGate today) to the multiple development environments. If I can get the database down to the size it should be, even padding it significantly, the rest of the project will be much more manageable.

        Thanks,

        JD

        Reply
  • What do you think about suggestion #8 in the following URL? It says to reduce internal log fragmentation: (1) do a manual checkpoint if in Simple recovery, (2) do a ShrinkFile TRUNCATEONLY, (3) ALTER DATABASE giving the log file a specific size.

    http://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/
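
    In case it helps, a minimal sketch of those three steps as a one-time operation (database and file names are hypothetical; shown with a small shrink target, since TRUNCATEONLY is documented for data files, and the final size should be whatever the workload actually needs):

    USE MyDB;
    CHECKPOINT;                                           -- (1) in SIMPLE recovery, lets the active log clear
    DBCC SHRINKFILE (N'MyDB_log', 1);                     -- (2) shrink the log as small as it will go
    ALTER DATABASE MyDB
        MODIFY FILE (NAME = N'MyDB_log', SIZE = 8192MB);  -- (3) grow it back out to a deliberate size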

    Reply
    • Kendra Little
      January 21, 2015 1:36 pm

      Hi Brenda, That post is about doing a one time operation to resize the transaction log– they aren’t advocating doing regular shrinks of your files.

      Reply
      • Sorry, I was not clear. It says that the three steps will reduce internal log fragmentation. Do you think it would reduce fragmentation?

        I am one of those shops that does a DBCC ShrinkFile(file, 1) for the data and log every night. Looking to stop that – obviously. But wondering what is the next step to then defrag the data,log,indexes for more efficient performance moving forward.

        Reply
  • OK, so this article and nearly every other article I can find says the same thing over and over: “don’t shrink, don’t shrink, don’t shrink”. The obvious next question is, why not? Everyone seems to agree that shrinking a database results in a heavily fragmented index, which is bad. So my currently unanswered question is this: is it still abhorrent to shrink a database if you follow the shrink with a defrag of the index? It is fairly trivial to write a SQL job to defrag an index, as far as I’ve seen anyway. So instead of telling everyone that shrinking a database is the absolute worst possible action you could ever take in SQL, wouldn’t it be more informative to say that yes, shrinking is bad for indexes, but if you defrag the index after the shrink then you avoid the world-on-fire scenario which results from shrinking without defragmenting?

    Reply
    • BH – let’s take a step back and ask, “What are you doing that requires the shrink/defrag activities?”

      Reply
    • To add to Brent’s question – if you are shrinking after doing maintenance every time, then why shrink to begin with, only to see it grow back to where it was? Just right-size it to support your operations (including maintenance – keeping VLF’s in mind if you so desire) and leave it be.

      Reply
  • One operation that I had to do that requires a shrink was removing an ndf file from a database. When our dev environment was created, every user database had 1 mdf and 7 ndfs. A decision was made to remove one ndf file so that each of the remaining 7 database files could be spread across 7 disk volumes. Removing the “now extra” database file requires a shrink to empty the file.

    http://dba.stackexchange.com/questions/91719/dbcc-shrinkfile-emptyfile-not-emptying
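
    For reference, the sequence looks roughly like this (file and database names are hypothetical):

    USE MyDB;
    DBCC SHRINKFILE (N'MyDB_data8', EMPTYFILE);  -- migrate this file's pages to the other files in the filegroup
    ALTER DATABASE MyDB REMOVE FILE MyDB_data8;  -- only possible once the file is empty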

    Reply
  • Steve Mangiameli
    February 19, 2015 8:17 am

    I think the misconception is that the shrink operation is sooo bad IT SHOULD NEVER BE DONE. Otherwise, to quote Dr. Venkman, “Human sacrifice, dogs and cats living together… mass hysteria!” Ok, not really, just understand what you are doing and why. There will be the exceptions. But shrinking is the result of something else not set up correctly – don’t just treat the symptoms by coding around them.

    Reply
  • I need to perform massive delete of rows on a table. I need to reclaim those free space. what’s the best way to do? Shrink Database or Shrink file (s)? I have one .mdf and three .ndf files.

    Reply
    • Kendra Little
      March 2, 2015 11:10 am

      If you must shrink, shrink file. It gives you more granular control.
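
      For example (hypothetical names), targeting just the one oversized file rather than every file in the database:

      DBCC SHRINKFILE (N'MyDB_data3', 51200);  -- shrink one data file toward a 50GB target (size in MB)
      -- versus DBCC SHRINKDATABASE (MyDB, 10), which works on every file at once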

      Reply
    • Be careful about how many rows you delete in one shot. I had a developer “DELETE * FROM ” not long ago (versus TRUNCATE – doh!). That resulted in a 150GB transaction log and since the process was blocking/would not complete, I had to kill his SPID. Then what? Rollback time… and the rollback would not budge – it was stuck at 0%. So I had to restart the entire instance, which got the rollback rolling – no pun intended. Four hours later the database finally recovered.

      So, when I hear “delete massive rows in table” – it makes me nervous. Be sure to do them in batches.

      Reply
      • what’s the best way to do it? would you mind writing a script here?

        Reply
        • Steve Mangiameli
          March 2, 2015 1:16 pm

          Dan, you want to identify something unique in the record and then delete a respectable range – 1–10,000 rows or something similar. You can create a loop or use a – gasp! – cursor to iterate through each batch. The trick is to commit each batch so the log can be reused.

          Of course, if you’re going to be deleting the majority or all of the records, consider using TRUNCATE. That will reseed your IDENTITY to its starting point (if you have one), so be careful of that.

          As for a script…you should be able to figure it out based on the info here.
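
          For illustration, one way such a batched delete might look – a sketch only, with a hypothetical table, column, and batch size:

          DECLARE @rows INT = 1;
          WHILE @rows > 0
          BEGIN
              DELETE TOP (10000) FROM dbo.AuditLog
              WHERE  LogDate < DATEADD(MONTH, -3, GETDATE());

              SET @rows = @@ROWCOUNT;  -- stop once nothing is left to delete
              CHECKPOINT;              -- SIMPLE recovery: lets the log space be reused between batches
              -- In FULL recovery, rely on scheduled log backups instead of CHECKPOINT.
          END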

          Reply
  • So if you have a 50g log file that’s using < 1% of the space, created because someone ran a huge transaction that has since committed, and the logs have been backed up – you are saying that you should not shrink that file? I respectfully disagree.

    I would say the same thing if a large one-time delete was done and a database file now has a huge amount of unused space and you know that database is never going to grow back to the original size again.

    In both cases that space might be needed by another file in the future, and it won’t be available.

    Am I wrong?

    Reply
    • Chuck – I think anomalies lend to special cases where you want to first identify the root cause, mitigate it best you can, and right-size your log based on historical metrics. I’m pretty sure we all do this under such conditions, but I can only speak for myself. When right-sizing, keep in mind the growth rates and be cognizant of your VLF structure/count within the transaction log. Kimberly Tripp has some great articles on how to best shrink and subsequently grow your log out to a good size (Google it).

      As for one-time deletes, when you need to do so, you should do it in iterations so your transaction log doesn’t bloat – not to mention such operations in a single statement are not kind when it comes to concurrency.

      Reply
    • Steve Mangiameli
      March 4, 2015 9:56 am

      Chuck, it’s impossible to say whether you are right or wrong – I don’t know your business, your infrastructure, or your resource availability. But so long as you can guarantee no one will run a similar transaction in the future, or that the space created in the data file due to a purge won’t be needed in the next 6 months, then go ahead.

      Again, the point of this article is to address the automated or regular shrinking of files. It creates a lot of unnecessary maintenance overhead that is not great for your databases. If you reallllllly need that 50GB back, by all means reclaim it. Just be aware of what you are doing, why you are doing it, and how you are preventing the need to shrink again.

      Reply
  • Okay, can you tell me if, in this situation, it’s fine, or if I’m making a mistake in my logic.

    Massive infrastructure + app upgrade, an every-object-touched-in-the-database style procedure. The database is ~1.5 TB but we need to keep a read-only copy of the old database for support reasons. Having 2 x 1.5 TB DBs (that are basically the same) seems wasteful and it won’t grow in any systematic way. What I did was blindly page compress all tables and indexes (gained almost a TB, which blew me away from a straight sledgehammer-solution point of view) and now I just want to shrink every file to its minimum size. Performance doesn’t really matter (as long as you can query it). Is shrinking a good solution here?

    Reply
  • Given your description that it won’t grow, and you compressed to conserve space, it’s quite logical to shrink to realize those space savings. I do that quite often in such cases.

    Reply
  • Sean Redmond
    April 20, 2015 1:59 am

    Please tell me where the errors of my ways lie:
    We have a prod DB with a 95GB mdf and a 50GB ldf. We have a software release 3-4 times a year. We take full backups at the beginning and end of the release. Just after the initial full backup, I set the recovery model to simple. I don’t want the log to balloon out during the release (as it has done in the past). After the release I shrink the log (and only the log) down to 10GB. Thereafter, I take a full backup and we test the newly updated application and DB. The log will slowly grow up to 50GB in the weeks to come.
    My primary reason for shrinking the log is the space needed during restore. Restores are often made on other servers for a variety of reasons and we are not awash with space. A 110GB DB can usually be restored; not always so with a 150GB DB.
    On a final note, we do have 200 VLFs and I have long wondered how to reduce their number. The log file growth size was set to 128MB initially and I upped it to 1024MB. I am having a look at L. Pesta’s SP sp_DBA_DefragLog now.

    Reply
    • Sean – if your log slowly grows back up to 50GB, that means you need 50GB of log space for your day to day operations. (Or you’re not doing log backups, one of the two.) As your log file needs to grow, operations in the database stop while SQL Server and Windows clear out that log file growth. The symptoms of this are unpredictable slowdowns and freezes in the application.

      Generally, folks want to avoid unpredictable slowdowns and freezes – so why keep shrinking your log and forcing that to happen?

      Hope that helps!

      Reply
    • Steve Mangiameli
      April 20, 2015 8:44 am

      Sean, 200 VLFs isn’t egregious and yes, setting your autogrowth appropriately will help keep that down. Maybe you can add a step in your release and instead of merely shrinking the log down to 10GB you could shrink it down to next to 0 and build it back out in 8GB increments until you are at the desired size. It would look something like the code below. I have a script that generates that code for me depending on the size and auto growth I feed it.

      USE YourDB
      DBCC SHRINKFILE (YourDB_log, 1) WITH NO_INFOMSGS

      SELECT DB_NAME(database_id) DatabaseName, name LogName, size*8/1024 SizeInMB
      FROM sys.master_files
      WHERE database_id = DB_ID('YourDB')
      AND type_desc = 'LOG'

      --> Repeat above if necessary

      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 8192)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 16384)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 24576)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 32768)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 40960)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 49157) -- a few extra MB to avoid the 4GB issue
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', FILEGROWTH = 1024MB)

      Reply
      • I see you’ve read Kimberly’s post Steve 😉

        http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/

        That is precisely what I do – VLF analysis should be something we check on a pretty regular basis. I’ve found that doing so also controls the growth, not to mention performance.

        Reply
        • Steve Mangiameli
          April 20, 2015 10:56 am

          Yessir and reread and reread until I fully understood it. I now have a monthly job that looks across the enterprise and reports back anything waaaay out of whack. I use my scripter to create the required code to remedy the situation.

          We have some pretty robust hardware, so much of the performance impact is masked. However, we also use CDC – you know, that thing that reads the log! – for a near real-time data warehouse solution. We had a problem where the CDC log reader could not keep up in the OLTP DB and log reuse was suffering. The issue was 8k+ VLFs!!! The root of that problem was a poorly defined autogrowth setting early in the product’s life cycle.

          Once that was fixed the problem went away.

          Reply
            • You gotta love the default growth rate of 1MB – pretty insane, and if overlooked it can really lead to some headaches. I’ve had databases in similar situations – tens of thousands of VLFs. If this has been posted prior, my apologies, but I don’t see it posted thus far:

            USE dbname
            GO

            DECLARE @db_name SYSNAME,
                    @file_name SYSNAME,
                    @file_size INT,
                    @file_growth INT,
                    @shrink_command NVARCHAR(MAX),
                    @alter_command NVARCHAR(MAX)

            SELECT @db_name = DB_NAME()
            PRINT 'USE ' + @db_name
            PRINT 'GO'

            SELECT @file_name = name,
                   @file_size = ( size / 128 )
            FROM sys.database_files
            WHERE type_desc = 'LOG'

            SELECT @shrink_command = 'DBCC SHRINKFILE (N''' + @file_name + ''' , 0, TRUNCATEONLY)'
            PRINT @shrink_command
            --EXEC sp_executesql @shrink_command

            SELECT @shrink_command = 'DBCC SHRINKFILE (N''' + @file_name + ''' , 0)'
            PRINT @shrink_command
            --EXEC sp_executesql @shrink_command

            DECLARE @i INT = 1
            WHILE ( @i <= (@file_size / 8000) + 1 )
            BEGIN
                SELECT @alter_command = 'ALTER DATABASE [' + @db_name + '] MODIFY FILE (NAME = N''' + @file_name
                    + ''', SIZE = ' + CAST(8000 * @i AS NVARCHAR) + 'MB)'
                PRINT @alter_command
                --EXEC sp_executesql @alter_command
                SET @i = @i + 1
            END

  • Sean Redmond
    April 20, 2015 7:25 am

    Hi Brent,

    Fair enough, I suppose.
    The log will go through 40 grow operations until it reaches its stable position.

    Thanks for the swift reply.
    Sean.

    Reply
  • From Paul Randal’s article referenced above: “Although I used to own the shrink code while I was at Microsoft, I never had a chance to rewrite it”

    So there is bad code in a mission-critical product used by millions and the guy in charge “never had a chance” to rewrite it.

    That is how it goes in the real world. I just shrank a datafile and I am not sorry. I have a bunch of other dbs on that server and this one was sucking up all the drive space with 200GB of free space. (Altogether, the db takes 1.2 TB).

    Yeah, it should have its own server, and it should have this and I should do that and there should be a person employed here who has time to do such and such. But like Paul at Microsoft, the real world doesn’t work that way.

    I mean no disrespect – I am a fan – but let’s stop with the “You’re doing it wrong; you’re an idiot” articles. Especially when they don’t stand up to real-world conditions.

    Reply
    • Jon – so, honest question: if you just shrank a datafile, what brought you to this web page to leave a comment? I’m honestly curious.

      Reply
      • Always looking to learn. Today was a typical scenario. I had a particular issue to fix and when you are in the middle of working on something, that is always a good time to develop more expertise on the subject. In this case, shrinking data files.

        BTW, I did take your advice to heart and will be careful in the future to avoid shrinking data files unless absolutely necessary. My objection was to the dogmatic, holier-than-thou tone.

        Reply
        • Jon – gotcha. If you read a few more of my posts, I’m confident you’ll find that I rarely take that tone – except when I’m dealing with extreme situations like you read about here in this post. Enjoy!

          Reply
  • All of these instructions to “never” use or do something take an extremist, perfect-world view of the “cause and effect” issue.
    There are going to be cases where the IT leadership is less knowledgeable about the why and what-to-do than the DBA trying to salvage an optimum environment; meaning there may be contention between doing the “right” or “best” thing and doing the necessary thing.
    Anyone suggesting that there is only one path to a solution because of inherent risks that could be overlooked, or that reclaiming considerable capacity may not be an appropriate short-term path while problem analysis is completed, is simply doing a disservice to anyone who is a DBA.
    No… no-one should be “handing in their DBA card”. There are reasons these tools exist and caution should be applied, but seriously, not everyone works for an organization with good IT leadership.

    Reply
    • Jon – understood loud and clear. Your management isn’t good, so you need to shrink databases. Got it.

      Reply
      • Thumbs high!..

        Reply
      • Yeah, it would be unfortunate if management was suggesting you perform an activity that isn’t ideal. Even if I purge/archive data in production, I typically leave that space alone for future growth, which is bound to happen. In non-production I may try to reclaim some space – Dynamics AX has logging tables (SYSDATABASELOG, for example) that we truncate when doing non-prod restores from production, so in that case I may want to reclaim that space since we have five non-prod environments (the sum of which is well over 1TB).

        Reply
  • I think there are cases for shrinking a data or log file, and yes, as a once- or twice-off.
    Once to monitor when the growth happens and be able to do the detective work, and a second time once the solution has been applied.

    This article, while useful, is like telling people not to spank their children because of all these bad effects, i.e. fragmentation (psychological damage), without giving positive-reinforcement parenting methods. These people still have problems at the end of the article which they may not have the power to solve.
    Telling people to size the TLog properly when they might be doing silly things to the DB is not helpful either.

    What would be great is an article that looks at ways of reducing the size of transactions, because Transaction sizes and frequency affect the TLog size.
    I don’t see these being blogged about.

    One example is a process which deletes everything (i.e. fully logged) and re-inserts everything into a table.
    This will log far more heavily than doing a MERGE, which in some cases may end up adding just one row and modifying 1% of the table instead of 100% of the table (see the sketch at the end of this comment).
    Another example is a DBA who has a rebuild-all-indexes-everyday job instead of having something more clever which works on thresholds.

    But yes, that is why we need an article with good ideas of how to reduce TLog activity, or spread out TLog load instead of the mandatory, ‘don’t spank the kids’ article.
    p.s. I have looked at the links in this article, and also tried to find info on the great web and did not find ways of reducing TLog bloat other than ‘backup TLog more frequently’ which ultimately helps very little.
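
    As an aside on the wipe-and-reload pattern mentioned above, a minimal MERGE sketch (table and column names are hypothetical) that only touches the rows that actually changed:

    MERGE dbo.TargetTable AS t
    USING dbo.StagingTable AS s
        ON t.Id = s.Id
    WHEN MATCHED AND t.SomeValue <> s.SomeValue
        THEN UPDATE SET t.SomeValue = s.SomeValue
    WHEN NOT MATCHED BY TARGET
        THEN INSERT (Id, SomeValue) VALUES (s.Id, s.SomeValue)
    WHEN NOT MATCHED BY SOURCE
        THEN DELETE;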

    Reply
    • There are always exceptions, but I think Brent is largely talking about data files, not trans logs. I tend not to compare what I do as a DBA to what I do as a parent 😉

      Reply
  • Hi Brent,

    I have a question regarding index rebuilds AFTER one “might” do a ONE-TIME data file shrink after a monster purge. You mentioned earlier in a reply that “so if you shrink the database and remove all free space, and then you rebuild the indexes afterwards, do you understand that that re-grows the data file right out to make space for the newly created index?” It was my understanding that if you do a rebuild, SQL really drops the index and then rebuilds it. If the original index took up 50 MB, why would there be an additional 50 MB needed? I would think it would just use the space that was originally there?

    thx

    Reply
    • MG – about your understanding – think through what would happen during an online index rebuild. Then, zooming out, do you remember where you got that understanding from?

      Reply
      • Hi Brent, to the best of my memory, I think this comment was given to me during a SQL class literally years ago and might have been tied to offline index rebuilds. Not positive. But I did find it on an MS page about the rebuild:

        “Rebuilding Indexes
        Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction.”
        This is also where I confirmed it.

        Yes, I have reviewed the MS doco on ONLINE and do understand the massive over head on the rebuild while keeping it online. In our specific case, we are using SQL 2008R2 STANDARD and are not using online rebuilds – offline.

        With that said, the question then arises: how much more overhead is consumed by an offline index rebuild?
        Thanks again.
        MG

        Reply
        • MG – bingo! So now it’d be really easy for you to test on your own environment to see what the space is before, during, and after. (I’m a huge fan of having you run your own experiments so you can learn by doing.)
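
          A quick way to capture that before/after picture (the table name is hypothetical; numbers are in MB):

          SELECT name,
                 size / 128 AS SizeMB,
                 size / 128 - FILEPROPERTY(name, 'SpaceUsed') / 128 AS FreeMB
          FROM sys.database_files;

          ALTER INDEX ALL ON dbo.BigTable REBUILD;  -- offline rebuild on Standard Edition

          -- Re-run the SELECT above and compare SizeMB/FreeMB before, during, and after.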

          Reply
          • Brent, Will Do. 🙂
            To be honest, I just uncovered an article from MS about “Dropping and Rebuilding large objects” and the overhead that is incurred when rebuilding indexes on these. It appears that, regardless of offline/online, there is a tremendous amount of work that goes on during this. I can already see why it uses extra space 😉

            Will do my test. Thanks for your observations.

            MG

  • Thanks, admin, for the article.

    Reply
  • I have finite space available and a database with all tables and indexes partitioned by month. There is enough space on the drive to keep 6 months of data (+1.5 months to give us some fudge room).

    Each month’s data files end up being about 110 GB. After 6 months the older partitioned tables are detached from the database and moved to an archive database. If I leave those recently emptied partitions at their current size and do not shrink them, then I run out of drive space over the next few months.

    Your recommendation? Assume more drive space is not an option, I don’t think I could sell asking storage for nearly a terabyte to hold empty data files.

    Reply
    • George – it sounds like you might be using different filegroups for each partition, and you’re leaving empty filegroups around. What are you gaining from different filegroups?

      Reply
      • Admittedly the why of it may be a little over my head, as some of this process is controlled by the 3rd party’s built-in database management piece. My running assumption has been that this is done to facilitate the quick detachment of old data as well as all the other benefits of having partitioned tables and indexes.

        During this detach process new tables/indexes are created prefixed with the year and month then the partition switched to point to the new table.
        (ALTER TABLE My_Table SWITCH PARTITION 2 TO P201509_My_Table)

        Those tables are then moved to the archive database, but the data file for that partition is reused the next time that month of the year rolls around. So if we don’t shrink them, they stay huge but contain no actual data for about 6 months.

        Reply
        • Right, but you don’t need separate filegroups for that. It sounds like you just need to switch to one filegroup for the data.

          Reply
          • After digging around some and talking to people more involved in the setup of this system. It sounds like they went with different file groups for HA reasons. At any rate that isn’t something that will probably change in the immediate future.

  • Not sure that ‘disk space is cheap’ is a valid comment. It’s dependent on the environment you are working in – AWS, VM, SAN – they all have different costs and, dependent on usage, can add up to a pretty penny.
    Sometimes the architecture design can outgrow the initial requirements, and a re-think is required.
    Many a time we have planted small acorns, which have developed into massive forests of data.
    Yes, shrinking can be bad, but asking that the whole project be re-architected, as it has morphed into something that was never anticipated can be even more expensive.
    Which then brings up a new discussion, what to do when the project design was completely off the mark?

    Reply
    • As soon as you realize that the design was off the mark, you need to review it, pretty much from scratch, to see what impact this has and what can or should be done about it.

      Your review may determine that you do not need to make any changes but, if you do need to make any, it is almost always easier to do this sooner rather than later.

      So, think about why the design was “off the mark”;

      Is it just that people are finding the tool so useful that they are doing more things with it and the initial design wasn’t “wrong” so much as simplistic ? Adding some formal flexibility (rather than re-using fields informally to provide this flexibility) may be all that is needed.

      Is it that the design did not anticipate the growth of the system and so it is now too slow or cumbersome or needs too much active management ? This may need some structural changes to allow it to be optimized and/or automated. It may also need more hamsters, although this shouldn’t normally be the first choice – it is too easy to hide poor design by throwing hardware at the problem but it only works up to a point and, by the time it stops being a practical option, old fashioned optimization is usually harder to do because it has so much existing data and users.

      It is also possible that you will see that the design is wrong, but not be sure how best to fix it because the use of the tool is still evolving. That is fine… at least you know to have another review in a few months or a year when the system is more stable. This may also prompt you to add some monitoring tools to track further developments or uses so that, when you do finally make changes, you have a clearer idea of the impact they will have.

      Reply
    • Doran Mackay
      March 3, 2016 10:32 am

      Yeah, famous last words of people who don’t think for more than a minute and lack imagination as to how their statement can cause problems in the present and future.
      It impugns the character of the person saying it.

      Reply
    • Doran Mackay
      March 3, 2016 10:34 am

      ‘disk space’ is cheap – Yeah, famous last words of people who don’t think for more than a minute and lack imagination as to how their statement can cause problems in the present and future.
      It impugns the character of the person saying it.

      Reposted since for some reason the first post didn’t post the quoted text 😛

      Reply
  • Hello, Brent.
    I’ve run into a situation where shrink seems to be the only option. The DB was about 2TB, 1.5TB of which were two tables. Now the design has changed and we don’t need them, so I dropped those tables. That leaves a 2TB database with 1.5TB of empty space, which will not be needed. I decided to try shrinking on a test environment. It has been running for about 2 hours and is 3% complete by now, judging by sys.dm_exec_requests.
    Is there any smarter way?
    It seems to me that it would be simpler to move everything from this DB to a new one except for those two tables, but I am not sure how this can be done =/
    PS: the DB has only one filegroup – PRIMARY.

    Reply
    • Yeah, shrinking a database that large can be pretty tough. I would just do it gradually over the next couple of months, doing as much as you can during weekends or low usage windows.
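
      A rough sketch of stepping it down gradually, say ~10GB per pass (file name, final target, and step size are all hypothetical; sizes are in MB):

      DECLARE @file SYSNAME = N'MyDB_data',
              @target INT = 512000,    -- final size in MB
              @step INT = 10240,       -- ~10GB per pass
              @current INT,
              @sql NVARCHAR(MAX);

      SELECT @current = size / 128 FROM sys.database_files WHERE name = @file;

      WHILE @current > @target
      BEGIN
          SET @current = CASE WHEN @current - @step < @target THEN @target ELSE @current - @step END;
          SET @sql = N'DBCC SHRINKFILE (N''' + @file + N''', ' + CAST(@current AS NVARCHAR(20)) + N');';
          EXEC sp_executesql @sql;   -- each pass moves a manageable chunk during a low-usage window
      END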

      Reply
      • You mean shrink it in smaller steps. Like, for example, 10GB at a time?
        Sounds reasonable. Thanks Brent!
        And double thanks for lightning fast response)

        Reply
  • This has been an interesting read, I must say. I have what I believe is an ‘in the spirit of’ question. I have a database that has 16 DB files (not including t-log files). It appears that all of them were created at 1 GB but only three were ever set to grow; they of course make up the lion’s share of the 1.3TB size. All of the files are in the same filegroup, on the same drive, in the same folder. The question is, would there be a similar performance impact as is discussed here if I were to empty and then delete these files? If so, would reindexing resolve the impact?

    Reply
  • I inherited a SQL Server database which I migrated to 2012 in the process.
    Following which I ran the nice sp_Blitz script and discovered a number of things to fix.

    Something that wasn’t really obvious however was how painful BizTalk can be.
    BizTalk has 3 databases and is quite the blackbox, one of which grew really huge for the
    kind of activity we have (43G). (Something I had to research)

    After researching, I found out that database should never reach that size and there’s a table in there which has orphan jobs (read: rows) that are just taking up space.

    Eventually, I managed to get the BizTalk admin to clean up his application.

    That leaves me with a 73% empty data file.

    I feel very ashamed because I want to reclaim that space with a shrinkfile but the idea of having to do this makes me shudder.

    On the bright side, I do have the ability to allocate more disk space to that database instance.

    Reply
    • Stephen Mangiameli
      March 30, 2016 4:33 pm

      I think most people commenting on here have missed Brent’s point. Shrinking every once in a while due to exceptional circumstances isn’t evil. Automating the process because you haven’t diagnosed/fixed the root cause is evil.

      Reclaim your space with a clear conscience my son and go in peace.

      Reply
  • I appreciate the article, and this is a great blog, but if you’re going to rant about why something is bad, you should cover at least the basics of why it’s so bad instead of just linking to other articles. By all means, link to the articles so that readers can get a more in depth understanding, but at least have a basic gist or outline.

    Reply
    • Kay – I am sorry for not repeating what the other articles said. I did you a grave disservice by not giving you a brief, concise, repeated version of what they said. I’ll keep that in mind going forward. Thanks for stopping by!

      Reply
      • Without providing any content, it’s just a rant without being informative or substantive.

        But, then again you took my comment so personally that you could only reply by belittling.

        Peace dude

        Reply
        • Kay, you came to my blog and criticized me for not writing enough – when I’d linked to awesome blog posts with tons of context.

          I like spoons. Spoons are lovely, great shape to them, and way easier to use than chopsticks. But as much as I like spoons, I’m not going to spoon-feed you every single blog post in the entire universe, written in my own hand, when there are others out there that I can just link to.

          Reply
          • Fiduciary Blind Spot
            August 6, 2016 6:22 pm

            My god, Brent, you really are a dick.

          • Given that you didn’t use your real name/email here, it sounds like we’re made for each other.

          • Fiduciary Blind Spot
            August 6, 2016 6:34 pm

            Ah, yes, the traditional fall-back tactic of the blog bully. “Hahah, you didn’t provide me your personal details, so I’m morally superior!”
            I get it, it’s fun to belittle and mock people. It’s pretty much the national pastime these days. But for every person who high-fives you over the latest “sick burn” you laid on some commenter, there are ten who say “What a douche. Not reading any more of this crap.” Mission accomplished, you get to keep your sycophantic minions entertained at the expense of people you neither know nor care anything about, and providing valuable information to the public takes a back seat to stroking your own ego.

          • FBS – and thankfully, there’s one more who says, “I just can’t stop reading, and in fact, I’m going to start leaving comments too.”

            Welcome to the blog readership! Good to have you around. Check out other posts – I’m sure you’ll find plenty of other things I’ve written that get you all fired up. 😉

  • And now you made me do it. You made me reply.

    Because there are MOST CERTAINLY very good reasons and circumstances where you DO want to shrink your database.

    We need to make daily refreshes from PRODUCTION to READ ONLY test systems.
    Ok so there’s one good reason to shrink.

    We need to make weekly refreshes from PRODUCTION to ACCEPTANCE environments. Where only 1 user is active instead of 100. Ok, so there’s one good reason to shrink.

    We have 2600+ databases, most of them between 10 and 10,000 GB. Many of these have 60%-80% free space that NEVER GETS USED inside the TEST and ACCEPTANCE environments.

    Do you know how much cloud based computing power and disk space costs?
    I can tell you, it’s not cheap.

    *WE GET A BONUS BASED ON HOW MUCH DISK SPACE WE CAN KEEP CLEAR*

    Reply
    • Bas – let’s break these down one at a time:

      We need to make daily refreshes from PRODUCTION to READ ONLY test systems.
      Ok so there’s one good reason to shrink.

      So are you saying you shrink your database daily to keep the read-only test systems small? OK, so you’re purposely reducing performance in production in order to make up for insufficient space on a test system. That’s an interesting choice.

      We need to make weekly refreshes from PRODUCTION to ACCEPTANCE environments. Where only 1 user is active instead of 100. Ok, so there’s one good reason to shrink.

      If you’re talking about shrinking the transaction log file on the acceptance environment after you restore it there, why? You had enough space to restore it. What do you need to reclaim space in acceptance for if nobody’s active?

      And if you’re talking about shrinking the transaction log file in production in order to minimize space requirements in acceptance, that means you’re having to do it because the production log file regularly grows larger in size. That means your shrink operations are slowing down end user activity.

      We have 2600+ databases, most of them between 10 and 10.000 GB. Many of these have 60%-80% free space that NEVER GETS USED inside the TEST and ACCEPTANCE environments.

      Now it’s getting even stranger. You’re saying you restore 2600+ databases daily, and shrink them every day? Exactly how many sports tickets, steak dinners, and holiday presents is your SAN admin buying you? Holy cow, that’s a lot of murdered performance.

      Do you know how much cloud based computing power and disk space costs? I can tell you, it’s not cheap.

      OH MY GOD, you’re doing all this in the cloud, where IO is horribly slow? Dude, seriously, put down the database. Time to start doing some reading.

      Reply
    • Fiduciary Blind Spot
      August 6, 2016 6:18 pm

      “Do you know how much cloud based computing power and disk space costs?
      I can tell you, it’s not cheap.”
      But everyone knows that “disk is cheap” so if you’re paying a lot for storage you are doing something wrong. At least that’s what the common opinion here seems to be.

      Reply
      • FBS – generally in the cloud, disk *space* is cheap. Performance is not.

        For example, in Azure Premium Storage, 1TB of space comes with just 5,000 IOPs, which means that even sub-terabyte database servers need to stripe several drives together in order to get the backup & restore speed they need to meet their RPO/RTO. You end up with a lot of space lying around unused anyway.

        Reply
        • Fiduciary Blind Spot
          August 6, 2016 6:26 pm

          Wrong. Disk SPACE is not cheap. Disk DRIVES are cheap. Disk SPACE is as expensive as it needs to be. Your ridiculous oversimplifications and love of belittling people who disagree with you make for some entertaining reading, but the attitude does not convince people that you’re right. And when you keep reiterating the moronic, bone-headed contention that disk space is cheap, when it’s incredibly obviously not true, it just loses my respect.

          The ability to maintain useful disk storage capacity in a system is far, far more than the cost of the bytes of storage. You consistently ignore this, and it’s dumb.

          Reply
          • FBS – head on over to http://www.ec2instances.info/ and check out pricing. Solid state storage is literally included for free with most AWS instance types, and hard drives are included with most of the rest.

            For Azure pricing, head to https://azure.microsoft.com/en-us/pricing/details/storage/ and click on Disk Storage. The very best Azure storage offered, a P30, is $122.88 right now for 1TB of space. You would have to rent that for five straight years to match the price of a single core of SQL Server Enterprise Edition.

            Enjoy your learning journey!

          • Fiduciary Blind Spot
            August 6, 2016 6:42 pm

            Aaaaannnd my point flew right over your head. You don’t even realize that you just did exactly what I’ve been saying you do. Ignoring ancillary costs, opportunity costs, and situational costs, and caring only about price-per-byte costs. Which is exactly what I was NOT talking about. This is pointless, you have no clue what I’m talking about.

          • Opportunity costs! You’re right, I totally missed that.

            By focusing on the price of storage, I missed the opportunity to spend my time shrinking data files. I forgot how many people passionately enjoy that pastime. You’re right – I missed that one. You should definitely keep shrinking your data files – it’s a good hobby that will keep you off the streets and away from the cocaine.

            I apologize for not seeing that point sooner.

  • “So are you saying you shrink your database daily to keep the read-only test systems small? OK, so you’re purposely reducing performance in production in order to make up for insufficient space on a test system. That’s an interesting choice.”

    Uhm.. No. I’m saying we have a production database with a 4GB log file and 46GB data file (for instance). In production about 30% of that data file is free space. Many users are working on that and we pro-actively grow the data file.

    Then we need to make a copy of that ‘environment’, after which 1 developer goes and works inside that environment. (Exact copy of that 46GB data file and 4GB log file; we use the SIMPLE recovery model mostly, as that fits our business needs.) This developer will (almost) never do so much work as to fill up that roughly 15GB of free space. So we make a copy of ‘production’, then we SHRINK the database so that we can reclaim that ‘baked air’ inside the data file (that will never get used), and leave a bit of air for the users (e.g. 2-5%) for working.

    We do not shrink LOG files, no need.
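
    For reference, DBCC SHRINKDATABASE takes a target free-space percentage, so a copy-then-shrink step like the one described above might look like this (the database name is hypothetical):

    DBCC SHRINKDATABASE (MyTestCopy, 5);  -- leave roughly 5% free space in each file of the restored copy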

    I’m not saying we restore daily, I never said that. I said we have 2600 databases in TEST and ACCEPTANCE environments, meaning none of them are ‘production’. If all of those were a 100% copy of our production, then 30% of our entire digital footprint would consist of ‘baked air’.

    You never make any distinctions between types of environments, which is THE distinction to make here. You say ‘never shrink’, I say, ‘nonsense, as with everything in our data world, IT DEPENDS’.

    Reply
  • I think that this post was focused on production environments.

    I agree with Bas for test environments. I have also automated my cloning of production databases and there’s definitely a shrink database on the clone and a change to simple recovery model.

    It makes sense in that context.

    I’m pretty sure we all agree that automating shrinks on a clone, not the production databases, before sending them over, is useful.

    Reply
  • Bas de Zwart
    April 19, 2016 1:48 pm

    I think the distinction is very important to make in this article (yeah I’m a bit late to the show sorry :P) considering the harsh way that shrinking databases is presented.

    Reply
  • Hello Brent!

    Is your opinion the same for a non-production environment with limited space that receives many databases from a production server, where the unnecessary free space inside those databases could instead be used for other database restores?

    Reply
  • As he noted earlier (and will probably confirm) – the post is largely regarding production. Non-prod environments often require truncating some data and recovering the space.

    Reply
  • Hi Brent,

    My team is genuinely curious as to your thoughts on our specific situation. Your word is gospel around these parts, so I’m hoping you can settle this for us:

    We have a DB that is 615GB with 518GB (about 85%) whitespace. This DB grew uncontrollably a few years ago due to a configuration issue (read: oversight) related to the application that relies on it. Once we discovered the cause and corrected the problem, the data that was collected was purged and it hasn’t grown much in over a year. We expect the DB to grow, but it will likely never reach this size again. Shrinking this DB will (hopefully) be a one-time occurrence.

    We are planning to migrate the DB and upgrade the app server, SQL server, and SQL version. We’re currently debating whether or not to just pull off the band-aid now and shrink it before migration, or to migrate the DB in its entirety and put off the great shrink question for another time.

    Thanks in advance!

    Reply
    • Yeah, a one-time shrink on a database of that size isn’t that big of a deal, but I would make the person who made the configuration oversight do it, and I would have the whole team standing around, laughing and pointing as he made that change. 😀

      Reply
    • Brandon,

      There is a pretty good chance that if your DB is migrated without shrinking, someone (probably important) will want to say “ … WHEW! it works … let’s just leave well enough alone”.

      There is also a very good chance that it will take longer to move it “un-shrunk” and it will not perform as well as it should on the new infrastructure — if you don’t shrink it.

      I’m very cautious on these things and it has never hurt me to do everything possible to ensure good performance before making any changes — unless of course I do nothing. Never say “never”.

      Perhaps you would consider creating a baseline performance record, shrink it where it sits, user test to ensure operability after it’s shrunk and then compare performance metrics before migrating. Then migrate and do the same thing regarding performance metrics and user operability.

      If it’s shrunk before migration, any testing you do will take less time after migration, there will be less time with any DB operation (user, data operations, etc) and there will be less time to restore if you need to for any reason.

      Best of luck.
      Good luck.

      Reply
  • Wow!
    As a junior DBA, while I may not have all of the answers myself, I do have a handle on which tools are labeled “use sparingly” and which should be labeled “USE WITH CAUTION!!”. SHRINKDATABASE seems to be part of the latter category, and I find myself scratching my head at the fact that people out there who are responsible for large data sets will use this function flippantly. Scary stuff…
    The only time when I was able to justify its use in my career, was when we removed roughly 40% of the data in a database. I tested, and retested, and retested before letting it loose.

    Brent, as always, thanks for the good and informative read.

    Reply
  • I am definitely not a DBA but am temporarily serving in that capacity. I am currently dealing with a database that is taking up all of the 2TB that have been allocated to it. The file autogrowth has been set to unlimited. At this point I’m guessing that was the wrong decision? Any ideas how to proceed? Seems like some shrinkage is going to be needed. And perhaps limiting the autogrowth? I’m sure the best answer is to hire a DBA (which is in the works) but I’m hoping to resolve this issue sooner.

    Reply
  • So, not one of you ruder jerks ever deletes data, ever, or has disk space issues at an organization that has a budget. Good to know. Here’s an idea. If the feature isn’t something people wanted, why’s it there? And if it doesn’t do what it’s supposed to do, how about getting Microsoft to fix it?

    And if you’re going to rebuild the indexes after deleting say, 200 GBs of data, what difference does it make?

    But hey, guys, if you ever need help with your systems…forget it.

    Reply
    • Steve Mangiameli
      August 6, 2016 10:22 pm

      Chillax bro. If you need to shrink your DB, shrink it. But if you are doing it to the point of creating a daily process or automating the task, you have a bigger problem.

      Reply
      • Well isn’t that an inadvertent benefit? You realise you have a bigger problem? You now know something you may not have paid attention to before.

        Reply
  • Hi, I have a SQL 2008 database with 600MB of space tops, but there is a database named “common” with 980 GB of space, and it keeps increasing over time.
    There is NO way for me to increase my capacity, and the drive that database sits on has a 1 TB cap.
    How can I reduce the size of that “common” database? When it reaches full capacity, the software that connects to my SQL Server stops working.

    Please advise, thanks.

    Reply
  • I’ll add a comment here and see if this is an acceptable reason for shrinking.
    We run scanner server software which logs jobs all throughout our office. This audit table has very little useful information in it for historical purposes or reasons of keeping – and yet, because it logs some XML stuff AND has data for 4 years, that single table has grown to 56 GB while the entire database is only 62GB. In our case, no maintenance has been written for this database. So instead, I’m shrinking the database, growing it to about 10 GB more than the data needs, and running a script I’ve created to delete this data monthly. In the end, I should never inflate this database too much and I also keep it nice and clean. Mind you, this server is new to my responsibility.

    Reply
  • What I got from this on the technical side is neither here nor there, on the human interaction side, I got that the author is way too self opinionated and doesn’t play well with his peers. Sometimes being right counts for nothing in the professional world.

    Reply
  • Sooooo, I am not a DBA by trade, but I do play one on TV occasionally. We have a COTS product that was configured by a predecessor and has dumped loads of historical data to a single DB that has grown to over 1.8TB. An EXEC sp_spaceused output reveals the following: database_size=1948490.88MB; unallocated space=1184226.20MB; reserved=778805360KB; data=346456184KB; index_size=431477424KB; unused=871752KB. I am working on the config issue to drastically reduce the data dumps, but would like to get all this space back. I’m thinking this scenario is exactly what a shrink operation with an index rebuild is for. Thoughts? Are there better ways? I have 400GB left on the data partition where the MDF/NDF files are; will a shrink cause some additional space concerns while in progress? Thanks!

    Reply
  • I just find it kind of weird that you have this over-the-top rant about how horrible it is to shrink database files due to the fragmentation it causes. Meanwhile, you guys also have an article titled Stop Worrying About SQL Fragmentation: https://www.brentozar.com/archive/2012/08/sql-server-index-fragmentation/
    Huh?

    Reply
    • Todd – great question! Because people overly care about both: they care about shrinking, AND they care about fragmentation, which makes it really bad.

      Folks shrink their database thinking they’re going to win an award for freeing up a few GB, and then they turn right around and rebuild their indexes. Guess what SQL Server needs to do? That’s right – grow the file right back out to build a new copy of that index. Bam, you’re right back where you started.
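
      If you want to watch it happen, here’s a rough sketch against a throwaway copy (the database, file, table, and size names are all placeholders):

      USE SandboxCopy;                                                   -- placeholder database
      SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;  -- note the data file size
      DBCC SHRINKFILE (N'SandboxCopy_Data', 1024);                       -- shrink the data file to ~1 GB
      ALTER INDEX ALL ON dbo.BigTable REBUILD;                           -- the rebuild needs room for a full new copy
      SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;  -- and the file has grown right back out

      Same size as before (or worse), plus you paid for all the I/O of the shrink and the rebuild.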

      Reply
  • Simon Holzman
    January 9, 2017 11:50 am

    I think that most people who have read this far appreciate that DBCC SHRINKFILE is a bad thing to use on any sort of regular basis. However, your absolutist "Just Say NO!" perspective does not provide a solution for those rare occasions when there is a valid reason to make the MDFs smaller.

    The problem may be down to terminology: even though the underlying requirement is to reduce the size of the MDF files, DBCC SHRINKFILE may not be the best way to do this, and so being asked "How can I shrink the database files?" implies the wrong answer. A better question might be "How can I fix the sizes of the database files?"

    The scenario where I did DBCC SHRINKFILE on Production was where the database was originally created with a single data file which grew (by the classic 10% each time) to over a terabyte.

    Finally, we were advised to split the data file into 8 files. Adding those files was easy, but the only way to get SQL Server to USE those extra files was to shrink the original file to an eighth of its original size… And that took forever (about two months)! I did run it in single-user mode overnight, but that did not seem to improve the performance.
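
    Roughly, the shape of it was something like this (file names, paths, and sizes here are placeholders, not our real ones):

    ALTER DATABASE BigDb ADD FILE
        (NAME = N'BigDb_Data2', FILENAME = N'E:\Data\BigDb_Data2.ndf', SIZE = 128GB);
    -- ...and so on for the other new files...

    DBCC SHRINKFILE (N'BigDb_Data1', 131072);  -- target in MB (~128 GB); the pages it moves land in the new, emptier files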

    I suspect that it would have been much quicker to create a second database and copy the data from the first database to the second one using an automated script to preserve things like Database Triggers, Constraints, Indices, Identity field seeds and increments etc. And then, finally, to backup the new database and restore it over the original one. This may have needed some downtime, but taking the system down for a weekend and getting improved performance afterwards would have been much better than having to wait 2 months for that improvement. (Besides, this change did not actually provide much improvement; this was simply the necessary first step – the “real” fix was moving to all SSDs, which is cheating, but it works).

    This task was a one-off and so, while it would have been nice to have a quicker solution, I might never need it on production again.

    However, we regularly copy our Production database to our Test databases (to ensure that there are few discrepancies), and we remove 95% of the data from those Test databases to minimize the chance of any issues and to improve performance (these are NOT on SSDs). The backup time, even for compressed backups, seems to relate to the total database size, even if the database is mostly empty, so it would be nice to be able to shrink the Test databases – but there is no point if it takes 2 months to run.

    Again, the automated script to literally copy a database sql object by sql object would probably be the best way to do this.

    So, the follow-up question to the correct initial question is: "Do you have or know of a script that will reliably copy any SQL Server database, SQL object by SQL object? And if not, could you produce one?"

    Reply
  • Brent Ozar – I have seen a lot of comments from you, but I have not seen any reason except fragmentation, and that can be easily handled in offline mode. Could you tell me in one line what’s the problem with shrinking an offline DB?

    Reply
    • Peter – you know, it’s funny, I don’t see a lot of offline databases.

      Reply
        • That’s not an answer. I’m expecting a genuine answer.

        Reply
        • Hi Peter!

          Brent is on the move today, so I’ll fill in with a genuine question: have you ever tried to shrink an offline database?

          Thanks!

          Reply
          • Yes, multiple times. But sorry to say, again, you guys are diverting and confusing people.

          • Peter – so how did it go? Can you describe what happens when you try to shrink an offline database?

            Here’s a hint: you can’t. You can’t perform operations on an offline database.

            You’re bringing the confusion here all on your own.
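
            If anyone wants to check for themselves, try it on a throwaway database (the name here is made up):

            CREATE DATABASE ShrinkTest;
            ALTER DATABASE ShrinkTest SET OFFLINE;
            DBCC SHRINKDATABASE (ShrinkTest);   -- fails: the database cannot be opened because it is offline
            USE ShrinkTest;                     -- fails the same way, and SSMS grays the shrink options out too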

    • Yeah, you are saying things to experts here that are simply incorrect. Create a _temp DB, take it offline and try to subsequently shrink it – it’s grayed out bud. If you still don’t believe me/us, you need to turn in your DBA hat – if one were issued to you.

      Reply
  • After reading this post and all of the comments, I am left to conclude precisely what I have always suspected might be true: a large percentage of DBAs have no idea what they are doing.

    Reply
  • Oswaldo Lorenzo Betancor
    April 18, 2017 1:05 pm

    Hi Brent,

    First of all, I’m not a DBA specialist. I’m just a bioinformatics enthusiast that has a problem restoring a DB I was given. My main problem is that I don’t know how to restore a DB without shrinking the log file. My case is the following one:

    I have a .bak file from a DB that is 61 GB.
    I have tried to restore it into SQL Server 2016 with the following command:

    RESTORE DATABASE A
    FROM DISK = N'C:\A.bak'
    WITH MOVE 'A' TO 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\B.mdf',
    MOVE 'A_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\B_log.ldf',
    REPLACE,
    RECOVERY,
    STATS = 5

    The generated files A.mdf and A_log.ldf are 195 GB and 213 GB, which in my opinion is damn large for a log file. When the RESTORE DATABASE reaches 100%, it gets stuck and does not finish the restore process. It can run for hours.

    If I run the following command:

    SELECT r.status, r.command, r.wait_type, r.percent_complete
    FROM sys.dm_exec_requests r
    WHERE r.command LIKE '%restore%' OR r.command LIKE '%backup%'

    Every time I try to restore it this way, I get the following info:

    status = suspended
    command = RESTORE DATABASE
    wait_type = BACKUPTHREAD
    percent_complete = 100

    I don’t have any space limitations, since I have more than 350 GB free on the drive where I’m restoring the database. What would be the best way to restore this .bak file? I no longer have access to the original database, so I don’t know how to restore it without shrinking the log file, since the restore always goes into suspended mode when it reaches 100%.

    Thank you very much,

    Reply
    • I have seen this same type of "issue" when the log file is very large and mostly "empty". All you can do is wait. Restoring a 60 GB log file can take a while, and I could see a log file of 200+ GB taking a couple of hours, depending on the speed of your system. How long have you waited?

      Reply
  • We have a table that we use that only keeps 2 years worth of data, so it can delete thousands of rows every night that are more than 2 years old. This allows one of our NDF files to have a lot of available space (the file is 60GB, with over 40GB free). If we don’t shrink it, does the excess size cause any other speed/reading issues?

    Reply
  • DBANightAndDay
    August 8, 2017 7:35 am

    What about shrinking the log file? What are the opinions here? I have a 350 GB DB with a 500 GB log. The DB has a nightly full backup, with incrementals and transaction log backups throughout the 24 hours.

    Reply
    • Yeah, a 500GB log file sounds problematic. Run sp_Blitz, which will give you more info about any VLF problems you might have, and it’ll also give you a URL on where to go to learn more to fix the problem. (I say that because if you’ve got a 500GB log file, you’ve probably got other basic health issues, and it’s time for a quick check-up.)
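
      A quick sketch of where to start, assuming the current version of sp_Blitz is installed:

      EXEC sp_Blitz;      -- flags an oversized log file, high VLF counts, and gives a URL for each warning
      DBCC LOGINFO;       -- one row per VLF in the current database; thousands of rows is a red flag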

      Reply
  • DBANightAndDay
    August 8, 2017 8:05 am

    It is a large vendor product. Its performance is great. Backup times are quick. I have run a health check and so has my product support vendor. Reorgs and rebuilds run on the weekends. Why nothing pops up is why I’m on this forum. It does a large amount of EDI input (POs) and output (Invoices) each day, and that activity is logged. The database has been here since before me and has never been purged/archived, with no plans to do that soon. I’m thinking I should shrink the log on the weekend and watch the growth throughout the week to see if I can find the major trigger for this. Other ideas?

    Reply
    • Yep, run sp_Blitz. I really put a lot of work into that so that it could give you personalized advice for your own system, way beyond what I can do in a blog post comment. Give that a shot. (When you say nothing pops up, that means you haven’t run sp_Blitz – it has warnings for log file bigger than the database, high number of VLFs, etc.) Hope that helps!

      Reply
  • DBANightAndDay
    August 8, 2017 8:21 am

    Cool – thanks. I have an old version of sp_blitz. I’ll look for a newer one.

    Reply
  • As a *NIX guy this article helped quite a bit. I hate everything Microsoft in the server space and this fast tracked me in the right direction as to why our MSSQL server was running out of space every four months. Now I don’t have to touch it very often, which makes me happy. Just backed up the log files, put it on a schedule, and done like dinner with no need to shrink (for now… did I mention that I don’t trust Microsoft server products?).

    Reply
  • Bruce Lindman
    January 31, 2018 8:58 am

    I just dropped some unused tables and purged some old data. Saved 15% on our DB size. This database is copied to six other environments, and that takes time, network resources, and drive space. Am I not supposed to shrink it?
    Seriously.

    Reply
    • Bruce – so, dumb question: do you not expect the database to grow 15% per year? Isn’t it going to grow right back out again?

      Reply
      • No, we don’t expect it to grow by that much on a yearly basis. And even if we did, we would still benefit from the savings in maintenance window time, network traffic, and drive space now.

        Reply
        • Wow! Okay, great – if your database doesn’t grow 15% in a year, then sure, go for it. Just be careful with that advice, of course – as you move to other companies, you probably won’t ever get a job that luxurious. Most of us have to deal with rapidly growing databases, especially in this hyped-up age of Big Data.

          Thanks for stopping by!

          Reply
          • I’ve actually worked with dozens of companies.
            No, I do not regularly shrink databases. My concern is that when a recognized authority such as yourself advises never shrinking databases, that presents potential problems when I suggest that we do so. So a small caveat explaining the rare times when it is appropriate to shrink a database would be a welcome addition to the article.

  • Yes: If you have to shrink your databases frequently then you have an underlying issue that should be addressed and you are just putting a band aid on it.

    But I believe there are acceptable applications of the shrink feature. What about when a process to trim a table fails and goes unnoticed for months and you end up with a 400 GB table that has 95% worthless data? One-time shrinks seem to be ok when they are being used to resolve a rare failure.

    Reply
  • Hi Brent, I just have a question. We recently purged 1.5 million records from a table. The current allocated space for the related filegroup is 700 GB, but the space used is 300 GB. Should I perform a shrink in this particular case to reduce the allocated space/disk size? How can we do it? It is an .ndf file, and I have never shrunk an .ndf file. Thanks in advance!
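
    From what I can tell, the syntax would be the same for any data file – it goes by the logical file name, not the .mdf/.ndf extension. Something like this, with a placeholder name and target size:

    USE YourDb;
    SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;  -- find the logical name of the .ndf
    DBCC SHRINKFILE (N'YourDb_Data2', 350000);                         -- target in MB; leave headroom above the ~300 GB in use

    Is that the right approach here, or is there a better one?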

    Reply
  • Recently we archived & truncated 10 TB of data from a partitioned table. Now how can I reclaim disk space? Any option apart from data file shrinking?

    Reply
    • Amoi – sure, when you’re doing partitioning, check out filegroups and files. That way, when you archive an old partition, you can simply drop the old filegroup. Presto, instant space reclamation with zero fragmentation.
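
      A rough sketch of the cleanup, once nothing in the partition scheme maps to the old filegroup anymore (all names here are placeholders):

      ALTER DATABASE SalesDb REMOVE FILE SalesDb_2014_Data;   -- the file has to be empty first
      ALTER DATABASE SalesDb REMOVE FILEGROUP FG_2014;         -- then the now-empty filegroup can go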

      Reply
  • Hi Brent, in my case, my backups are taking way too long. Between the 3 DBs that I back up, a full backup takes 16 hours. I am looking at shrinking the DBs so the backups will run faster. Do you suggest something else?

    Reply
    • Phil – yep, because even if they’re 50% empty, an 8-hour-long backup probably isn’t a great idea, either. Start with conventional backup performance tuning, like using compression, writing to a fast target, striping across 4 files, etc.
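
      A rough sketch of what compression plus striping across 4 files looks like (the paths are placeholders):

      BACKUP DATABASE YourDb
      TO  DISK = N'F:\Backups\YourDb_1.bak',
          DISK = N'F:\Backups\YourDb_2.bak',
          DISK = N'F:\Backups\YourDb_3.bak',
          DISK = N'F:\Backups\YourDb_4.bak'
      WITH COMPRESSION, STATS = 5;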

      Reply
  • It’s October 2018 and I’d like to point out how "old-school" just about every DBA thread out there is.
    These days, we are running VMware ESXi on flash storage, where autogrowing a file takes milliseconds and shrinking a database has FAR fewer repercussions than it used to have.

    We are running SQL on serious hardware, the best it’s ever been run on. These issues just no longer exist, aside from the most critically I/O-intensive operations known to man (think managing the Amazon cloud or Azure).

    I run an 871-chassis datacenter. As an Infrastructure Director, we would lose money if these sorts of scare tactics actually affected customers these days. But they don’t.
    What DOES affect customers, though, is the ever-increasing data footprint they have because they have been scared into NOT shrinking their data files.
    Many IT DBAs and infra crews inherit databases which have rarely, if ever, been managed properly. They see a 15 GB database with a 45 GB log file and think "there must be a reason, I better provision for it".
    Unbeknownst to them, some employee without SQL training is running ODBC against this DB and constantly screws up queries that lock up the server, so the log file grows and grows and never gets reclaimed. Furthermore, this may be something done ONCE, 5 years ago…yet the infra guys have to provision for it?

    No.

    Shrink the hell out of it and let time take its course in right-sizing the DB. THEN and only then will you get insight into what processes influence it. With solid-state storage hitting multiple gigabits/sec write rates, autogrowth is a non-issue – trust me, I see it tens, if not hundreds, of times per day.

    This is a dated article (obviously) with dated advice based on "traditional" DBA concepts, all born of a hardware limitation which was very real. It simply does not apply today in anything other than the largest datacenters and the cheapest home labs. "Legit" companies paying SQL prices for these licenses are running them on pretty freaking STOUT hardware these days; it’s free performance per dollar to run SSD vs. SAS drives.

    tl;dr:
    Shrink to your heart’s content, watch files grow for a couple of months, then right-size the DBs based on your workload.
    If autogrowth happens, you have a lot fewer places to look as to why they got so big in the first place. Adjust accordingly.

    This should be what *any* DBA truly recommends. It’s a dynamic environment, yet everyone out there wants to make it static.
    No wonder companies are spending wayyy too much on storage they don’t need, and therefore leaving the data center due to the costs involved.

    Reply
    • Vince – glad to hear you’ve found a solution that works well for you, and thanks for sharing.

      Reply
    • Vince – also, just FYI, might want to be aware that bloggers talk to each other.

      When you go around ranting on different blogs on the same topic, it’s a quick way to get a reputation.

      You seem to have a lot to say – maybe time to start your own blog and share your wisdom?

      Reply
  • […] Shrink the database (Yes, I know this is very bad!) […]

    Reply
  • Brent… I have a database that we need to make static and share out to developers. However, it has 73% free space… but both database shrinking and file shrinking do nothing. Here is an example of the shrink file numbers: Current Size 29482984, Minimum Size 288. Why is it not letting me shrink the file/database?

    Reply
  • I have a situation with database size. During implementation of a new client’s data we import years of data to staging tables. Then that data goes into production tables. Our staging tables store much more data than we actually import, but we need to store the data as it comes from the client initially. After the data is saved to prod tables we can remove the staging data. Unfortunately, we need to keep the original data around for 3 months while we finish the implementation. Once we go live we can purge the old staging data. During the staging process the DB can grow to more than 2-3 times its normal running size. Changing the application at this point is a major undertaking. Once we purge the staging data we have a database that is huge (2 TB) while its normal size would be about 750 GB. We have over 400 DBs. Would this be a situation where shrinking would be an acceptable exception?

    Reply
  • […] Before you get to step 1, convince yourself that you really have to shrink your database. I’ll let Brent Ozar (blog|twitter) tell you to Stop Shrinking Your Database Files. Seriously. Now. […]

    Reply
  • The very first link "explaining why" is broken. Although I agree with you 100%, it’s kind of bad for morale.

    Reply
  • Let’s talk about log files.

    With autogrow set, our log files grow quite large, especially at the end of the month when a lot of transactions are done in a short period of time.
    I do have a log backup scheduled every 15 minutes, but sometimes it is not enough to keep up with the overload of data movement.
    So … the log files grow a lot, and even after log backups and full backups they stay at the same size unless shrunk.
    With over 100 databases on the server, and not all log files growing at the same time, disk space sometimes runs low.
    What’s your opinion on this?

    Thanks

    Reply
    • If it keeps happening, sounds like you need more disk space.
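
      It’s also worth checking what’s keeping those logs from clearing; a quick sketch:

      SELECT name, log_reuse_wait_desc
      FROM sys.databases
      ORDER BY name;   -- LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, etc. explain why a log can't be reused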

      Reply
      • Yes … more space is a solution, but is it the best one? 🙂
        I am going to put some numbers on my situation.
        To try to explain it better:
        500 GB drive.
        Let’s say I have 5 databases with their log files (log files on one drive, data files on another).
        All the log files average around 50 GB (not bad so far): 250 GB = 50% of the drive.
        Now I have a log file that in the first week of the month grows to 150 GB (now my drive is at 350 GB).
        The activity then goes back to normal (about 50 GB of log use) because the major transactions are no longer running (UNTIL the next month).
        But because the file has grown to 150 GB, it stays at 150 GB even if only 50 GB is in use.
        Then at the end of the month I have another log file that grows to 175 GB because of major transactions. This brings my drive to 475 GB (close to full, not counting the other log files), and like the first log file, after the major transactions are done it goes back to about 50 GB of daily flow.
        I am keeping this simple, but in reality I am dealing with 1 TB drives for my log files when I could work with 500-600 GB drives, because the logs grow, just not at the same time.
        And this is for only one server.
        I have 15 servers to maintain 🙂

        Is shrinking log files in this situation acceptable?

        Reply
        • Gianfranco – if performance doesn’t matter to you or your users, sure. Hope that helps! Take care.

          Reply
          • Performance does matter.
            I shrink the log file during off-hours.
            But I do understand your point that when the log needs to grow back again, it will impact performance.

            Thanks for your info and time.
            Great work, by the way.

  • Hi, my database size is 170 GB, and the transaction log is 300 GB and growing fast. This is because it does a lot of transactions every few minutes, collecting data and crunching values.

    Can I shrink just the log file without any major degradation in performance?

    Reply
