
I had sworn to myself that if I saw one more helpful article about how to shrink your SQL Server database files with DBCC SHRINKFILE or how to back up your log with TRUNCATE_ONLY, I was going to write a rant about it.

Epic Advice Fail

SQL Server Magazine just tweeted about their latest article, a reader-submitted solution on how to shrink your database files with ease.

AAAAAAAAAAARGH.

To make matters worse, this particular article is by a Microsoft DBA and doesn’t include a single word about the problems involved with shrinking your database files.

AAAAAAAAAAARGH.

Don’t shrink your database files just to free up drive space.  Stop.  It’s an unbelievably, disgustingly, repulsively bad idea.  Your disk drive space is for files, not for ornamentation.  You don’t get bonused based on the amount of free space on your drives.  Empty files don’t take longer to back up.  And so help me, if you find yourself shrinking databases so often that you have to automate it, you need to cut up your DBA card and reconsider your choice of career.

I’m not going to reinvent the wheel by telling you why.  Instead, I’m going to point to half a dozen posts explaining why this advice is just a flat out epic fail:

I feel bad going nuclear on this article, but I’m not just venting about the author.  This kind of advice shouldn’t clear any kind of SQL Server editorial team either.

Coming next month: “How to Reduce Your Backup Times with the Truncate Table Command!”

  1. You forgot this one, where I complained about the exact same thing you do: people who show people how to shrink their files but conveniently forget to tell them why it’s bad and why it should be reserved for emergencies only:

    Oh, the horror! Please stop telling people they should shrink their log files!

    http://sqlblog.com/blogs/aaron_bertrand/archive/2009/07/27/oh-the-horror-please-stop-telling-people-they-should-shrink-their-log-files.aspx

    • There is nothing wrong with a shrink provided you know what it does and mitigate the effects. If you shrink – rebuild the indexes afterwards. Fragmentation problem solved. This isn’t rocket science…
      People who say don’t shrink because it fragments the table are really making hay out of nothing. It’s not a big deal.

      • Zack – so if you shrink the database and remove all free space, and then you rebuild the indexes afterwards, do you understand that that re-grows the data file right back out to make space for the newly rebuilt indexes?
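
        (For anyone who wants to see this for themselves, a minimal sketch of the shrink-then-rebuild cycle – the database, file, and table names here are hypothetical, so try it only against a scratch database:)

        -- Check file sizes before (size is reported in 8KB pages)
        SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;

        DBCC SHRINKFILE (DemoDB_data, 1);         -- squeeze out all the free space

        ALTER INDEX ALL ON dbo.BigTable REBUILD;  -- the rebuild needs room for a full copy of each index...

        -- ...so the data file grows right back out again
        SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;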

        • I agree with Zack. Say you’ve got a file that’s incredibly oversized and has 100 GB of free space. You shrink it and rebuild indexes which causes it to grow by another 5 GB. Who cares? I just reclaimed 95 GB and possibly prevented a wake-up call in the middle of the night due to low free space on the disk.

          • Chuck – I’d suggest looking at the root cause of why the file was oversized. Perhaps there’s a staging process, for example, that regularly loads and transforms large amounts of data. Temporarily freeing up the space won’t fix it – and in fact, can make things worse if you think the space problem has been fixed when it’s not.

          • If you know your application, and know that the file will never grow anywhere near that large again, it doesn’t make any difference why it was that large in the first place. The point is it’s oversized, wasting space, and that’s a bad thing.

          • Chuck – if you knew your application and controlled the database, then it wouldn’t get 100GB too large in the first place. ZING.

          • In a perfect world every data file would consume every spare % of disk and usage would be monitored completely by DBAs, but this is rarely the case, at least in my experience. Management want failsafes; they want that 20%, and they want the infra guys to watch that and obviously the DBAs to capacity plan up to this point.

            I spent the first 3 years of my DBA career with this “never ever shrink” philosophy, and if I hadn’t had it drilled into my head so much I would have perhaps had a rethink – I think it’s a little too much of a catch-all phrase amongst the DBA community. My two cents.

          • Agree completely Darrell. I think this is one of those rules that is a good rule of thumb, but has exceptions.

          • I am seeing discussion of the risks of a SHRINKDATABASE, and I see recent comments. Have there been developments in SQL Server 2014 that mitigate the risks of a shrink? I have a database that is being split into two pieces, and the original is 500GB. I will have to prune some old pieces out and move a lot to another. It seems to me that is a legitimate reason for a shrink.

          • I meant, physical disks, so you CAN’T expand them – my bad.

        • How to reduce the transaction log file size without shrinking in SQL Server?

          • Allan – why do you need to reduce the transaction log size? How big is the database, how big is the log file, and what’s the size of the largest table in the database?

          • The database size is 250GB and the Log file is 60GB. The largest table size is 8GB.

          • Allan – I’d leave it. It’s not unusual for a log file to be 20-25% the size of the database.

          • But I don’t have much disk space left because of that. I need the space in that disk. I have a disk space issue on that particular server and there is not much I can do about that. I need to free at least 30GB in that specific drive. Any thoughts?

          • Get more disk space. Disk space is not that expensive, and if you have even just one out-of-control transaction on that server, you’ll fill up that 60GB anyway.

          • I think you have bigger fish to fry if you are tight on disk. I’m assuming these are physical disks, so you can expand them? If the trans log grew to that size, it will likely do so again. Perhaps it did so due to maintenance, etc.

            As an aside, being 60GB you might be well suited to check the VLF count to see how fragmented the trans log is. That can have all sorts of adverse impacts (due to the default growth rates, which suck). I have had 60GB trans logs with over 1,000 VLFs; after properly adjusting them (8GB VLFs) and reducing their size to, say, 32GB/40GB – they seemed to stabilize. Google Kimberly Tripp’s article on the topic of VLFs.
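
            (A quick way to check that VLF count – a sketch; DBCC LOGINFO is undocumented but long-standing, and SQL Server 2016 SP2 and later also expose sys.dm_db_log_info:)

            DBCC LOGINFO;   -- one row per VLF in the current database's log; a huge row count means heavy internal log fragmentation

            -- On newer builds, roughly the same information:
            SELECT COUNT(*) AS vlf_count FROM sys.dm_db_log_info(DB_ID());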

          • Brent : “Allan – I’d leave it. It’s not unusual for a log file to be 20-25% the size of the database.”

            And if the log file is 110% of the database?

          • Then you officially suck as a database administrator. Turn in your DBA card, and report to the nearest authorities.

            No, seriously, that’s the time where you fix the root cause first, and then shrink the log file once – and exactly once.

      • Zack – the point of this entire post is that database shrinkage should be the exception – the rare exception – and not something you are doing frequently or automating. If you are shrinking database files frequently, then you have bigger problems than your shrink operations, or your applications truly need the space and you need to get the bean counters to release a few so you can grow out your disk accordingly. You are literally wasting time cleaning up the symptoms rather than tackling the problem.

        It would be like constantly bailing out a toilet tank because it’s always overflowing, completely ignoring the fact that the fill valve is shot and all you need to do is just fix it or get a bigger tank because you actually need the extra water.

        You have bigger, more important things to do than reclaim space.

        • So, I have got the scenario in which I need to find a way to automate SHRINKFILE, or at least, to automate the effect.

          I have a 1.2TB database occupying roughly 1.5TB of disk space on Production across 8 data and 8 index files. I also have three non-Production databases – Training, Development and Test – which reside together on a separate DB server.

          We copy from Production to all three other databases roughly once every two months to limit drift and allow us to test upgrades. The Test database is left functionally unchanged but the other two databases have a subset of their data “anonymized” and have most of the rest deleted using an automated script I wrote that copies the data to be kept to a temporary table, truncates the original, and then copies the data back again.

          The problem is that we have limited disk space on that server (roughly enough for 2.5 copies of Production) and we cannot get more.

          Therefore, we restore the Test and Training databases and then I run my anonymizer script against Training. Finally, I have to shrink the 8 data files and 8 index files assigned to the Training database so that there is space for the Development database. The index files are each going from 70GB to 12GB and take roughly 30 minutes and the data files are each going from 148GB to 24GB (roughly 70-80% filled) and take something over 6 hours each (I have no idea why it takes so much longer). This is sucking up my weekends and I am not even trying to fix any fragmentation caused since these are very underused normally.

          Is there a way to automate this? Perhaps by moving the data and indices to new data and index files that are the right size in the first place.

          I had thought that doing a backup and restoring the backup would work but our hosting service says that the backup is storing the physical layout of the data and will recreate that verbatim.

          However, I found a bug a little while ago when a sub-query was pulling a single record but the Production and Test versions were pulling different records. I worked out that the Test version had to be storing the data in a different physical order on the disk from Production. (FWIW, I fixed the bug by building an index on the column being returned, since the query plan automatically used the index rather than the heap and so this imposed a consistent sort order – we had no control over the application code and so I couldn’t just add an ORDER BY clause).

          This seems to suggest that the data and/or indexes may actually be restored into logical order although presumably allocating the full specified size. If this were true, the DBCC SHRINKFILE(xxx, 12000, TRUNCATEONLY) command would change the allocated size, but probably in under 5 minutes.

          Unfortunately, I cannot test this at the moment and, if this will not resolve the problem, I’ll need to work out how to add another filegroup, move the remaining data from the old one to the new one and then remove the original filegroup, which will hopefully be faster than the current SHRINKFILE I am running.

          Any thoughts (other than tearing up my DBA card)?

          • Simon – if you can’t test things, you’ve got a bigger problem. You need space for development and testing. It’s time to have an honest talk with management about the space and capacity you need in order to do your job.

          • I couldn’t test the backup/restore when I wrote it because I was waiting for the first (of the 8) Data Files to shrink from 148GB down to 24GB… it finally finished a couple of hours later and I tried doing the backup/restore.

            Sure enough, a nice small backup file but, sadly, sure enough the restore not only still had the full disk space allocated but the next SHRINK took just as long as the previous one.

            I think the only choice is to copy just the data that I want to keep from Production (or, more likely, the Report Server, which is a straight copy of Production each day and is very underused) to a secondary database on the same server and then I will be able to back that up and restore it as I like. Even on Production, doing this with copious (nolock) hints outside production hours should be fast enough to let me get some sleep on weekends!

            Thanks,


    • Don’t shoot me. I have deleted 1000s of tables, unused indexes and about 80% of the data in the remaining tables. Therefore I am going to shrink the database down before I archive it to tape and cloud.

  2. Hey it could be worse. The article could be telling people how to free up space by deleting the transaction log.

  3. Sigh. That’s all I can say. Time to make my new moniker a reality, you all know my ‘involuntary DBA’ – now I give you ‘incompetent DBA’. Sigh.

  4. Unbelievable that editorial control wasn’t exercised there.

  5. These articles make me shrink in terror! Maybe in the next edition Microsoft can deprecate the shrinkfile capability.

  6. Dude – I couldn’t even deprecate auto-shrink, and I used to own the Storage Engine before I left MS. Don’t hold your breath for that one.

  7. The real problem is not the *shrink itself*, but HOW the SQL shrink command works (move the very last single page to the very first available open slot, repeat), causing table and index fragmentation.

    I still think someone needs to write a “smart shrink” utility that attempts to free space from the end of a SQL data file while *keeping it in the proper page order*, thus avoiding fragmentation (as much as possible). I’m not sure whether a third-party utility could get access to that level of internal data file detail, or whether it would have to come from the MS SQL team.

    Sure, still might have some potential for mis-use, or over-use, but at least if we do run into that occasional unexpected disk space crunch, we’d have an option that doesn’t obliterate our performance. (I know, proper planning, blah blah blah. Sometimes stuff happens.)

    • Brad – no, that doesn’t completely solve the problem either. When you shrink a file, it’s usually because the database NEEDED that space at some point. When the database needs it again, you incur a serious performance penalty while the OS reallocates the disk space. Instant File Initialization mitigates that for data files when it’s set up correctly, but not for log files. Even with IFI, the file’s newly grown portion is rarely contiguous with the rest of the file. Presto, fragmentation. Combine this with SQL Server’s default growth settings of 1MB for data files, and you’ve got a fragmentation disaster.

      Since there’s no way to fix fragmented data files while the database is online (only with backup/restore or disk defrag utilities while SQL is offline, and those options still suck), the DBA has shot himself in the foot.

      • Point taken, Brent, and I’m aware of all those issues.

        Still doesn’t mean that we don’t encounter this need all the time: the application team that finally purges the 10-year old data we’ve been bugging them about; the rogue developer that tries to copy a 300m row table to “see if he can reduce blocking”, the 700GB database that needs to be chopped down to 50GB after restoring to the DEV environment, the pager going off at 2:30 in the morning because there is only 9.9MB free on your data drive; the analysis server that alternates huge jobs between two huge databases, but with only enough “working space” to grow one at a time; the “archive only” database that has 250GB of internal free space, and the business unit that gets charged by the GB for storage, etc…

        None of these are “good”, certainly, but the fact is that these are daily realities, at least in my environment. If we had a “better” shrink utility, we *at least* wouldn’t have to deal with the shrink-increases-fragmentation issue, although the other items you mention are still something to remain aware of.

        I guess I’m really advocating (mostly) the same thing you are: DBA knowledge and awareness of the problems/issues with shrinking, and a careful and selective application of it only when needed. Absolutely, the article you link does *not* contain the level of awareness and warning that should be included.

        As much as I respect Paul R, his “never, ever, ever shrink data files, no matter what, ever” philosophy, while laudable, is simply not practical. What are you going to do, tell your paying client, “Sorry, I know you are running out of space for your database BigData, but I can’t run a shrink on BigData_Archive to keep your server up and running. The drive will fill up at 2pm, and your website will be down. Thank you for your business, your invoice is in the mail.”

        • Brad – again, if you’re doing daily shrinks, I would argue that you need to address the problem, not the symptom.

          • Sure, I’ll agree to that. How about, “Use data shrinks only in highly exceptional circumstances, only after considering other better options, and only with full awareness of the problems it can cause.”

  8. on editorial control… I just took a look at the masthead and of about 50 names, I count 26 names alongside titles that include some form of “editor” (one of those names appears as an author of one of the posts you list).

    I don’t know much about the journalism industry but it’s hard to figure out who dropped the ball.

  9. Brad

    In your Scenario:

    “the application team that finally purges the 10-year old data we’ve been bugging them about; the rogue developer that tries to copy a 300m row table to “see if he can reduce blocking”, the 700GB database that needs to be chopped down to 50GB after restoring to the DEV environment…”

    In (most) situations like this something “major” has happened to reduce the used database space so much.

    If you have a database that goes from 700 GB of data down to 50 GB of data, wouldn’t you rather create a shiny new database (sized appropriately) and just transfer the 50 GB of data there, rather than delete/remove 650 GB of data and shrink (and fragment) your database?

  10. I don’t say never, ever do a shrink – I say don’t do *autoshrink* ever, and try as hard as you can to avoid doing a shrink unless there’s an overriding reason to do it (e.g. deleting 500GB from a 1TB database and then making it read-only forever). You can do a shrink by rebuilding indexes into a new filegroup and then dropping the old one (see the sketch after this comment) – or if you have to do a shrink, make sure to tidy up afterwards using ALTER INDEX … REORGANIZE.

    My blog post at http://www.sqlskills.com/BLOGS/PAUL/post/Why-you-should-not-shrink-your-data-files.aspx clearly says all of this – I think you maybe misread autoshrink for shrink.

    It’s just not possible to write a smart-shrink outside of the Engine. I had a design for a smart shrink for 2005 but it was canned as not being important enough. It won’t ever happen IMHO.

    Thanks
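
    (A rough sketch of the rebuild-into-a-new-filegroup approach Paul describes above – the filegroup, file, table, and index names are hypothetical, and every index on the old filegroup needs the same treatment before the old files can be removed:)

    ALTER DATABASE DemoDB ADD FILEGROUP FG_New;
    ALTER DATABASE DemoDB ADD FILE
        (NAME = DemoDB_new_data, FILENAME = 'D:\Data\DemoDB_new_data.ndf', SIZE = 50GB)
        TO FILEGROUP FG_New;

    -- Recreating the clustered index on the new filegroup moves the table itself
    CREATE UNIQUE CLUSTERED INDEX CX_BigTable ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON) ON FG_New;

    -- Once every index has been moved and the old (non-PRIMARY) filegroup is empty:
    ALTER DATABASE DemoDB REMOVE FILE DemoDB_old_data;
    ALTER DATABASE DemoDB REMOVE FILEGROUP FG_Old;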

  11. I’m glad that you posted this. Being a relatively new DBA, I was unaware of what I was doing. I perform reorganize index operations on one of my production servers every week and my largest database always experiences a lot of transaction log growth. To mitigate the problem, I had to constantly shrink the log file and back up with truncate only to save disk space. This was necessary because my company didn’t have any other place to store my SQL backups, so they were being stored locally and I risked consuming all of the disk space. Luckily, we just bought a SAN and I’ve got all the disk space I need. So, I’ll let my log file grow from now on and I’ll stop truncating.

  12. As an Oracle DBA, resizing tablespaces is something that is normal to do as an administrator. And I think managing the allocated space for a database is something that should be done.

    I’m not suggesting that this should be a normal, automatic process… I would just expect there to be a better tool for resizing allocation for a data file in case it had to be managed. For example, I began to support SQL Server at a company I worked for in the past. A database was in Full recovery model but was not being backed up. The transaction log file grew to be about 100 times the size of the database. What would you do in this case, if shrinking the file is such a bad practice?

    • Pepe – the problem there isn’t the shrink, it’s NOT BACKING UP YOUR DATABASE, hahaha. First things first – start backing up the database regularly, then shrink the log file once and be done with it. You don’t need a fancypants automated tool to fix the shrinking problem there. You need to be more aware of the very basics, like making sure the databases are being backed up. Concentrating on the shrink part is the symptom – you need to fix the underlying disease.
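
      (A sketch of what that one-time cleanup might look like – the database, file, and path names are all hypothetical:)

      -- 1. Get real backups going first
      BACKUP DATABASE SalesDB TO DISK = 'E:\Backup\SalesDB_full.bak';
      BACKUP LOG SalesDB TO DISK = 'E:\Backup\SalesDB_log.trn';

      -- 2. Now that the log has been backed up, shrink the bloated log file ONCE (target size in MB)
      DBCC SHRINKFILE (SalesDB_log, 4096);

      -- 3. Then schedule BACKUP LOG to run regularly so the file never balloons again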

    • Even if you are running proper daily “full backups”, when a database is in “full recovery mode,” it needs frequent transaction log backups performed. That, or switch it to Simple recovery mode.

      • Agreed. And if you’re doing regular transaction log backups, don’t shrink your transaction log file just to free up space on the drive. Your LDF files are growing because the SQL Server needs the space, and it’ll just keep regrowing the files again and incurring a nasty performance penalty each time.

        • Especially if the log growth is set to the horrible default (10%). That means each time it grows, it costs a little bit more. And all transactions have to wait for it to finish. Enter Jeopardy! theme music.

    • Agent_Pepe, nobody is saying that there aren’t anomalies and that you should never, ever, shrink any file. Your case was definitely an issue, but once the issue is corrected (e.g. start backing up the database and the log correctly, and then adjust the file to an appropriate size), the problem goes away. The main problems highlighted are:

      1. shrinking is often a knee-jerk reaction to something that is actually normally expected of the database (to grow!), and trying to shrink it to gain disk space back temporarily does more harm than good. If your data file is outgrowing the disk you’ve allocated, the solution is simple: get more disk. Obviously there are situations where it may be necessary (including those mentioned by Brad), but shrinking a data file should be an exception and not the rule.

      2. databases should NEVER be set to auto-shrink (I am not sure why that capability even still exists today; too bad Paul didn’t have more pull back in the day)… this also goes for your own home-grown jobs that automate file shrinking, not just the stock checkbox on the database properties dialog.

  13. Wow, usually I enjoy reading your perspective on SQL, but it sounds like you need to take a breather over this.

    In an ideal world, all databases would be backed up properly and this shouldn’t happen. But haven’t you ever been pulled into a situation where the DBA that was previously responsible for a database hasn’t been doing their job? And you told me to back up the database, then shrink the log file. I thought this whole article was about how shrinking files was such a terrible idea.

    I think we agree on the idea that this should not be the solution to a problem. But this is something that has to be done in certain situations. And it doesn’t always work. That is why I think that shrinking data files in SQL Server should go much more smoothly than it currently does.

    • Again, as has been stated several times, NOBODY IS SAYING “NEVER SHRINK FILES!” – they are saying, don’t do this automatically, and only do it when you have an emergency of some kind. The SQLMag article that spurred Brent’s post in the first place suggested implementing a stored procedure that went through all of your databases and ran shrinks against the ones that his logic deemed “could use shrinking.” This is just ludicrous.

      • Understood. Then maybe the title of the article and most of the contents should be rephrased.

        I just haven’t always had luck with the Shrink utility when it HAS to be done, leading me to believe that it doesn’t work like it should. I’m not trying to sound like a jerk here… I always appreciate the material on this blog. I just think that Microsoft should make this work a little better than it does.

    • Right, I’m with Aaron. There’s nothing wrong with shrinking files once. *Once*, when you take over the database if it’s been horribly mismanaged.

      But building an article about how to automate shrinks more often and more regularly, that’s a problem, especially without any mention of the aftereffects. It’s like saying, “Here’s how to cut all the trees down in your yard as fast as possible” without talking about the basics of making sure the tree doesn’t fall on your house, or talking about when you should and shouldn’t cut trees down to begin with.

      • In fact – I would even go as far as to say that in the situation Pepe describes it is better to shrink that log file… ONCE. Get into a backup strategy that actually gives you the benefit of being in full recovery mode, perform your log backups, get the used size of the transaction log down.

        Then shrink the log file to the smallest size possible and then grow it ONCE to the size you believe it needs to be based on your recovery strategy, data volumes and log backup frequency.

        Doing this will help you control your Virtual Log Files which could have a performance impact.

        A log file that is grown in small chunks frequently, or that spent its life in a shrink-grow-shrink-grow relationship with an absent-minded DBA, will tend to have a lot of VLFs, and this may impact performance. I guess less so in more recent versions, but it is still a practice I follow.

        98% of the time when I see shrink being used or suggested, it is for dead wrong scenarios. All too often it is after someone says to back up your log with truncate_only to clean up the log file on a Full Recovery mode database with no log backups… The right answer there is to pick a recovery model that matches your recovery needs and then do backups that match.

        Funny timing, Brent… I just got done banging my head on the desk here due to a new instance I inherited from our sysadmin, involuntary DBA…

  14. Pingback: How to Do BACKUP LOG WITH TRUNCATE_ONLY in SQL Server 2008 | Brent Ozar - SQL Server DBA

  15. Just bumped into this. Love it!! You forgot to mention the fragmentation that is introduced by shrinking the files. That shrink command isn’t moving extents around, it is moving pages. Try running your favorite fragmentation report before and after a file shrink and see what you get.
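
    (For example, a before-and-after check along these lines – a sketch; swap in whatever scan mode and filters you prefer:)

    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;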

  16. What should webmasters do when database size is limited by the hosting service? For example: 120MB before shrinking and 18MB after.

    • A-K – great question! Typically hosting companies will set the growth and max size numbers for a database so that it can’t outgrow your hosting plan limits. Your best bet is to pre-grow the data file to your full size limit and then you’ll never have to worry about growing or shrinking.

  17. Let’s not forget, however, that shrinkdb is a useful tool despite the fact that it is often abused. I use it whenever I migrate databases from production to QA and dev, where they otherwise wouldn’t fit. I’ve also used it after partitioning to reduce the size of backups.

    Also, it seems that this whole concept of auto-growing files until you fill the disk and run out of room is strange to Oracle. The Oracle guys at work don’t understand it, since their best practice seems to be to allocate all the space that you could possibly want for the file and lock it down from any autogrowth whatsoever. Obviously this would mean you wouldn’t shrink either. I’ve used this strategy often; it’s ideal for tempdb.

    • Sean – I agree with you on most of it, but I’m curious about this one:

      “I’ve also used it after partitioning to reduce the size of backups.”

      I’m curious – do you use any backup compression products like LiteSpeed? If so, they don’t back up empty space in the database, so it won’t matter whether the database is shrunk or not – the backup will still be the same size. I haven’t done a native backup of a large database in quite a while, so I’m not even sure if this is true with native backups either.

      • Brent, take a look at my response to GilaMonster’s post below. In this case we are talking about optimizing a database for network copying and restoring remotely. After some tables were ‘partitioned’ (i.e. relocated) there was space in the database left on mixed data pages. Shrinkdb does fix this and will make for a smaller backup.

        I can’t get my infrastructure guy to spring for LiteSpeed and 2008 is still a year away for us, so no, I don’t get the benefit of streaming compression during the backups. Instead there are ugly scripts that gzip them after the backups are complete. =(

    • How does shrinking reduce the size of backups? A backup only backs up used space, free space within the data file is not included in the backups.

      • I knew this question was next. =) In my case, I had migrated some tables to a new database, replaced them with a view that points to the new DB, and then ran shrinkdb. It was useful in this case to rearrange the data pages after large deletes. This turned into a sort of custom partitioning scheme on a 500 GB DB that is backed up and copied to a test environment periodically. This provided a smaller DB footprint, and yes, a smaller backup that was easier to copy over the network.

        You are correct in that a backup does not include the empty space at the end of the file. That is true. But after large deletes where free space is now scattered over mixed data pages, only a shrinkdb will free this space. And without it, yes, your backup is larger.

        • Mixed data pages or mixed extents?

          It’s not just space at the end of the file that will be left out, it’s unused extents anywhere in the file. Only way you’ll get space saving in a backup after a shrink is if you have lots and lots and lots of partially allocated extents (blocks of 8 pages with < 8 pages used by the table).

          Between the shrink and the index rebuild that you need to do after, you're probably spending more time doing the backup than if you just backed up normally.

  18. Sean – what do you mean by ‘mixed data pages’? Shrink does not *ever* change the contents of pages that it’s moving – except in the single case in 2005 onwards where it will compact LOB column storage space potentially. It will not reclaim space from data pages that have free space on them. Trust me on this – I wrote the DBCC code.

    Do you mean that it reclaims empty pages from extents, thus compacting the table/index? Yes, it will do this, but at the expense of causing horrible index fragmentation.

    To address another point – a backup not only does not include empty space at the end of the file, it also does not include unallocated extents spread throughout the file. An extent is only included in a backup if one or more pages in the extent are allocated.

    Thanks

  19. Hi Brent

    OK, I’m convinced. I’ve removed all the Shrink database statements from all my maintenance plans. And when I check the DMV, I see that the indexes are indeed horribly fragmented.

    Now I have a problem. My senior DBA says I should leave well alone – no-one is complaining and if it ain’t broke, I shouldn’t try to fix it. Plus, unless Microsoft says that Shrink Database is a bad idea, he doesn’t believe it. Can you point me to an MSDN source to back yourself up?

  20. Pingback: Resolving Very Large MSDB – JohnSterrett.com

  21. Pingback: Making the best use out of Mainenance Plans on SQL Server « Raymund Macaalay's Dev Blog

  22. Pingback: Minimize your TFS Backup size « FreeToDev

  23. I just had another vendor that I work with suggest that I really should have a Sunday maintenance plan that will go out there and shrink the database.

    That being said, I am in the process of shrinking the database. However, it’s for the only valid reason I know: to reclaim space that will never be used again. I have enabled data compression on the tables and have freed up 84 GB… 120 GB down to 40 GB being used. However, now I’m wondering how long it’s going to take to finish the shrink process, currently at the 3.5 hour mark. Thank goodness this is a test system. I saw a similar post out there where it took someone else 1 day, 20 hours and some odd minutes.

  24. Pingback: SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance « Journey to SQL Authority with Pinal Dave

  25. Brent, when are you going to write the “How to Reduce Your Backup Times with the Truncate Table Command!”. I bet a lot of people will love to read that one 😀

  26. I routinely have to shrink my database in order to sign in to a specific program [i.e. Oracle Primavera P6]. So, I was scouring the internet to find ways to automate this process when I ran across your article. If I don’t shrink my database every two to three days, I literally can’t sign in to the program. Now, I’m very confused.

    • Carter – what’s the problem that happens when you can’t sign in? I’m guessing you’ve got the database in full recovery mode and you’re not doing transaction log backups, so the log file fills up and the database stops. In that case, you’d either want to start doing transaction log backups or put the database in simple mode.
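
      (In other words, one of these two one-time fixes rather than a scheduled shrink – database and path names are hypothetical:)

      -- Option 1: stay in full recovery, but actually back up the log on a schedule
      BACKUP LOG P6DB TO DISK = 'E:\Backup\P6DB_log.trn';

      -- Option 2: if point-in-time recovery isn't needed, switch to simple recovery
      ALTER DATABASE P6DB SET RECOVERY SIMPLE;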

  27. My knowledge of SQL Server is scant. However, I am working on a data warehouse where all the data will be replaced every night (long story, not relevant to this topic).

    I asked our dba if this will cause the database to bloat, like MS Access would. He suggested that it would, in which case shrinking the database would have merit.

    What is your opinion on this matter?

    • Hi, Dan. That’s a totally valid question, but the data churn rate won’t affect this issue. If you need X amount of space every night, then you want to pre-grow the data and log files to the correct size, then have them grow in appropriately sized increments, and leave ’em there. The bigger problem for SQL Server is the repeated growth and shrink.
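
      (A sketch of that kind of one-time pre-sizing – names and numbers are hypothetical; ALTER DATABASE … MODIFY FILE takes one property at a time, hence the separate statements:)

      -- Pre-grow the files once to the size the nightly load needs
      ALTER DATABASE DWStage MODIFY FILE (NAME = DWStage_data, SIZE = 200GB);
      ALTER DATABASE DWStage MODIFY FILE (NAME = DWStage_log,  SIZE = 50GB);

      -- And grow in sensible fixed increments instead of the tiny defaults
      ALTER DATABASE DWStage MODIFY FILE (NAME = DWStage_data, FILEGROWTH = 4GB);
      ALTER DATABASE DWStage MODIFY FILE (NAME = DWStage_log,  FILEGROWTH = 4GB);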

  28. It seems to me the SQL DB structure should be re-engineered to grow and shrink itself properly and efficiently. Perhaps in another couple of decades Microsoft will figure that out.

  29. Here’s a thought: if shrinking database files is such a bad idea, then Microsoft should get rid of the SHRINKFILE command.
    There are best practices and then there is the real world.

    KPB

    • Kim – that’s like saying since people keep sniffing glue, we should get rid of glue. Glue’s a good tool if you use it for the right things, just like shrinking files is.

  30. I respectfully disagree.
    Valid scenarios exist where it is desirable to reclaim disk space. For example, when a database has been purged of a large amount of data, it is silly to keep all that expensive enterprise storage sitting under an empty database file.

    • Juel – absolutely, if it’s a one-time thing like purging data once, that makes sense. But if it’s an annual purge (say a data warehouse that only keeps 5 years of data, and you delete the 6th once per year) then why shrink it every year – when you know the data will just grow back?

  31. How about databases stored on a big SAN? Fragmentation is not a real problem… am I right?

  32. OK, the reason I say AM I RIGHT is because I am not sure… but… as far as I know, on a SAN the data is spread all over the disks and we don’t really care where the data blocks are. Sometimes we find blocks of data files and blocks of log files on the same spindle. Brent… this is all I know about data on SANs.

  33. Hi Brent… I agree the shrinkfile is abused and inappropriately recommended.

    I am currently resizing my log files to reduce internal log fragmentation. During the process I do use shrinkfile to “purge” the fragmented VLFs and then grow the file back to an appropriate size. Would you consider this an appropriate use? If not are there other options to this sort of maintenance?

    • Nick – yep, that’s a great one-time use for shrinking. Of course, you don’t want to regularly automate this – if you find yourself doing it repeatedly, size the logs correctly from the start, and leave ’em there. Hope that helps!

  34. Dear Mr Brent,
    When I shrank my database, my computer ran out of memory and the shrink failed with an error. After that, I ran the shrink again and it succeeded. But now, when my application connects to this DB, it runs too slowly, and my DB is growing very fast.
    Can I rebuild the indexes on this DB?

    I have another DB where I compressed both the tables and the indexes. Can I rebuild the indexes on that DB too? Will it grow too big again after I rebuild the indexes?

    Thanks

  35. If you are strictly talking production, I agree. However, we have roughly six non-production replicas of production and there is a shit-ton of logging data that accounts for about 80% of the size of the database. Therefore I do find shrinking a fact of life in order to conserve some space and perhaps improve the performance of those environments on lesser hardware.

  36. I completely agree; I am not a fan of auto-shrink. That being said, I’m currently shrinking an entire database right now. The situation is that the database is 1.9TB and, when converting from 2005 to 2008, we are using page compression for the majority of the database. We will shrink after the database has been compressed and then we will reindex everything that we break in the same stint. This is a situation I find shrink useful for. Overall, the biggest fight I’ve had is developer-pushed shrinks and then complaints about indexes (unique situation). This has been resolved and the healing can begin.

  37. Hello, I’m new to SQL Server and am very intrigued by all I’ve read here. I’m building an app that uses SQL Server Compact 3.5 as its DB engine. Because of the 4GB size limitation, I had planned to periodically purge older data, followed by the shrink operation described here: http://msdn.microsoft.com/en-us/library/ms171752(v=sql.100).aspx.

    Excuse my ignorance, but does everything I’ve read here (particularly the index fragmentation effect of shrinking) also apply to SqlServerCE? If so, since there is no ‘defrag’ method available for SqlServerCE, what practical options exist to manage the database size?

    • Vic – hmmm, I’ll be honest – I haven’t done any SQL CE work in almost a decade. Is there a reason you’re not using Express Edition instead?

      • Thanks for your reply, Brent.
        Our application is designed to interface with and collect data from an external USB device and may be run on mobile platforms. Because of this, SQL CE seemed appropriate, being lightweight and supporting a simple private deployment. I mentioned the 4GB DB limit, but in reality we’re looking to keep it well under 200MB by purging old data as it becomes stale.

        Since Sql CE is a file-based DB that doesn’t support clustered indexes, I’m in doubt as to whether the above discussion really applies here. At the same time, performance degradation is a top concern. Hoping maybe Paul or others could shed some light?

  38. I just had a visit from a supplier that created three user databases on my server. It’s really just small change – each DB is hardly a GB – but because of wrong settings (that I should have kept an eye on, but didn’t) the log blew up to almost 1 GB (only 14 MB used). Three times the size of the data file. I do have enough room on the disk for the transaction log (the disk is 3 GB), but I like to keep my databases properly sized.

    I’ll probably leave it this way, but would this be a situation where you could use shrinking? Or is it still better to setup new datafiles that are properly sized and copy the data? Btw, I never shrink databases, and I do have weekly maintenance tasks to rebuild and/or reorganize indexes for all user databases.

  39. I have an interesting situation regarding shrinking a data file. I ran into a situation where we hit the maximum value on our primary key field, which was an auto-incrementing value of the int datatype. To address the issue, I created a second table with the same basic definition except now the primary key field is a bigint. I then issued an insert into … select command to move the data from old table to new table, using with tablock to avoid growing the transaction log (the database is in simple recovery mode). Once I got the data in the new table, I was able to drop the old table. the size of the old table was 37GB. So, in the process of temporarily duplicating the old table, I added an extra 37GB to the data file. Once the new table was in production, I dropped the old table but this, of course, did not release the 37GB back to the OS. While I realize the bigger mdf file doesn’t affect backups I would like to release the 37GB back to the OS, as I don’t necessarily have an infinite amount of space on disk and I have other databases using that disk that are also growing.

    What’s the best practice here?

    • Dennis – obviously the best practice would have been to avoid the problem initially, but it’s too late for that now. 😉 When repairing something, we sometimes have to make a one time exception.
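
      (In a repair situation like that, the one-time sequence might look roughly like this – file, table, and target sizes are hypothetical, and it’s worth leaving working free space rather than shrinking to the minimum:)

      DBCC SHRINKFILE (BigDB_data, 60000);        -- one-time shrink to a target size in MB, not to zero
      ALTER INDEX ALL ON dbo.Orders REORGANIZE;   -- then tidy up the fragmentation the shrink just caused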

  40. Pingback: Stop Shrinking Your Database Files. Seriously. Now. | Brent Ozar Unlimited | TechnoSwimmer | Jesus Christ | Loong | IT

  41. Pingback: Reduce the size of the SharePoint content database without shrinking it. | Sharepointkaos

  42. Pingback: Building The Redneck Treadmill Desk - RealSQLGuy

  43. Pingback: Building The Redneck Treadmill Desk - SQL Server - SQL Server - Toad World

  44. This article states that shrinking the database will resolve index fragmentation, which was covered today in a WebEx seminar:
    http://rusanu.com/2010/03/31/the-bizzaro-guide-to-sql-server-performance/
    In your article I believe I am getting a different message.

    Is it the version of SQL Server that makes the difference, or am I missing this altogether?

  45. Pingback: Sometimes It’s A Dirty Job – #TSQL2sDAY #50

  46. Hi Brent,

    I have a large database that has been ignored for a long time, and one log table has grown to consume 95% of the space used by the data file, which is 120GB. After asking around, I have found that we only need to keep the data for 3 months. Removing the old data has given me almost 100GB of free space. I have implemented a process in my maintenance plan to trim this log file every week.

    Looking at my backups I see they are only 10GB (Compressed backups). Would I be correct in assuming the cleanest and quickest (least amount of downtime) course of action would be to drop the database and then restore the backup file and resize appropriately?

    • James – this is kinda beyond the scope of something I can answer quickly in a blog post comment, but no, restoring from backup doesn’t sound like a good idea. It sounds like you’ve got a few different problems in there (data file too big, log file too big, possible internal & external fragmentation, etc) and I can’t really do justice to that in a blog comment.

      You’re better off posting this in a Q&A forum like http://dba.stackexchange.com or http://sqlservercentral.com with as many specifics as you can, and the problem that you’re trying to solve. Thanks!

    • I often restore production into non-production environments for our developers, upon which I truncate a very large log table as well. A shrink on the data file should do the trick just fine for you – if you always have a rolling 3-month volume of logging data, you should not have to do this more than the first time, since the size of that table should be relatively static. I would personally do the purging in smaller, more frequent chunks, however – that will ensure you don’t bloat your transaction log during a very short window, as probably happens now.

      Note: if you have Enterprise, you might look into getting an estimate on compression to further reduce the size of your table (indexes) and database – but that’s a different topic and has some caveats.

  47. “I have implemented a process in my maintenance plan to trim this log file every week.”

    By log file I mean log table. I do not shrink my log files.

  48. Ok, so I now know that I should avoid shrinking databases and files if at all possible. It’s not possible. I have a SQL server supporting a SharePoint environment and we’ve implemented BLOB Storage. Upon updating our EBS provider software we had to import over 1TB of data into SQL and then externalize it back to disk. So I am trying to free up over 1TB of disk so I can shrink my Windows disk volumes and return the disk space to our VMWare admins.
    So what is the best procedure to safeguard against the potential ills and enable a server admin to reclaim a bunch of disk space?

    • Tom – in that case, I’d check SharePoint’s best practices since that’s really more of a SharePoint question. Generally speaking, though, you can’t shrink Windows disk volumes online, so you’re talking about an ugly outage.

  49. Pingback: Tipps & Tricks: Datenbanken richtig verkleinern | Robert Panther on SQL Server

  50. Pingback: T-SQL Tuesday #52 – Arguments | Voice of the DBA

  51. Pingback: Reducing database size for your development environments | ArthurMacintosh.com

  52. Hello Brent,

    I am supposed to restore production databases onto development servers. Those dev servers have about 30% of the disk space of their counterparts in production. The only way I’ve found to restore the production databases is to purge everything older than three months and shrink the database files. That of course takes forever and a day. Any suggestions on how to avoid shrinkfile and still restore just three months’ worth of data? Keep in mind that there are over 10 instances and about 7 databases per instance, weighing anywhere from 50GB up to 1TB.

    Sincerely,
    Craig

    • I do a lot of production restores into non-production environments too, Craig – the options are rather limited. I end up purging out some unnecessary data after the restore, then shrink it down to reclaim some space. There’s really no way to reclaim the space unless you want to pipe out the data into a fresh database (minus what you would otherwise purge), but that could be quite cumbersome. I also use compression in my development environment – that gets me a TON of space back (I reclaimed 1.5TB last weekend alone compressing some AX databases).

    • Craig – buy more database space. Seriously, space isn’t that expensive these days. (And if you’re using enterprise storage, then it’s time to get into SAN snapshots, which do this whole process with limited to no additional space.)

      • I’ll put you in touch with my SAN Admin, Brent – and he’ll likely invite you to a subsequent board meeting 😉 He’s doling out space ALL the time, so when I CAN, I try not to bother him. I see absolutely no reason to not remove some data after the restore and shrink to preserve space. After all, I only restore our Dev/QA/Staging environments once every few months.

  53. Hello Allen, thanks for the suggestion. It just seems odd that MSSQL requires you to restore empty space. When restoring databases in MySQL and PostgreSQL, the indexes and heaps are reorganized and empty space can be reclaimed. Perhaps the shrink would be faster on the target server after restore?

  54. I restore from backup (on the target) and do the shrink on the target, so as not to impact production. I then RDP to the target and kick off the shrink – that way my laptop doesn’t come into play. I do enjoy going home after work (with my laptop) and not watching a shrink run overnight 😉

    My shrink on development resources takes maybe 10-12 hours and that’s a 500GB database roughly.

    It is a shame that the empty space isn’t somehow cleared out during the restore – it’s something I tried various ways to overcome, but had no such luck. Restore/shrink it is.

  55. “Craig – buy more database space. Seriously, space isn’t that expensive these days.”

    You’d think SAN space/disk was some sort of Unobtainium when discussing it with Storage/SAN Admins. ;o)

    • I agree, Scott – we just got a new EMC last year and I’ve already been given 4TB of disk for my development server (quite a bit more than the 500GB I had when hired three years ago). Unforeseen acquisitions and projects have gobbled that right up (we went from three 500GB non-prod AX environments to about 12 now), so we have to do what we can as DBAs to make the most of our resources. I use compression and shrinking as my primary method to make the most of what I have at this time – in non-production, of course. Never in production.

  56. Pingback: Database Maintenance Plans: A Primer - SQL Server - SQL Server - Toad World

  57. I would agree that in normal everyday operations, you don’t need to shrink. But not all operations are everyday or normal. A lot of houses move their data files around without checking for things like consistency. Clone machines and move databases over. The log file connected with a database may no longer be relevant to the database’s current state. The inactive area might be related to bulk transactions that are no longer relevant to that database.

    And consider: If there was NEVER a reason to do it, why is it available?

    Are we supposed to engage in that other “Don’t Ever Do It” behavior, switching recovery modes?

    • Mike – you can also buy unfiltered cigarettes. They must be a good idea, right? If there was never a reason to do them, why would they be available?

      Keep shrinkin’ them databases. I hear it’s great for your T-Zone.

    • Mike – I think I understand what you are trying to say.

      To add to Brent’s great analogy, just because something is available – doesn’t mean it’s a good idea for our production databases (auto-shrink, auto-update stats off, DBCC DROPCLEANBUFFERS, etc.). These are tools we have for very particular situations. Shrinking a log file might be appropriate for a development environment, where the 80GB trans log is overkill (especially when you have dozens of such non-production environments). I also think it’s appropriate for “abnormal” situations, like a one-time data load/archival, for example (you alluded to that, so I concur). Consistency checks and/or index maintenance are by no means “abnormal”. But under no circumstance would I condone a routine shrink of my production trans logs.

      • Yes, that was the point I was trying to get across. If you’re looking for disk space, dumb move. If you’re moving the database to a new instance, anyway, or have requirements that fit the profile, only an absolutist would sneer at the options. Or somebody who’s never actually worked with the nuts and bolts of recreating environments on an everyday basis. Oh, and by the way, I too advocate for people just buying the disk space. But if you’ve ever worked in government, you’d know it’s not cheap everywhere. We get charged by the Gigabyte, both for the virtual machine and disk space allotted.

  58. Brent,

    I’ve been creating a process to go through all our databases and analyze the log structure, backup, shrink, and resize appropriately taking operational requirements and VLF sizing/amount into account. My code is working beautifully, but I encountered something in testing today I wasn’t expecting.

    I have a good-sized DB with a 104GB log and around 900 VLFs. I backed up the log and shrank it down to next to nothing during a period of low activity. I then began to grow it back out to its original size in 8GB iterations, keeping VLFs around 512MB. During regrowth, I was curious about DB availability and started running random SELECTs against different DBs. Results returned without issue. I then created a table and looped a bunch of inserts.

    I was surprised when the log continued to grow and the table creation and inserts completed without issue or delay. I was under the impression the DB would somehow be locked down, blocking DDL and DML, until regrowth completed. Of course, considering the circular nature of the log, I can see how this would be a false assumption. But I can’t find anything on the web one way or the other.

    Could you possibly shed some light or point me in the right direction on what is going on? Thanks!

    • Steve – this is beyond the scope of something I can answer fast in a comment. Sounds like a great question to ask on http://dba.stackexchange.com. Thanks!

    • As Brent alluded to, that’s not a small topic, but if you haven’t read it yet, Kimberly Tripp’s post is one of the key posts I’ve seen on the subject.

      http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/

      I personally have not had contention issues re-sizing my trans log when I’ve implemented the aforementioned advice.

    • @Allen, I’ve read Kimberly’s blog several times, which is what sent me on my quest to make all our logs healthy and happy. I’m just trying to find out what happens during the growth process with regards to DB availability to end users. I have a question on Stack Exchange as Brent suggested, but I’m not getting answers there either, only more questions about how I can prove what I observed. While the questions may be legitimate, they don’t really help me answer my question – is a SQL Server database able to be written to during log growth?

      • Steve – you can set up an experiment to prove that really easily. Start a database with a 1MB data file and 1MB log file. Start one window growing the log by, say, 50GB. In another window, try to do inserts/updates/deletes in that same database.
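
        (A rough outline of that experiment – all names hypothetical; log files can’t use instant file initialization, so the growth in session 1 takes a while, giving session 2 time to show whether its writes block:)

        -- Session 1: grow the log by a large amount in one shot
        ALTER DATABASE GrowthTest MODIFY FILE (NAME = GrowthTest_log, SIZE = 50GB);

        -- Session 2: meanwhile, hammer the same database with timestamped writes
        CREATE TABLE dbo.ProbeTable (id INT IDENTITY PRIMARY KEY, stamp DATETIME2 NOT NULL);
        WHILE 1 = 1
        BEGIN
            INSERT INTO dbo.ProbeTable (stamp) VALUES (SYSDATETIME());
            WAITFOR DELAY '00:00:01';
        END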

          • Brent, that is exactly what I did, monitoring the log and timestamping everything. I grew the log out in roughly 8GB chunks and, in between each growth iteration, wrote to the same test table I was doing inserts on. That way I could see the timestamps and any delays. Each 8GB chunk took roughly 20 seconds to zero out.

          What I saw was log growth did block writes…for like 46 ms! You can see the lock occur and then the writes started right back up again before the new portion of the log was initialized. That part was very interesting to see – the writes occurring in the active portion of the log while the new inactive portion was being zeroed out! Definitely not what I expected.

  59. Pingback: clear SQL Server transaction log | Coding and Programing

  60. Pingback: How to: How do you clear the SQL Server transaction log? | SevenNet

  61. Pingback: Fixed How do you clear the SQL Server transaction log? #dev #it #asnwer | Good Answer

  62. Brent,

    I have an issue and have read all of the competing views on the subject and am hoping you can provide your take on the problem.

    Earlier this year I inherited a large database (500+ GB) of which 350GB are base64-encoded binary files (PDFs, images, Word docs, etc…) stored in, of all things, nvarchar(max) fields.

    The rearchitecture of the entire platform has me moving the software to Azure and I will be leveraging encrypted table and blob storage for that 350GB worth of files.

    Part of my migration strategy is that after the files are moved off to Azure and all of the software is leveraging the web service to store/retrieve them, I was going to wipe out those columns in the 3 tables and then shrink the database, reclaiming a significant amount of disk space that will make the rest of the project much smoother (database migrations, portability, backups, etc…)

    However, I have read all of the pros and cons and am not sure of the best way to tackle this. I cannot make a clean port of this (too much downtime) if I do not get some or most of the space back, but downtime in the production environment must be kept to a minimum.

    Is there anything short of using shrink to retrieve the space and give me the flexibility I need?

    Thanks,

    JD

    • Hi JD,

      I’m not Brent, but I was in the comments and this caught my eye.

      I actually don’t have any problem with one-time shrinks of a database if you have a lot of space to reclaim, but your situation is a little special. Dropping a column is a metadata only activity, but reclaiming the space is a bit more complex. You can get an idea of what’s involved by reading the Books Online Page for DBCC CLEANTABLE: http://msdn.microsoft.com/en-us/library/ms174418.aspx

      In your situation it might be faster and simpler to just create new tables, do a minimally logged insert into those new tables of just the columns you want, and then do a final backup and drop the other tables. (Or insert into a new database.) I just mean this for speed/efficiency reasons. The whole process of DBCC CLEANTABLE, then shrinking the files, then rebuilding indexes can be painful and slow.
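
      Very roughly, and with made-up table and column names, that could look like:

      -- Copy only the columns you're keeping into a new table.
      -- Under the SIMPLE or BULK_LOGGED recovery model, SELECT ... INTO is minimally logged.
      SELECT DocumentId, Title, CreatedDate   -- everything except the base64 nvarchar(max) column
      INTO dbo.Documents_Slim
      FROM dbo.Documents

      -- Recreate keys and indexes on the new table, then swap the names.
      EXEC sp_rename 'dbo.Documents', 'Documents_Old'
      EXEC sp_rename 'dbo.Documents_Slim', 'Documents'

      -- After a final backup, drop the old table:
      -- DROP TABLE dbo.Documents_Old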

      Test it all out and just see which process works better for you.

      Kendra

      • Kendra,

        Thank you for the thorough reply. Let me start by saying that while I have spent many years designing large-scale software systems, the inner workings of true database administration are still not in my expert toolkit :).

        I have just spent some time reading up on DBCC CLEANTABLE, and the part that is still not clear to me is whether it will simply free up space that can be reused OR actually shrink the size on disk. I suspect not, but I wanted to confirm. One of my biggest concerns is the current size of the database – not from a cost perspective, but in the difficulty of working with it, from the backups (using RedGate today) to the multiple development environments. If I can get the database down to the size it should be, even padding it significantly, the rest of the project will be much more manageable.

        Thanks,

        JD

  63. What do you think about suggestion #8 in the following URL? It says to reduce internal log fragmentation: (1) do a manual checkpoint if using the SIMPLE recovery model, (2) do a DBCC SHRINKFILE with TRUNCATEONLY, and (3) use ALTER DATABASE to give the log file a specific size.

    http://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/

    • Hi Brenda, that post is about doing a one-time operation to resize the transaction log – they aren’t advocating regular shrinks of your files.

      • Sorry, I was not clear. It says that the three steps will reduce internal log fragmentation. Do you think it would reduce fragmentation?

        I am one of those shops that does a DBCC SHRINKFILE(file, 1) for the data and log every night. Looking to stop that – obviously. But wondering what the next step is to then defrag the data, log, and indexes for more efficient performance moving forward.

  64. OK, so this article and nearly every other article I can find says the same thing over and over: “don’t shrink, don’t shrink, don’t shrink”. The obvious next question is, why not? Everyone seems to agree that shrinking a database results in a heavily fragmented index, which is bad. So my, currently unanswered, question is this: “Is it still abhorrent to shrink a database if you follow the shrink with a defrag of the index?” It is fairly trivial to write a SQL job to defrag an index, as far as I’ve seen anyway, so instead of telling everyone that shrinking a database is the absolute worst possible action you could ever take in SQL Server, wouldn’t it be more informative to say that yes, shrinking is bad for indexes, but if you defrag the index after the shrink then you avoid the world-on-fire scenario that results from shrinking without defragmenting?

    • BH – let’s take a step back and ask, “What are you doing that requires the shrink/defrag activities?”

    • To add to Brent’s question – if you are shrinking after doing maintenance every time, then why shrink to begin with, only to see it grow back to where it was? Just right-size it to support your operations (including maintenance – keeping VLFs in mind if you so desire) and leave it be.

  65. One operation I had to do that requires a shrink was removing an ndf file from a database. When our dev environment was created, every user database had 1 mdf and 7 ndfs. A decision was made to remove one ndf file so that each of the 7 remaining database files could be spread across 7 disk volumes. Removing the “now extra” database file requires a shrink to empty the file.

    http://dba.stackexchange.com/questions/91719/dbcc-shrinkfile-emptyfile-not-emptying
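
    For anyone who lands here later, the rough pattern (with placeholder database and file names) is:

    USE [YourDB]
    -- Move this file's pages into the remaining files in the same filegroup...
    DBCC SHRINKFILE (N'YourDB_Extra', EMPTYFILE)
    -- ...then the now-empty file can be removed.
    ALTER DATABASE [YourDB] REMOVE FILE [YourDB_Extra]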

  66. I think the misconception is that the shrink operation is sooo bad IT SHOULD NEVER BE DONE. Otherwise, to quote Dr. Venkman, “Human sacrifice, dogs and cats living together… mass hysteria!” OK, not really – just understand what you are doing and why. There will be exceptions. But shrinking is the result of something else not being set up correctly – don’t just treat the symptoms by coding around them.

  67. I need to perform a massive delete of rows on a table, and I need to reclaim that free space. What’s the best way to do it? Shrink the database or shrink the file(s)? I have one .mdf and three .ndf files.

    • If you must shrink, shrink file. It gives you more granular control.

    • Be careful about how many rows you delete in one shot. I had a developer “DELETE * FROM ” not long ago (versus TRUNCATE – doh!). That resulted in a 150GB transaction log, and since the process was blocking/would not complete, I had to kill his SPID. Then what? Rollback time… and the rollback would not budge – it was stuck at 0%. So I had to restart the entire instance, which got the rollback rolling – no pun intended. Four hours later the database finally recovered.

      So, when I hear “delete massive rows in table” – it makes me nervous. Be sure to do them in batches.

      • What’s the best way to do it? Would you mind writing a script here?

        • Dan, you want to identify something unique in the record and then delete a reasonable range – 1 to 10,000 rows or something similar. You can create a loop or use a – gasp! – cursor to iterate through each batch. The trick is to commit each batch so the log can be reused.

          Of course, if you’re going to be deleting the majority or all of the records, consider using TRUNCATE. That will reseed your IDENTITY to its starting point (if you have one), so be careful of that.

          As for a script… you should be able to figure it out based on the info here, but here’s a rough sketch to get you started.
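
          For illustration only – the table name, column, and cutoff value are made up:

          DECLARE @rows INT = 1
          WHILE @rows > 0
          BEGIN
          -- Each DELETE is its own transaction, so the log can clear between batches.
          DELETE TOP (10000) FROM dbo.BigTable WHERE CreatedDate < '20140101'
          SET @rows = @@ROWCOUNT   -- loop ends when nothing matched
          -- In SIMPLE recovery a checkpoint lets log space be reused;
          -- in FULL recovery, keep the log backups running instead.
          CHECKPOINT
          END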

  68. So if you have a 50GB log file that’s using < 1% of the space, created because someone ran a huge transaction that has since committed, and the logs have been backed up – you are saying that you should not shrink that file? I respectfully disagree.

    I would say the same thing if a large one-time delete was done and a database file now has a huge amount of unused space and you know that database is never going to grow back to the original size again.

    In both cases that space might be needed by another file in the future, and it won’t be available.

    Am I wrong?

    • Chuck – I think anomalies lend themselves to special cases where you want to first identify the root cause, mitigate it as best you can, and right-size your log based on historical metrics. I’m pretty sure we all do this under such conditions, but I can only speak for myself. When right-sizing, keep in mind the growth rates and be cognizant of your VLF structure/count within the transaction log. Kimberly Tripp has some great articles on how best to shrink and subsequently grow your log out to a good size (Google it).

      As for one-time deletes, when you need to do so, you should do it in iterations so your transaction log doesn’t bloat – not to mention such operations in a single statement are not kind when it comes to concurrency.

    • Chuck, it’s impossible to say whether you are right or wrong – I don’t know your business, your infrastructure, or your resource availability. But as long as you can guarantee no one will run a similar transaction in the future, or that the space created in the data file due to a purge won’t be needed in the next 6 months, then go ahead.

      Again, the point of this article is to address the automated or regular shrinking of files. It creates a lot of unnecessary maintenance overhead that is not great for your databases. If you reallllllly need that 50GB back, by all means reclaim it. Just be aware of what you are doing, why you are doing it, and how you are preventing the requirement to shrink again.

  69. Okay, can you tell me if, in this situation, it’s fine, or if I’m making a mistake in my logic?

    Massive infrastructure + app upgrade, an every-object-touched-in-the-database style procedure. The database is ~1.5 TB, but we need to keep a read-only copy of the old database for support reasons. Having 2 x 1.5 TB DBs (that are basically the same) seems wasteful, and it won’t grow in any systematic way. What I did was blindly page-compress all tables and indexes (gained almost a TB, which blew me away for a straight sledgehammer solution), and now I just want to shrink every file to its minimum size. Performance doesn’t really matter (as long as you can query it). Is shrinking a good solution here?

  70. Given your description that it won’t grow, and you compressed to conserve space, it’s quite logical to shrink to realize those space savings. I do that quite often in such cases.
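
    As a rough sketch of that approach – the table and file names below are placeholders:

    -- Page-compress the table (heap or clustered index) and all of its nonclustered indexes.
    ALTER TABLE dbo.OrderHistory REBUILD WITH (DATA_COMPRESSION = PAGE)
    ALTER INDEX ALL ON dbo.OrderHistory REBUILD WITH (DATA_COMPRESSION = PAGE)

    -- Once everything is compressed and the copy is effectively read-only,
    -- a one-time shrink releases the freed space; fragmentation is tolerable here
    -- because query performance isn't a concern.
    DBCC SHRINKFILE (N'ArchiveDB_Data', 1)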

  71. Please tell me where the errors of my ways lie:
    We have a prod DB with a 95GB mdf and a 50GB ldf. We have a software release 3-4 times a year. We take full backups at the beginning and end of the release. Just after the initial full backup, I set the recovery model to simple. I don’t want the log to balloon out during the release (as it has done in the past). After the release I shrink the log (and only the log) down to 10GB. Thereafter, I take a full backup and we test the newly updated application and DB. The log will slowly grow up to 50GB in the weeks to come.
    My primary reason for shrinking the log is the space needed during restores. Restores are often made on other servers for a variety of reasons, and we are not awash with space. A 110GB DB can usually be restored; not always so with a 150GB DB.
    On a final note, we do have 200 VLF files and I have long wondered how to reduce their number. The log file growth size was set to 128MB initially and I upped it to 1024MB. I am having a look at L. Pesta’s SP sp_DBA_DefragLog now.

    • Sean – if your log slowly grows back up to 50GB, that means you need 50GB of log space for your day-to-day operations. (Or you’re not doing log backups, one of the two.) As your log file needs to grow, operations in the database stop while SQL Server and Windows clear out that log file growth. The symptoms of this are unpredictable slowdowns and freezes in the application.

      Generally, folks want to avoid unpredictable slowdowns and freezes – so why keep shrinking your log and forcing that to happen?

      Hope that helps!

    • Sean, 200 VLFs isn’t egregious and yes, setting your autogrowth appropriately will help keep that down. Maybe you can add a step in your release and instead of merely shrinking the log down to 10GB you could shrink it down to next to 0 and build it back out in 8GB increments until you are at the desired size. It would look something like the code below. I have a script that generates that code for me depending on the size and auto growth I feed it.

      USE YourDB
      DBCC SHRINKFILE (YourDB_log, 1) WITH NO_INFOMSGS

      SELECT DB_NAME(database_id) DatabaseName, name LogName, size*8/1024 SizeInMB
      FROM sys.master_files
      WHERE database_id = DB_ID('YourDB')
      AND type_desc = 'LOG'

      -->Repeat above if necessary

      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 8192)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 16384)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 24576)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 32768)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 40960)
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', SIZE = 49157) -- a few extra MB to avoid the 4GB issue
      ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', FILEGROWTH = 1024MB)

      • I see you’ve read Kimberly’s post Steve 😉

        http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/

        That is precisely what I do – VLF analysis is something we should check on a pretty regular basis. I’ve found that doing so also controls the growth, not to mention performance.
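
        If anyone wants a quick check, DBCC LOGINFO returns one row per virtual log file for the current database, so the row count is your VLF count (the database name is a placeholder):

        USE [YourDB]
        DBCC LOGINFO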

        • Yessir, and I reread it and reread it until I fully understood it. I now have a monthly job that looks across the enterprise and reports back anything waaaay out of whack. I use my scripter to create the required code to remedy the situation.

          We have some pretty robust hardware, so much of the performance impact is masked. However, we also use CDC – you know, that thing that reads the log! – for a near real-time data warehouse solution. We had a problem where the CDC log reader could not keep up in the OLTP DB and log reuse was suffering. The issue was 8k+ VLFs!!! The root of that problem was a poorly defined autogrowth setting early in the product’s life cycle.

          Once that was fixed the problem went away.

            • You gotta love the default growth rate of 1MB – pretty insane, and if overlooked it can really lead to some headaches. I’ve had databases in similar situations – tens of thousands of VLFs. If this has been posted prior, my apologies, but I don’t see it posted thus far:

            USE dbname
            GO

            DECLARE @db_name SYSNAME
            ,@file_name SYSNAME
            ,@file_size INT
            ,@file_growth INT
            ,@shrink_command NVARCHAR(MAX)
            ,@alter_command NVARCHAR(MAX)

            SELECT @db_name = DB_NAME()
            PRINT 'USE ' + @db_name
            PRINT 'GO'

            SELECT @file_name = name
            ,@file_size = ( size / 128 )
            FROM sys.database_files
            WHERE type_desc = 'LOG'

            SELECT @shrink_command = 'DBCC SHRINKFILE (N''' + @file_name + ''' , 0, TRUNCATEONLY)'
            PRINT @shrink_command
            --EXEC sp_executesql @shrink_command

            SELECT @shrink_command = 'DBCC SHRINKFILE (N''' + @file_name + ''' , 0)'
            PRINT @shrink_command
            --EXEC sp_executesql @shrink_command

            DECLARE @i INT = 1
            WHILE ( @i <= (@file_size / 8000) + 1 )
            BEGIN
            SELECT @alter_command = 'ALTER DATABASE [' + @db_name + '] MODIFY FILE (NAME = N''' + @file_name
            + ''', SIZE = ' + CAST(8000 * @i AS NVARCHAR) + 'MB)'
            PRINT @alter_command
            --EXEC sp_executesql @alter_command
            SET @i = @i + 1
            END

  72. Hi Brent,

    Fair enough, I suppose.
    The log will go through 40 grow operations until it reaches its stable size.

    Thanks for the swift reply.
    Sean.

  73. From Paul Randal’s article referenced above: “Although I used to own the shrink code while I was at Microsoft, I never had a chance to rewrite it”

    So there is bad code in a mission-critical product used by millions and the guy in charge “never had a chance” to rewrite it.

    That is how it goes in the real world. I just shrank a datafile and I am not sorry. I have a bunch of other dbs on that server and this one was sucking up all the drive space with 200GB of free space. (Altogether, the db takes 1.2 TB).

    Yeah, it should have its own server, and it should have this and I should do that and there should be a person employed here who has time to do such and such. But like Paul at Microsoft, the real world doesn’t work that way.

    I mean no disrespect – I am a fan – but let’s stop with the “You’re doing it wrong; you’re an idiot” articles. Especially when they don’t stand up to real-world conditions.

    • Jon – so, honest question: if you just shrank a datafile, what brought you to this web page to leave a comment? I’m honestly curious.

      • Always looking to learn. Today was a typical scenario. I had a particular issue to fix and when you are in the middle of working on something, that is always a good time to develop more expertise on the subject. In this case, shrinking data files.

        BTW, I did take your advice to heart and will be careful in the future to avoid shrinking data files unless absolutely necessary. My objection was to the dogmatic, holier-than-thou tone.

        • Jon – gotcha. If you read a few more of my posts, I’m confident you’ll find that I rarely take that tone – except when I’m dealing with extreme situations like you read about here in this post. Enjoy!
