When you’ve got a lot of empty space inside a database, you can use DBCC SHRINKDATABASE or DBCC SHRINKFILE to downsize the files and free up space on your drives. But should you? I’ll explain how those commands work, why they make performance worse, and give you alternatives.
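For reference, the two commands differ in scope: SHRINKDATABASE works on every file in the database, while SHRINKFILE targets a single file. A minimal sketch, with placeholder database and file names:

```sql
-- Shrink the whole database, trying to leave 10% free space (placeholder name)
DBCC SHRINKDATABASE (MyDatabase, 10);

-- Shrink one file down toward a 1,024 MB target (placeholder logical file name)
DBCC SHRINKFILE (MyDatabase_Data, 1024);

-- TRUNCATEONLY just releases unused space at the end of the file
-- without moving any pages around
DBCC SHRINKFILE (MyDatabase_Data, TRUNCATEONLY);
```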

5 Comments

  • matthew.olson
    November 1, 2020 5:58 pm

    I’m an accidental DBA. We have a couple of databases where they do huge data loads at the beginning of the month, and sometimes those loads create 300+ GB log files (on a 150 GB database). Normally the log files are around 30-50 GB each, and the tlog backups take care of it, but in our test environment the databases are in simple mode and the log files still blow up when they test the data load there first. Simple mode doesn’t stop this from happening, and since we only have daily backups, once in a while the combined four databases will fill the drive and stop the server from running. I don’t want to run tlog backups, and even then it takes manual intervention to shrink them since we don’t do it automatically. Is there a better way to handle this that doesn’t stop the server cold?

    • Matthew – no, unfortunately: whatever a transaction changes has to be written to the transaction log, in every recovery model. You could try having your developers load the data in smaller batches rather than as a single 300 GB load in one statement or transaction.

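The batching approach described above can be sketched roughly like this; the table names and the 50,000-row batch size are placeholders, not anything from the actual load process:

```sql
-- Hypothetical batched load: commit in chunks so the log space from each
-- batch can be reused instead of one giant transaction holding it all
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (Col1, Col2)
    SELECT TOP (50000) s.Col1, s.Col2
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS
        (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Col1 = s.Col1);

    SET @rows = @@ROWCOUNT;  -- loop ends when no rows are left to move
END;
```

In simple recovery model, each committed batch lets the log truncate at the next checkpoint, so the log file only needs to hold one batch’s worth of changes at a time.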
  • Unfortunately it’s all automated, with data coming out of other systems and other data coming out of a mainframe (depending upon the database in question). What makes it worse is that after the data is processed and the smaller result set stored, a lot of that data is then dumped. It’s a process that I’ve inherited and it’s not going to change soon.

    The only thing that I think I can do at this point is make the log drive bigger and/or run tlog backups more often. The production box has less of a problem because it takes tlog backups every hour. Doing that feels unnecessary, though, because the RPO for the test servers is 24 hours. I guess I was hoping for options that I haven’t learned about yet…

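One way to take the manual intervention out of the shrink step mentioned above is to run a shrink as the last step of the monthly load job. This is only a sketch: the logical file name is a placeholder, and the 50 GB target is based on the “normally 30-50 GB” sizes in the comment:

```sql
-- Sketch: after the monthly load finishes, shrink the log back to its
-- normal working size rather than leaving 300+ GB allocated on disk
USE TestDatabase;

DBCC SHRINKFILE (TestDatabase_log, 51200);  -- target size in MB (~50 GB)
```

Shrinking the log (unlike shrinking data files) doesn’t move pages or cause fragmentation, but the file will just grow again on the next big load, so this only helps if the drive is shared with other databases between loads.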
  • Hey! In the section of this video about VLF counts, you mentioned that you shrink the log file to its smallest size and then grow it out to 25%. Could you elaborate on how you would implement the growths? Do you perform one growth operation, or many growth operations of a particular “chunk”? I’ve got some t-logs that are about 250 GB – 400 GB (I know this class is for 100 GB data; I’m sorry!) with very high VLF counts (1,000+). Since there is no instant file initialization for log files, I’d like to avoid the performance hit of autogrowths.
    In a maintenance window, I’d like to shrink the t-logs and then grow them back out to their current size, albeit with fewer VLFs. Autogrowth is set to 256 MB. If I allow autogrow to grow the files back, then they’ll just have a high VLF count again (though likely less than 1,000).

    I’ve seen some old advice about growing the file in chunks back out to the original size. Is that worth it? Can I get away with one resizing command? Will that leave me with too few VLFs?

    Thanks!

    • Alex – yeah, this class is about 100GB data. I don’t have a training class online right now for terabyte-level databases – for that, your best bet would be either to post a question on dba.stackexchange.com (free) or hire me for consulting (not free, but also good, hahaha.) Sorry about that! Brent

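The shrink-then-regrow-in-chunks idea from the question can be sketched like this. The names and sizes are placeholders; the 8 GB chunk size comes from the commonly cited pre-2014 VLF algorithm, where any growth over 1 GB creates 16 VLFs (so each 8 GB growth yields VLFs of roughly 512 MB). On SQL Server 2014 and later the engine uses a different algorithm that already avoids excessive VLFs for larger growths, so chunking matters less there.

```sql
-- Sketch: shrink the log small, then grow it back in fixed chunks
-- during a maintenance window (names and sizes are placeholders)
USE MyDatabase;

DBCC SHRINKFILE (MyDatabase_log, 1024);   -- shrink to ~1 GB first

ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 8192MB);   -- +~8 GB chunk

ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 16384MB);  -- next chunk

-- ...repeat, raising SIZE by ~8 GB each time, until the log is back
-- at its normal working size
```

Each ALTER DATABASE grow is synchronous and zero-initialized (no instant file initialization for logs), so doing it in a maintenance window, as the question suggests, avoids paying that cost during production activity.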
