Maintenance: Shrinking Files
When you’ve got a lot of empty space inside a database, you can use DBCC SHRINKDATABASE or DBCC SHRINKFILE to downsize the files and free up space on your drives. But should you? I’ll explain how those commands work, why they can make performance worse, and give you alternatives.
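For reference, here’s roughly what a shrink looks like. This is a hedged sketch, not a recommendation: the database name MyDB, the logical file name MyDB_Data, and the 10GB target are all placeholders, and the post’s whole point is that you usually shouldn’t do this.

```sql
USE MyDB;
GO
-- Find the logical file names and current sizes first:
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Shrink the data file down to a 10,240 MB (10 GB) target.
-- DBCC SHRINKFILE takes the logical file name and a target size in MB:
DBCC SHRINKFILE (N'MyDB_Data', 10240);
```

The shrink moves pages from the end of the file toward the front, which is single-threaded, log-intensive, and leaves your indexes heavily fragmented.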
7 Comments
I’m an accidental DBA. We have a couple of databases that get huge data loads at the beginning of the month, and sometimes those loads create 300+GB log files (on a 150GB database). Normally the logs are around 30-50GB each. In production, tlog backups take care of it, but in our test environment the databases are in simple recovery model, and the log files still blow up when they test the data load there first. Simple mode doesn’t stop this from happening, and since we only take daily backups, once in a while the four databases combined will fill the drive and stop the server from running. I don’t want to run tlog backups there, and even then it takes manual intervention to shrink the logs since we don’t do it automatically. Is there a better way to handle this that doesn’t stop the server cold?
Matthew – no, unfortunately: if you run a transaction, that transaction’s work has to go into the transaction log, regardless of the recovery model. You could try having your developers load data in smaller batches rather than in a single 300GB load in one statement or transaction.
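One common way to do that batching is a loop that moves a fixed number of rows per transaction. This is a hypothetical sketch: the table and column names (dbo.StagingTable, dbo.TargetTable, Id, Col1, Col2) and the 50,000-row batch size are placeholders, and it assumes the target table has no triggers or foreign keys that would block the OUTPUT clause.

```sql
DECLARE @Rows int = 1;

WHILE @Rows > 0
BEGIN
    -- Each iteration is its own small transaction, so in simple recovery
    -- the log space it used can be reused after the next checkpoint,
    -- instead of one giant load holding the whole log open.
    DELETE TOP (50000) FROM dbo.StagingTable
    OUTPUT deleted.Id, deleted.Col1, deleted.Col2
    INTO dbo.TargetTable (Id, Col1, Col2);

    SET @Rows = @@ROWCOUNT;
END;
```

The key design point is that log space is only reusable once a transaction commits, so many small transactions cap the log at roughly one batch’s worth of activity rather than the entire load.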
Unfortunately it’s all automated, with data coming out of other systems and other data coming out of a mainframe (depending upon the database in question). What makes it worse is that after the data is processed and the smaller result set stored, a lot of that data is then dumped. It’s a process that I’ve inherited and it’s not going to change soon.
The only thing I think I can do at this point is make the log drive bigger and/or run tlog backups more often. The production box has less of a problem because it takes tlog backups every hour. That feels unnecessary here, though, because the RPO for the test servers is 24 hours. I guess I was hoping for options that I haven’t learned about yet…
Hey! In the section of this video about VLF counts, you mentioned that you shrink the log file to its smallest size and then grow it out again. Could you elaborate on how you would implement the growths? Do you perform one growth operation, or many growth operations of a particular “chunk”? I’ve got some t-logs that are about 250GB-400GB (I know this class is for 100GB data; I’m sorry!) with very high VLF counts (1,000+). Since there is no instant file initialization for log files, I’d like to avoid the performance hit of autogrowths.
In a maintenance window, I’d like to shrink the t-logs, and then grow them back out to their current size, albeit with fewer VLFs. Autogrowth is set to 256MB, so if I let autogrow do it, they’ll just end up with a high VLF count again (though likely fewer than 1,000).
I’ve seen some old advice about growing the file in chunks back out to the original size. Is that worth it? Can I get away with one resizing command? Will that leave me with too few VLFs?
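The shrink-then-regrow approach Alex describes can be sketched like this. The names MyDB and MyDB_log, the 8GB growth chunk, and the target size are all placeholder assumptions; the right chunk size depends on your SQL Server version, since the VLF-creation algorithm changed in SQL Server 2014.

```sql
USE MyDB;
GO
-- In a maintenance window, shrink the log to a small size first
-- (here, ~1 GB; the active portion of the log limits how far it goes):
DBCC SHRINKFILE (N'MyDB_log', 1024);

-- Then grow it back manually in fixed chunks rather than letting
-- 256 MB autogrowths do it, so each growth creates few, large VLFs:
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 8192MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 16384MB);
-- ...repeat, adding 8192MB per step, until you reach the target size.

-- Check the resulting VLF layout:
DBCC LOGINFO;
```

Each ALTER DATABASE step specifies the new total size, not an increment, which is why the SIZE value climbs by one chunk per statement.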
Alex – yeah, this class is about 100GB data. I don’t have a training class online right now for terabyte-level databases – for that, your best bet would either be to post a question on dba.stackexchange.com (free), or hire me for consulting (not free, but also good, hahaha.) Sorry about that! Brent
Thank you for your video.
Our existing SQL Server database is running out of space due to a huge log file, ~2.4TB, caused by the “full” recovery model.
We’ve set up a new Windows 2019 server with a new SQL Server 2016 instance on it.
We then took a full backup of the existing database and restored it on this new server, but after restoring we realized that the log file on the new server still has its original size of ~2.4TB.
When shrinking the log file on the new server, if we choose “Release unused space,” nothing happens and the log size won’t change.
1. Can we go with “Reorganize pages before releasing unused space”? If yes, what value should we put for “Shrink file to”? What would be the impact on our application, which is using this database?
2. What should we do for future if we don’t want to run into this situation again?
(On the new server)
Currently allocated space : 2.34TB
Available free space: 2.33 TB (99%)
Appreciate your comment
Have a good day.
You have to do transaction log backups when the database is in full recovery model.
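In other words: in full recovery model, the log can’t be truncated (and therefore can’t shrink meaningfully) until a log backup runs. A hedged sketch of that sequence, with MyDB, the backup path, and the 10GB target all placeholders:

```sql
-- Back up the log so SQL Server can mark its space as reusable:
BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_log.trn';

USE MyDB;
GO
-- Then shrink the physical file. The active portion of the log may sit
-- near the end of the file; if the shrink does little, take another
-- log backup and try again:
DBCC SHRINKFILE (N'MyDB_log', 10240);

-- If the log still won't shrink, check what's holding it open:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';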