What If You Really DO Need to Shrink a Database?

You’ve heard that shrinking a database is bad because it introduces both external and internal fragmentation, it causes blocking, it causes transaction log growth while it runs, and it’s slow and single-threaded. You understand that if it’s just a matter of 10-20-30% of a database, and the database is only 100-200GB, you might as well just leave the space there, because you’re gonna end up using it anyway.

Eyes up here, kid: no, you don't need to shrink your 50GB database.

But your situation is different:

  • Your database is 1TB or larger
  • You’ve deleted 50% of the data
  • You have 500GB+ empty space
  • You’re never going to need that space because you’re now doing regular deletions and archiving

You’ve been tasked with reducing the database size to make restores to other environments easier, like when you need to restore to a QA or development environment. You don’t want those servers to need so much useless space.

Don’t shrink. Do this instead:

  1. Add a new filegroup, add new empty files in it, and set it to be the new default filegroup
  2. Move objects to the new filegroup using CREATE INDEX…WITH DROP_EXISTING = ON, ONLINE = ON commands (see the sketch after this list)
  3. When you’ve moved all the objects over, shrink the old filegroup, and it’ll shrink super-quick
  4. Just leave it there – this is your new database layout
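
Here's a minimal sketch of steps 1 through 3, assuming a database named YourDB, a table dbo.YourTable with a clustered index CX_YourTable on an Id column, and a new filegroup called FG_NEW. Every name, path, and size below is a placeholder:

    -- Step 1: new filegroup, new empty file, and make it the default
    ALTER DATABASE YourDB ADD FILEGROUP FG_NEW;
    ALTER DATABASE YourDB ADD FILE
        (NAME = YourDB_New1, FILENAME = 'E:\MSSQL\Data\YourDB_New1.ndf',
         SIZE = 500GB, FILEGROWTH = 10GB)
        TO FILEGROUP FG_NEW;
    ALTER DATABASE YourDB MODIFY FILEGROUP FG_NEW DEFAULT;

    -- Step 2: recreating the clustered index on FG_NEW moves the table's data with it.
    -- ONLINE = ON requires Enterprise Edition; if the index backs a primary key,
    -- keep the definition identical so the constraint is preserved.
    CREATE UNIQUE CLUSTERED INDEX CX_YourTable
        ON dbo.YourTable (Id)
        WITH (DROP_EXISTING = ON, ONLINE = ON)
        ON FG_NEW;

    -- Step 3: once the objects are gone, the old file shrinks quickly
    DBCC SHRINKFILE (YourDB_OldData, 1024);  -- target size in MB

Nonclustered indexes don't follow the clustered index automatically, so recreate each of them the same way.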

This approach has several key advantages over shrinking:

  • It can use parallelism: you can specify how many cores to use with MAXDOP hints. This is really helpful on servers with MAXDOP = 1 at the server level because your query-level hint will override it, even overriding it upwards.
  • It can move a lot more than one 8KB page at a time. Heck, you can even do several index recreations simultaneously on different threads if you want to, really churning through the work quickly.
  • You pick the days/times for each object: maybe some of your objects have very heavy concurrency during some time windows, and you want to avoid touching those until a maintenance window.
  • You pick the settings for each object: rebuilding an index with ONLINE = ON is slower. Some of your objects might be archive tables or unused during certain days/times, so you can use ONLINE = OFF on those to get faster performance (see the sketch after this list).
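
For example, the per-object knobs might look like this (the table, index, and filegroup names are placeholders):

    -- Busy table, touched during business hours: stay online, cap the cores
    CREATE UNIQUE CLUSTERED INDEX CX_Orders
        ON dbo.Orders (OrderId)
        WITH (DROP_EXISTING = ON, ONLINE = ON, MAXDOP = 4)
        ON FG_NEW;

    -- Archive table nobody queries right now: offline and faster, more cores
    CREATE UNIQUE CLUSTERED INDEX CX_OrdersArchive
        ON dbo.OrdersArchive (OrderId)
        WITH (DROP_EXISTING = ON, ONLINE = OFF, MAXDOP = 8)
        ON FG_NEW;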

But it does have a few drawbacks, too:

  • It’s faster, but it’s also higher load: shrinking a database is a low-overhead process, and it’s hard for anybody to notice you moving just one 8KB page at a time with just one CPU core. With index recreations, buckle up: people are gonna notice when you throw a ton of CPU cores at the problem and really start hammering your storage. That’s just the flip side of finishing faster: if you wanna finish faster, you’re gonna do more work in less time.
  • This also means it generates more logs, faster: because we’re moving so much data and it’s a fully logged process, this can present problems for transaction log backups, log file sizes, database mirroring, Always On Availability Groups, and storage/VM replication. Ease into this gradually, starting with your smaller tables first, so you can see the impact it’s having on your transaction log sizes.
  • ONLINE = ON isn’t fully online: even online index recreations need a momentary schema modification lock in order to finish their work. Thankfully, since SQL Server 2014, we’ve had the WAIT_AT_LOW_PRIORITY option to help mitigate that problem (see the sketch after this list).
  • You have to do some prep work: it’s easy to run DBCC SHRINKDATABASE, but the index recreation approach will take much more work if you want to leverage all the cool advantages I discussed above.
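
Here's a sketch of the WAIT_AT_LOW_PRIORITY syntax. It's shown on ALTER INDEX REBUILD, where the option has been available since SQL Server 2014; note that ALTER INDEX can't move an index between filegroups, and whether CREATE INDEX accepts the option depends on your version, so check the docs for your build:

    ALTER INDEX CX_Orders ON dbo.Orders REBUILD
        WITH (ONLINE = ON (
            WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)
        ));

And while you ease into the work table by table, you can keep an eye on transaction log usage in the current database:

    SELECT total_log_size_in_bytes / 1048576 AS log_size_mb,
           used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;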

Thankfully, there’s help on that last one: Bob Pusateri has an in-depth writeup and a script to help.

39 Comments

  • Leonardo Carneiro
    July 15, 2020 2:16 am

    I understand the drawbacks of shrinking the database, but is it a really bad crime to shrink the log after a big maintenance?

    • Assuming that you’re ever going to do maintenance again, yes. Growing the log back out is a blocking operation while the file is erased, and Instant File Initialization doesn’t apply to the log file.

      • Besides the size of the t-log, there is also the number of VLFs; if you have too many because of log growth, that may have an impact if there are components that read the log, such as the log reader.

    • I’d say it depends. If your log file usage is usually only 10 GB but it grew to 200 GB because you did a ton of maintenance, you can shrink it to the usual 10 GB again (if this maintenance will not occur regularly).

      But remember to run CHECKPOINT twice (!) and a LOG backup before you shrink the file (and ensure that there are no running tasks), otherwise it is possible that nothing happens (since you may currently be using the last part of the file).
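
      For example, that sequence might look like this (YourDB and the log file's logical name are placeholders):

          USE YourDB;
          CHECKPOINT;
          CHECKPOINT;  -- twice, so the active log wraps back to the start of the file
          BACKUP LOG YourDB TO DISK = 'E:\Backups\YourDB_log.trn';
          DBCC SHRINKFILE (YourDB_log, 10240);  -- back to the usual 10 GB (size in MB)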

  • For the new empty file(s), is it recommended to make their initial size 500GB, with a 1GB filegrowth, to avoid the database autogrow events?
    Or is it better to make it e.g. 10GB and let it autogrow around 490 times?

    • Filip – whatever the new target size needs to be (based on the data you’re moving into it), go ahead and create it at that size.

    • Your goal is to have autogrows never happen, and if they do happen, to minimize their frequency. You want to size the database to whatever will allow it to write for a long time without having to autogrow, and your autogrow should be sized so that if it has to happen, it happens as infrequently as possible while not being so large it crashes your applications when it does.

      The rules for log autogrow size are somewhat less nebulous in 2014+: always 64 MB at minimum, and something that divides cleanly by 16, while the result of dividing it by 16 should also divide evenly by… it’s either 8 kilobytes or 64 kilobytes. I typically do 64 * 2^n and it is almost always 128.
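
      For example, setting a 128 MB growth increment on the log file (both names are placeholders):

          ALTER DATABASE YourDB
              MODIFY FILE (NAME = YourDB_log, FILEGROWTH = 128MB);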

        Thanks for the reply, and yes, that _is_ indeed the goal. You know as well as I do that in real life, we start with a little database; a few years later, it’s 10GB with probably 512MB increments, so we upped the increments to 2GB. Now, 15 years later, our db is around 1 TB (10GB increments).

        Even if we had known that the db would be 1TB one day, we couldn’t have created an empty 1TB file back then, because there were no affordable 7200 RPM HDDs in those days. 🙂

        Whatever we do, we’ll always keep having autogrows. But maybe we could clean them up with a new empty filegroup one day.

  • Brian Boodman
    July 15, 2020 7:10 am

    “You’ve been tasked with reducing the database size to make restores to other environments easier, like when you need to restore to a QA or development environment. ”

    Since everyone is using generated data in their QA and development environments, this isn’t a concern at all. Nobody uses production data for that stuff (*).

    (*) Lies

    • Exactly: in the real world you may have many hundreds of tables, all with interlocking data, and trying to create data for ALL those tables would be a major job, possibly impossible. Also, you still need to be able to validate data in test and live environments. We have many situations where data entered on live is NOT correct and caused issues with other parts of the system. We generally have a de-personalised live database. It is the ONLY way of doing a lot of problem resolution and testing.

  • Biggest caveat I found with this was “textimage”, also known as blob data. It doesn’t move when you run this. The only way to move that is creating a new table, unless something has changed with this since 2014.

    • Yep, go ahead and read the post that I linked to – it explains how that limitation applies only to image, ntext, and text columns, which are deprecated anyway.

      • I have had the pleasure of doing just that after the size of a db has been reduced from 19 TB to 8 TB.

      • There is a workaround I have come across by Kimberly that allows you to move LOB data by creating a partitioned table.
        I have used it with dynamic code to automate the process.

        • Thank you Brent and Yaniv. I’ll have to check out Kimberly’s article too though I’m hoping to never shrink again.

  • Hello Brent, correct me if I am wrong: won’t moving all objects from one filegroup to another double the size of the database before shrinking the file?

    • Hany – nah, check back to the bullet points at the beginning of the post: we’re talking about a 1TB database where half (or more) of the data has been deleted. You’ll only need enough space for the remaining objects, so in that case we’re only talking about 50% more space, and even that is only temporary.

  • Why not remove the old files + filegroup?

    • Because the old filegroup is usually PRIMARY, and you can’t remove that one.

      • Thomas Franz
        July 30, 2020 7:13 am

        Furthermore, you should have an empty (besides system objects) PRIMARY filegroup in every database. If you are using multiple filegroups, you can do a partial restore with only one or x filegroups, but the PRIMARY filegroup will always be restored.

        So when you did an oops-DELETE / -UPDATE, you can restore only a single 100 GB filegroup (plus the empty PRIMARY) instead of the whole 1 TB database, which will save you a lot of time…
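
        A minimal sketch of that piecemeal restore pattern (full recovery model assumed; all names are placeholders):

            -- Restore PRIMARY plus just the filegroup you need, then roll the log forward
            RESTORE DATABASE YourDB
                FILEGROUP = 'PRIMARY', FILEGROUP = 'FG_2020'
                FROM DISK = 'E:\Backups\YourDB_full.bak'
                WITH PARTIAL, NORECOVERY;
            RESTORE LOG YourDB
                FROM DISK = 'E:\Backups\YourDB_log.trn'
                WITH RECOVERY;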

  • Martin Guth
    July 22, 2020 1:17 am

    Hi Brent,

    great post and interesting idea.

    I have been moving data between filegroups in the past (during a server migration) and originally had several issues regarding LOB data (for example XML) which could not be easily moved using a rebuild command. It turned out that one can apply a trick: temporarily partition the table, and thus the LOB data is moved, and you do not have to jump through all the hoops of creating completely new tables instead. There is even a nifty stored procedure around to automate that stuff for you: http://sql10.blogspot.com/2013/07/easily-move-sql-tables-between.html

    Best regards

    Martin
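
    A minimal sketch of the single-boundary partitioning trick Martin describes, with placeholder names:

        -- One boundary, two partitions, all mapped to the new filegroup
        CREATE PARTITION FUNCTION pf_MoveAll (bigint)
            AS RANGE RIGHT FOR VALUES (0);
        CREATE PARTITION SCHEME ps_MoveAll
            AS PARTITION pf_MoveAll ALL TO (FG_NEW);

        -- Rebuilding the clustered index onto the scheme drags the LOB data along;
        -- rebuild it ON FG_NEW afterward if you want the table un-partitioned
        CREATE UNIQUE CLUSTERED INDEX CX_YourTable
            ON dbo.YourTable (Id)
            WITH (DROP_EXISTING = ON)
            ON ps_MoveAll (Id);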

  • Thomas Mueller
    July 22, 2020 8:17 am

    What process would you recommend for hosted Azure SQL Database where we do not have access to some of these operations?

  • Jeronymo Luiz
    July 29, 2020 11:19 am

    Hello Brent! In the company where I work, I have a server with space problems, so I have to check disk space daily. The databases are in the simple recovery model, and since only loads are made, with little or no consumption of these databases, I shrink with a certain frequency, because after a load the size is increased, but I need to control disk space. I don’t want to say that this is the right way, but it’s what I can do at this point. This article of yours takes me to another level of thinking; I will try to make this adjustment here on my server. As always, your clear posts take me to a new Kaizen: today better than yesterday, tomorrow better than today. If you can, say hi to Brazil. I wish you the best of luck in the world.

  • IJeb Reitsma
    March 12, 2021 2:26 pm

    According to the ALTER INDEX documentation this procedure is not possible.

    “ALTER INDEX cannot be used to repartition an index or move it to a different filegroup. … Use CREATE INDEX with the DROP_EXISTING clause to perform these operations.”

    https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql?view=sql-server-ver15
    https://docs.microsoft.com/en-us/sql/relational-databases/indexes/move-an-existing-index-to-a-different-filegroup?view=sql-server-ver15#Restrictions

  • IJeb Reitsma
    March 12, 2021 4:11 pm

    In my comment above, the word “filegroup” after “ON” was removed twice.

  • OMG you are such a lifesaver! I needed to do this for a couple of ancient SK28 databases that had a bazillion rows of data from several years that I had to archive and then trim the fat database down to a more manageable size with just 2 years worth of data. I’m going to look like a hero to our Legal & Security Department when this fun project is finally over.

  • Carlos Bercero
    November 2, 2022 3:32 pm

    Hi Brent, hope you’re doing well!

    Would you please provide an actual example of

    ALTER INDEX REBUILD ON [Another_Filegroup] ?

    Thank you!

  • Hi Brent,

    I am migrating my CRM Dynamics application to Azure as SaaS. Microsoft has given a foot-long list of prerequisites to be completed before giving them the database backup: one of them is not having data files of more than 250 GB each, and another is having only one primary filegroup. I have a 6 TB database in an Always On Availability Group of 3 replicas, with 2 data files of around 3 TB each. Right now I am taking 8 hours of downtime every Saturday and shrinking/emptying file 1 into 4 other data files, each sized and capped at 250 GB. However, this is getting me nowhere: I am able to empty file 1 by 20%, and then over the week it gets filled again by 30%. After emptying the file, I cannot shrink the file and cap its size, because that is too slow when it has to shuffle the data around. I was thinking of emptying the file completely, down to the system data, and then quickly shrinking and removing the file.

    Do you recommend something better than this utterly slow plan? Getting anxious as it is getting closer to migration cut over.

  • We have a 1.5TB DB and are purging about 800GB of historical data and installing jobs to do weekly purges of aged data. I’m amazed at how I can find guidance from http://www.brentozar.com for every SQL Server problem I ever faced! Thanks Brent!

    While the filegroup approach is great, unfortunately our SAS client has SQL Server Standard Edition, where online index rebuilds are not available. There is also the off-row / LOB data issue which I faced in the past (SQL 2008, I think): index rebuilds into a new filegroup did not move LOB data. Our clients are on SQL 2014. I’m not sure if MS fixed this in recent versions. Brent would know!

