How to REALLY Compress Your SQL Server Backups

StackOverflow’s database servers are hosted in a colo facility. They pay by the byte for transfer in and out of their servers, so it’s really crucial to compress the database backups as much as possible. The smaller the backups, the less they pay. They’re already using backup compression, but they wanted to explore more ways to cut their backup sizes.
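
For context, native backup compression is just an option on the backup command itself. Something like this is what “already using backup compression” looks like – the database name and path here are made up for illustration:

  BACKUP DATABASE StackOverflow                               -- hypothetical database name
  TO DISK = N'\\fileshare\backups\StackOverflow_full.bak'     -- hypothetical path
  WITH COMPRESSION, CHECKSUM, STATS = 10;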

There are two things DBAs can do to deflate their database backups, but both of them have serious implications. The only way this compression process will work is if you’ve got a second database server that can:

  1. Restore the production backups
  2. Do some work on them to decrease their size
  3. Back up the smaller databases

Notice I didn’t use the word “shrink” – nowhere in today’s discussion am I going to shrink the databases. Shrinking databases is evil. Instead, we’re going to do some things to lose the fat and keep the muscle.

Do not try what you’re about to read on your own production server. The only reason we can do these at StackOverflow is because we’ve got a second database server available. These techniques should never be used if you’ve only got one database server, because dropping your indexes and changing to 100% fill factor will both give you horrendous performance.

Consider taking this approach with a dedicated fire drill server. In a perfect world, you’ve got a server lying around that you’re using for fire drills: periodic restores of your databases. The whole reason we do backups is so that we can do restores, and you should be testing your backups as often as possible. In this scenario, we’re actually testing every single one of our backups.
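
If you’re picturing that fire drill, it’s nothing fancier than restoring the production backup onto the second server. A sketch, with made-up names and paths:

  -- Hypothetical names and paths; adjust the logical file names to match your database.
  RESTORE DATABASE StackOverflow
  FROM DISK = N'\\fileshare\backups\StackOverflow_full.bak'
  WITH MOVE N'StackOverflow'     TO N'D:\Data\StackOverflow.mdf',
       MOVE N'StackOverflow_log' TO N'L:\Logs\StackOverflow_log.ldf',
       REPLACE, STATS = 10;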

Now, on to the shrin…I mean, compressing!

Drop All Non-Clustered Indexes

There are two kinds of indexes – clustered and non-clustered. Clustered indexes aren’t really indexes at all; they define how the table itself is laid out on disk. Dropping a clustered index doesn’t save space. Non-clustered indexes, on the other hand, take up space because they’re additional copies of parts of the table. Non-clustered indexes can help performance, but they can be rebuilt from scratch as long as you save the index definition first.

Which of these takes less space to back up:

  • The string “CREATE INDEX IX_EmployeeName ON dbo.Employees (LastName, FirstName)”, or
  • An index with thousands or millions of rows in it

The answer is the string. To minimize what we’re backing up, we need to:

  1. Script out all of the non-clustered indexes in the database
  2. Save those definitions to a table (or a stored procedure)
  3. Create a stored proc that will loop through those index definitions and recreate them later
  4. Drop the indexes

If our database’s content is, say, 30% indexes, we just dropped the size of our backup by 30%. The downside is that after the database is restored somewhere else, it’ll take longer for the database to be fully usable, because we have to run the stored proc to recreate all our indexes. If we’re running SQL Server Enterprise Edition, we can recreate those indexes with ONLINE = ON so the tables stay available while they build.
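
To get a rough idea of how much of your database is non-clustered indexes before you bother with any of this, a quick sanity check against sys.dm_db_partition_stats works. This is just my own sketch, not part of the StackOverflow scripts:

  -- Rough ballpark: reserved space in heaps/clustered indexes vs. non-clustered indexes.
  SELECT CASE WHEN ps.index_id >= 2 THEN 'Non-clustered indexes'
              ELSE 'Heaps and clustered indexes' END AS storage_type,
         SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
  FROM sys.dm_db_partition_stats AS ps
  JOIN sys.tables AS t ON t.object_id = ps.object_id   -- user tables only
  GROUP BY CASE WHEN ps.index_id >= 2 THEN 'Non-clustered indexes'
                ELSE 'Heaps and clustered indexes' END;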

There are a couple of scripts out there that will generate T-SQL definitions for all of the indexes in a database.

To make these work, you’ll want to adapt them so that they insert the scripts into a table for later execution, and then drop the indexes. I know, I know, you’re going to want the script I’m using at StackOverflow, but I’m not quite comfortable publicizing that code just yet. It’s still a little duct-tape-y. I’ll revisit that in the future though.
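
To give you a flavor of what that adaptation looks like, here’s a stripped-down sketch – not the StackOverflow script – that saves simple CREATE INDEX and DROP INDEX statements into a table. It deliberately ignores included columns, filtered indexes, partitioning, compression settings, and indexes that back constraints, all of which a real version has to handle:

  -- Simplified sketch: plain non-clustered indexes only.
  IF OBJECT_ID('dbo.IndexDefinitions') IS NULL
      CREATE TABLE dbo.IndexDefinitions
          (CreateStatement nvarchar(max), DropStatement nvarchar(max));

  INSERT dbo.IndexDefinitions (CreateStatement, DropStatement)
  SELECT N'CREATE ' + CASE WHEN i.is_unique = 1 THEN N'UNIQUE ' ELSE N'' END
           + N'INDEX ' + QUOTENAME(i.name) + N' ON '
           + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name) + N' ('
           + STUFF((SELECT N', ' + QUOTENAME(c.name)
                           + CASE WHEN ic.is_descending_key = 1 THEN N' DESC' ELSE N'' END
                    FROM sys.index_columns ic
                    JOIN sys.columns c ON c.object_id = ic.object_id
                                      AND c.column_id = ic.column_id
                    WHERE ic.object_id = i.object_id AND ic.index_id = i.index_id
                      AND ic.is_included_column = 0
                    ORDER BY ic.key_ordinal
                    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, N'')
           + N')',
         N'DROP INDEX ' + QUOTENAME(i.name) + N' ON '
           + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
  FROM sys.indexes i
  JOIN sys.tables t ON t.object_id = i.object_id
  WHERE i.type = 2                    -- non-clustered only
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0;

The drop step is then just a loop over dbo.IndexDefinitions that runs each DropStatement, and the recreate proc does the same with CreateStatement after the restore. One of the comments below spells out how many more options a production-grade version has to cover.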

This index removal task is extremely fast, only a matter of seconds, because SQL Server is just dumping metadata. The next step, however, is much more intensive.

Rebuild Tables with 100% Fill Factor

SQL Server stores data in pages, and you can think of these much like the pages in the phone book. If we tried to fill the pages completely full, then whenever we needed to add a single record, we’d have all kinds of shuffling to do. When we know we’re going to be adding a lot of data, we’re better off leaving some working area on each page so that we can add records without doing a lot of shuffling every time. The percentage of each page that SQL Server fills when it builds or rebuilds an index is known as the fill factor.

Normally, StackOverflow’s tables wouldn’t have a 100% fill factor because insert performance would be horrible. However, in our backup scenario, we want to cram all of the pages completely full. The solution is to rebuild all of the clustered indexes with a 100% fill factor. The lower your default fill factor, the more space you’ll save by rebuilding to 100% fill factor. This step takes a lot of IO crunching, and again, it only makes sense when you’ve got a second database server available to do all this work.
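
A minimal way to do that rebuild on the restore server – again, a sketch rather than the production script – is to generate an ALTER INDEX … REBUILD for every clustered index:

  -- Rebuild every clustered index completely full (run on the second server only).
  DECLARE @sql nvarchar(max) = N'';

  SELECT @sql = @sql
      + N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
      + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
      + N' REBUILD WITH (FILLFACTOR = 100);' + NCHAR(13) + NCHAR(10)
  FROM sys.indexes AS i
  JOIN sys.tables AS t ON t.object_id = i.object_id
  WHERE i.type = 1;                   -- clustered indexes only

  EXEC sys.sp_executesql @sql;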

Our Results: 50-55% Smaller Backups

Cutting backup size in half means:

  • Half the bandwidth expense
  • Half the time to move backups across the wire for disaster recovery
  • Half the long-term storage requirements

Your results will vary depending on your fill factor and how many indexes you have.

45 Comments

  • Brent,

    Great post! These are some clever ways to reduce the size of a DB backup file.

    You may have mentioned this in your post and I missed it – sorry if this is a repeat. The described tasks do save DB backup storage space, but at the cost of both the processing time to create the smaller DB backups (which you mentioned in the post) AND the processing time to undo the compression – rebuilding the desired fill factor on each table and rebuilding the non-clustered indexes (unfortunately, not as quick as dropping them) – before the restored DB is back to where it was and performing well. That means longer recovery windows (RTO / RPO).

    Not a criticism, but a point of observation – most things in life are trade-offs (no free lunch).

    Scott R.

  • Scott – yes, but remember that we already have the full-size backups going to a file share inside the same datacenter. These smaller files are only for offsite disaster recovery. Due to the length of time it takes to transmit these files offsite, this method actually cuts our RTO/RPO because we can get the files copied, restored, and recreate the indexes faster than we can copy the full-size files.

  • Great couple of tips Brent. Thanks…

  • I never understood why MS couldn’t write a SQL backup API where the index is backed up as just a pointer or whatever, and on restore it runs CREATE INDEX. There’s no need to back up the index data, except that the restore is faster when you do. We run our backups straight to LTO-4 tape, and in QA we can restore a 150GB database in an hour or so.

  • Wow those are some really good tips! It’s amazing that rebuilding your clustered indexes with 100% fill factor reduced the size of your backups by 50%.

    So let me get the whole process straight: before each backup, you drop all non-clustered indexes, rebuild all clustered indexes with 100% fill factor, back up, then run a proc which rebuilds all non-clustered indexes and sets the fill factor on the clustered indexes back to what it was previously?

    • Nick – yes, but keep in mind that this happens on a second system. We don’t do this on the production server. We do a backup in production, restore it on the second system, and then create this newly compressed copy for offsite DR use.

  • Great post! We currently use an SSIS task to 7z the backups (smaller databases so this works for now) and send them to a vendor SFTP. But modifying the SSIS task to do something like this might be in our future to minimize the transfer time.

  • Well, yes, this is a great idea. Only if you have an extra server, non-clustered indexes and fill factors less than the default 100 (or 0). (Some 80% of the clients I work with have none of the above!) And, you also need to have some mighty fast and reliable drives. 🙂

    • Feodor – yes, that’s exactly what I covered in the post. It’s not for everyone. There are very few tips that make everyone’s backup & recovery solution faster universally. If there was an easy button, it’d already be pushed. 😉

      Of course you should have some mighty fast and reliable drives – what OTHER kinds of drives would you have on a database server? 😉

  • Brent, reality is often surprising…

  • I saw something similar in the MVP Deep Dives book – instead of dropping them, though, the author put the non-clustered indexes in a separate filegroup. Then do a filegroup backup, skipping that filegroup (the index metadata is in the system tables). His restore process is more cumbersome because he doesn’t have the indexes scripted out – which is something that Brent’s approach has.

    So a hybrid approach sounds awesome; use the various scripts to script out the indexes, put the nonclustereds on a different filegroup, and skip the backup of that particular FG.

    • Yeah, I liked that approach in MVP Deep Dives, but it requires schema changes – something most shops can’t do, or they have designs for things like partitions that usually put data and indexes on the same filegroups.

  • That’ll teach me to post without fully grokking the subject; the Deep Dives problem is that you have to drop the already-existing indexes (via the system tables) in order to create them, and that requires single-user mode.

    But I swear I remember reading about restoring FG backups and not including one, and recreating it later.

  • Brent, in my scenario I have old tables migrated from 2005 to 2008 R2, and these are used for historical search (index seek operations). Each month sits on its own partition, and each partition sits in one filegroup and OS file.

    I understand the concept of rebuilding an index onto another filegroup to avoid fragmentation, but with partitioning I am stuck with the filegroup where the partition scheme is defined.

    With that in mind, issuing:

    ALTER TABLE [tablename]
    REBUILD PARTITION = 51 WITH (DATA_COMPRESSION = PAGE, SORT_IN_TEMPDB = ON) ;
    GO

    That will effectively reduce the space used by partition ID 51, but the question is, how do I reduce the file size? Let’s assume I got 50% savings and the file is 10 GB. How do I reclaim 5 GB of that file and give it back to the OS without fragmenting my partition?

    Your answer is appreciated.

    • After you do the rebuild with compression, do a shrink with truncate_only. That’ll free up the space at the end of the file, but won’t move pages around inside the file, thereby avoiding fragmentation.

      • Makes sense. Will try that, thanks.

      • I have one of my DBAs trying, but I am still doubting that the datafile will be 100% used. The truncation gets rid of the empty portion at the end, but doesn’t fill the holes right?

        I got this suggestion:

        ALTER TABLE [table] REBUILD PARTITION = [#] WITH (DATA_COMPRESSION = PAGE);
        dbcc shrinkfile ([#],1);
        ALTER INDEX ALL ON [database].[dbo].[table] REORGANIZE PARTITION = [#];

        Will the reorganization extend the datafile?

        • Oscar – there is no way to get the datafile exactly 100% used without starting a new filegroup and moving objects into it, in the right order, inserted perfectly. This is one of those situations where I’ve gotta ask what you’re trying to achieve. You’re going down the road of diminishing returns here.

          • Partition 51 resides on a filegroup that is 100% uncompressed. If I compress the partition, I free up 50% of the filegroup. Wanted to know if I can shrink the filegroup and reclaim 50% of the space not utilized without fragmentation.

          • No, you can’t, unfortunately. By creating a new filegroup, and moving that partition to the new filegroup, you can get that filegroup right-sized to begin with, though.

          • I think you just gave me a great workaround. Moving the partition to a new filegroup. Will try that out. Thanks.

  • Brent,

    Your claim of 50-55% smaller backups is on the resulting compressed backup? What about a regular uncompressed full backup of the real DB versus the trimmed one?

    Compression is really good at shrinking away duplicate data (which is why IIS and apache logs compress so well) so I’m a bit skeptical how removing padding, which removes empty rows, would help.

    • Justin – keep in mind that indexes aren’t duplicate data. If I have one index by first name and one by last name, those indexes are duplicate in the sense that the original row of data has both the first and last name, but the backup compression won’t give me 3x compression on that data.

  • Did you ever clean up the stackoverflow script and release it? Would love to have.

  • Andrej Kuklin
    December 3, 2014 3:44 am

    Instead of scripting the indexes it would be better just to disable them.
    Generic scripting with T-SQL is really not trivial. You’ll need to consider the different index options (PAD_INDEX, STATISTICS_NORECOMPUTE, IGNORE_DUP_KEY, etc.) and the partitioning clause, and if the index supports a unique/primary key constraint, your script should drop the constraint, not the index. If your PK/UK constraint is being referenced by a foreign key constraint, you need to disable the FK constraint before dropping the PK/UK constraint. You’ll also need to consider data compression options (per partition!), the filter definition for filtered indexes, included columns… All this stuff is absent in the script you’re referencing, and my list is far from complete.

  • This might be worth updating for SQL Server 2014. You could probably get some extra compression on the tables after dropping the non-clustered indexes by converting everything to use clustered columnstore indexes.

  • How would the method you describe differ from doing something like the following?

    EXEC sp_msforeachtable 'ALTER INDEX ALL ON ? DISABLE;'

    I’m a bit fuzzy on it, but I think disabling the indexes actually removes their metadata from the db. If so, you’d think it would achieve the same thing as the more complicated method, while also handling the foreign key links and such.

    • From Books Online: “Any index can be disabled. The index definition of a disabled index remains in the system catalog with no underlying index data. Disabling a clustered index prevents user access to the underlying table data. ”

      Be careful disabling indexes in production– that bit about the clustered indexes isn’t always fun 🙂

      • Of course, but then we aren’t discussing production here. As Brent wisely pointed out early in the article: “Do not try what you’re about to read on your own production server. The only reason we can do these at StackOverflow is because we’ve got a second database server available.”

        I’m still left wondering if the disabling of the indexes achieves something similar to the more complex technique described. Especially since the way to re-enable an index is to essentially do a rebuild of it.

        Frankly, I find the inability to access the data to be a plus, since it would be “in your face.” Using your restored database without indexes (without knowing you are) is a recipe for a user-experience disaster.

        • Sure, for nonclustered indexes, disabling them removes the space. The metadata is pretty negligible. You’d just have to avoid accidentally rebuilding the nonclustered indexes in the final step he lists as that rehydrates them. (I made that word up.)

          • I couldn’t resist a test. On our largest db (102.8GB BAK compressed), it reduced the size by 311MB. For us, that’s probably not worth all the trouble. I guess since we use fillfactor 0, it’s just the index size that got trimmed, and it appears to not be all that much.

            Thanks for giving us so much good sql info. We only stumbled onto the site a month ago, and we already have learned/gained so much.

          • Oh that’s super interesting to know. Thanks for letting us know how the test turned out, it’s always interesting to hear how well things do or don’t work on different systems.

  • That is quite clever. It is interesting – I have 4400 databases on our server (each job we run is a database). Sometimes it is good, sometimes it is bad :). In any case, I archive and detach a lot of databases, as they do not need to be online, thus giving more resources to those DBs that are used. This is a great way to archive the databases, and then when we bring them back online I just run my index script and they’re fully functional again. Plus it saves space on the server, cuts the backup size, and there’s no shrinking… oh what joy!!!!

  • In SQL Server 2014 Enterprise Edition, while creating a maintenance plan you can specify backup compression, and the backup size is much smaller – e.g., for a 500 GB database the compressed backup size is reduced to only 70 GB.
    When you do a restore it turns out fine and I get the original 500 GB database back. No need to change any schema.

  • Johnson Weltch
    November 25, 2016 6:20 am

    Thanks, this was very helpful. While searching for this topic I found one more article which was helpful. That article discussed the advantages:

    (removed)

  • Brian Boodman
    August 29, 2017 4:34 pm

    We can do even better:
    1. Take an uncompressed backup, then compress it via LZMA. Note that third-party backup utilities do this automatically. The fancy ones use a file driver, thus reducing disk IO rather than only reducing network traffic. Disabling table/row compression will probably produce better results.
    2. After you remove all your indexes, replace them with clustered columnstore indexes. On my system, this dropped the size of my backup by over 80% compared to LZMA.
    3. Set all your columnstore indexes to archival compression.

    I’ll note that seeing such a large improvement is not abnormal; https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview promises up to 10X data compression.
