In Michigan where I grew up, we pull the boats out of the water at the beginning of the fall. It’s a bit of a sad time, realizing we’re done having fun on the water, and the seasons are about to change.

Winterized boats at Goose Island Boatyard – Daniel X. O’Neil

To prepare a boat for a few months of storage, we drain some fluids, replace others, give it a good cleaning, do some basic maintenance, and put on a tight waterproof cover.

Databases need winterizing too. I bet you’ve got an application that’s no longer used, but you still have to keep the data online just in case. Or maybe you’ve got archived sales data that you still read but no longer modify.

Here’s how to winterize a database:

Rebuild all of the indexes with 100% fill factor. Sometimes we set lower fill factors to prevent page split problems during heavy concurrency, but when a database is going into its winter, we don’t need to worry as much about that. By setting fill factor to 100% and rebuilding the indexes, we get everything packed in tightly. This means denser data – less free space, faster reads off disk.
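One way to script that rebuild, sketched against a hypothetical SalesArchive database (the database name and the generate-then-run approach are my assumptions, not prescribed by the post):

```sql
USE SalesArchive;
GO
-- Generate a rebuild statement for every user table, packing pages 100% full.
SELECT 'ALTER INDEX ALL ON ' + QUOTENAME(SCHEMA_NAME(schema_id))
     + '.' + QUOTENAME(name)
     + ' REBUILD WITH (FILLFACTOR = 100);'
FROM sys.tables;
-- Review the generated statements, then execute them.
```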

Update statistics with fullscan. Even if we rebuilt the indexes, that only refreshes the statistics on the indexes themselves, not the column statistics, so we still probably need to update our statistics. I’d recommend fullscan here because it gives the optimizer a complete picture of the data, and then we never have to update stats again on a frozen database.
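The same generate-and-run approach works for statistics (again assuming a hypothetical SalesArchive database):

```sql
USE SalesArchive;
GO
-- Generate a fullscan statistics update for every user table.
SELECT 'UPDATE STATISTICS ' + QUOTENAME(SCHEMA_NAME(schema_id))
     + '.' + QUOTENAME(name) + ' WITH FULLSCAN;'
FROM sys.tables;
-- Review the generated statements, then execute them.
```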

Create a read-only database login using an Active Directory group. This way, if you need to grant additional users read-only access to the database, you can simply add them to the Active Directory group. We don’t have to write to the database in order to pull this off – which comes in handy for the next step.
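A sketch of that setup, assuming a hypothetical AD group named YOURDOMAIN\SalesArchive_Readers already exists and a database named SalesArchive:

```sql
USE master;
GO
-- Map the AD group to a SQL Server login.
CREATE LOGIN [YOURDOMAIN\SalesArchive_Readers] FROM WINDOWS;
GO
USE SalesArchive;
GO
-- Create the database user for that login.
CREATE USER [YOURDOMAIN\SalesArchive_Readers]
    FOR LOGIN [YOURDOMAIN\SalesArchive_Readers];
GO
-- db_datareader grants SELECT on every table and view.
EXEC sp_addrolemember 'db_datareader', 'YOURDOMAIN\SalesArchive_Readers';
```

(On SQL Server 2012 and later, ALTER ROLE db_datareader ADD MEMBER works as well.)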

Make the database read-only. Let’s be really confident that the data’s not going to change underneath us. This can also give us a modest performance gain from avoiding locking, and it’ll save me time if I’ve got an inefficient index maintenance script that keeps trying to rebuild indexes even though the data isn’t changing.
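The statement itself is a one-liner (database name assumed); ROLLBACK IMMEDIATE kicks out any open transactions so the change can take effect:

```sql
ALTER DATABASE SalesArchive SET READ_ONLY WITH ROLLBACK IMMEDIATE;
```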

Do a complete DBCC CHECKDB. We want to know that we’ve got a good, clean copy of all of the database pages.
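For example (database name assumed; the options just cut the informational noise and make sure every error is reported):

```sql
DBCC CHECKDB ('SalesArchive') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```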

Test the full backups. Restore one somewhere else, and know that we’ve got a really good backup that can actually be restored. Once I’m confident in that, I may even consider no longer backing up this database – especially if it’s over a terabyte – as long as I know I’ll always have that backup available in multiple places.
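A sketch of that final backup-and-verify pass; the database name, disk paths, and logical file names here are all assumptions you’d swap for your own:

```sql
-- Take the final backup with checksums so page damage is caught at backup time.
BACKUP DATABASE SalesArchive
    TO DISK = 'X:\Backup\SalesArchive_Final.bak'
    WITH CHECKSUM, INIT;

-- Quick sanity check of the backup media.
RESTORE VERIFYONLY
    FROM DISK = 'X:\Backup\SalesArchive_Final.bak' WITH CHECKSUM;

-- The real test: restore it somewhere else under a different name.
RESTORE DATABASE SalesArchive_Test
    FROM DISK = 'X:\Backup\SalesArchive_Final.bak'
    WITH MOVE 'SalesArchive'     TO 'X:\Data\SalesArchive_Test.mdf',
         MOVE 'SalesArchive_log' TO 'X:\Data\SalesArchive_Test.ldf';
```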

At the end of a database’s life, winterizing it gives me one less database to worry about. I know it’s good, I know I’ve got a backup, and I can spend my time focusing on databases that are much more volatile.

  1. My only complaint about this post is the picture. Brought chills down my spine!!

    I’ve made the mistake of putting DBs in ReadOnly and then having to change that constantly to give people permissions. Definitely a great idea to figure out your security needs beforehand. Great advice, thanks!

  2. Pingback: (SFTW) SQL Server Links 15/11/13 • John Sansom

  3. Doing this early rather than later is a good idea too. I’m doing this with a database on a server where SQL Server has been off since early 2011, when the database was migrated to a new server – but apparently not all of the old data we need made it over. Unfortunately I ran into a corrupted log file. If this had been done right away, we would have had a backup to go to. However, since the only DBA at the time has since left, and the team that supports the app didn’t even know this was their server, I don’t know where to look for a backup.

  4. Brent, I’d suggest adding to that: freeing up any space you can in the db without undoing your defrag, and releasing all of the log space. A TB-sized db that is being winterized often has a sizable log!

  5. I know shrinking a database in production is not a wise thing to do since it will eventually reclaim that space with new data, but what about shrinking these “winterized” databases? From what I understand, rebuilding the indexes will free up space within the data files, but not on disk itself. Is there any other way to shrink these databases without using DBCC SHRINKDATABASE? Paul Randal says shrinking the database would generate tons of index fragmentation (http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/) but rebuilding the indexes with 100% fill factor would also free up space to be released. So, it seems we would want to rebuild the indexes with 100% fill factor and then shrink the db and then rebuild the indexes again. Is shrinking the database worth all this if we don’t absolutely have to have the space? Is it unnecessary processing?

    Thanks for a good post…and it is timely as well since I’m just now creating a document on how to properly drop a database and a database instance in our SQL Server environments.
