When SQL Server writes data to disk, it just assumes everything’s fine. It doesn’t periodically patrol the data on disk to see whether it’s still good. Only when data is read back from storage does SQL Server discover whether that data is still intact, and that’s where page verification methods come in. We check sys.databases to see whether each database’s page verification is set to NONE, TORN_PAGE_DETECTION, or CHECKSUM.
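If you want to see where your databases stand right now, one simple way (a minimal query, not part of sp_Blitz® itself) is to read the page_verify_option_desc column in sys.databases:

SELECT  name, page_verify_option_desc
FROM    sys.databases
WHERE   page_verify_option_desc <> 'CHECKSUM';  /* anything listed here needs attention */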

If databases have page verification set to NONE or TORN_PAGE_DETECTION, we may not be able to do as good a job recovering from storage corruption. On SQL Server 2005 and newer, we usually want to be on CHECKSUM page verification. SQL Server writes a checksum to each page as it goes out to storage, and then verifies that checksum when the data is read back off disk. This can incur a minor CPU overhead, but it’s usually worth it because it makes corruption easier to detect and recover from. You can learn more about CHECKSUM in this Microsoft blog post.

Even if our databases have CHECKSUM enabled, we still need to verify those checksums to make sure our data hasn’t been corrupted. That’s why sp_Blitz® also checks to see if our backups are done using the WITH CHECKSUM option. This doesn’t make a backup absolutely bulletproof, but it does increase your confidence that the pages with checksums are less likely to be corrupt. The faster you can detect and react to corruption errors, the better, and this gets you more confidence without the full overhead of a daily DBCC CHECKDB.
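If you’re curious whether your existing full backups were taken WITH CHECKSUM, one way to check (a quick sketch against the msdb backup history tables) is the has_backup_checksums column:

SELECT  database_name, backup_finish_date, has_backup_checksums
FROM    msdb.dbo.backupset
WHERE   type = 'D'  /* full database backups */
ORDER BY backup_finish_date DESC;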

To Fix the Problem

As usual, you’ve got options. You can change the settings in one of two ways:

  1. Change the settings using the database properties GUI. Right-click on each database and go into its properties to change Page Verification to Checksum, then click OK.
  2. Change the settings using a TSQL script. We’ve got an example below.
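For a single database, the statement behind that GUI change looks like this (YourDatabase is just a placeholder name):

ALTER DATABASE [YourDatabase] SET PAGE_VERIFY CHECKSUM WITH NO_WAIT;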

But checksums won’t be generated until pages are written to. You can either wait, let this happen naturally, and hope for the best, or you can run a rebuild against every index in the database.


How to Generate a Script to Change Page Verification on Every Database

If you have a lot of databases, or even just enough that you don’t want to right-click and hunt for a setting over and over again, you can run this script to generate the syntax to change all of them at once:

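A minimal version of that script looks something like this: it reads sys.databases and builds an ALTER DATABASE statement for every database that isn’t already on CHECKSUM:

SELECT  'ALTER DATABASE ' + QUOTENAME(name)
        + ' SET PAGE_VERIFY CHECKSUM WITH NO_WAIT;'
FROM    sys.databases
WHERE   page_verify_option_desc <> 'CHECKSUM';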

This script just creates the TSQL for your change: you still need to copy it and execute it in another window.

After the change, you can monitor CPU use percentages, looking for significant jumps, which could mean checksum is impacting your system.

When you configure your full backups, use the WITH CHECKSUM option to check the checksums each time a full backup is run.
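For example, a full backup with checksums looks like this (the database name and file path are placeholders for your own):

BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase.bak'
WITH CHECKSUM;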

What if I Want Those Checksums on All My Indexes?

If you want protection right away, you need to rebuild every index and table in the database. This can be an I/O- and CPU-intensive operation, but it’s pretty easy to set up: just create a maintenance plan that rebuilds all indexes in your database, and run it once. Monitor it carefully, as this can use a lot of log space, too.
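If you’d rather script it than build a maintenance plan, a rough sketch of the same idea is to generate an ALTER INDEX ALL … REBUILD statement for every table in the target database (no error handling or ONLINE options here, just the bare statements):

SELECT  'ALTER INDEX ALL ON ' + QUOTENAME(SCHEMA_NAME(t.schema_id))
        + '.' + QUOTENAME(t.name) + ' REBUILD;'
FROM    sys.tables AS t
WHERE   EXISTS (SELECT 1 FROM sys.indexes AS i
                WHERE i.object_id = t.object_id
                  AND i.index_id > 0)  /* skip tables that are only a heap */
ORDER BY t.name;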

This maintenance plan won’t touch heaps (tables without a clustered index), but those can be a problem anyway.
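If you do want those heaps rewritten with checksums too, SQL Server 2008 and newer let you rebuild the table itself (again, a placeholder table name; this rewrites the whole heap, so watch your log space):

ALTER TABLE [dbo].[YourHeapTable] REBUILD;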