Blitz Results: Page Verification Not Optimal
When SQL Server writes data to disk, it just assumes everything's fine. It doesn't periodically patrol the data on disk to check whether it's still good. Only when data is read back from storage does SQL Server discover whether that data is still intact, and that's where page verification methods come in. This part of our SQL Server sp_Blitz script checks sys.databases to see which page verification option each database uses: none, torn page detection, or checksum.
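If you want to see the setting yourself, here's a rough sketch of the kind of query involved (the exact query inside sp_Blitz differs); it lists every database that isn't already using CHECKSUM:

    /* Databases not using CHECKSUM page verification.
       page_verify_option_desc shows NONE, TORN_PAGE_DETECTION, or CHECKSUM. */
    SELECT  name,
            page_verify_option_desc
    FROM    sys.databases
    WHERE   page_verify_option_desc <> 'CHECKSUM'
    ORDER BY name;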
If databases have no page verification or only torn page detection, we may not be able to detect and recover from storage corruption as effectively. On SQL Server 2005 and newer, we usually want CHECKSUM page verification. SQL Server writes a checksum on each page as it goes out to storage, then validates that checksum when the data is read back off disk. This incurs a minor CPU overhead, but it's usually worth it to make corruption easier to catch and recover from.
To Fix the Problem
In SQL Server Management Studio, you can right-click each database, go into its properties, and change Page Verification to Checksum. The change takes effect immediately, but it only affects pages written to disk from that point forward. If you want to be more proactive, you can rebuild the indexes in that database afterwards (rewriting the pages stamps each one with a checksum) and then run DBCC CHECKDB.
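If you'd rather script it than click through SSMS, a minimal T-SQL sketch looks like this. YourDatabase and dbo.YourBigTable are placeholder names, not part of sp_Blitz:

    /* Switch the database to CHECKSUM page verification.
       Only pages written after this point get a checksum. */
    ALTER DATABASE [YourDatabase] SET PAGE_VERIFY CHECKSUM;

    /* Optional, more proactive follow-up: rebuilding indexes rewrites the pages,
       so each one goes back out to disk with a checksum. Repeat per table,
       or use your regular index maintenance scripts. */
    USE [YourDatabase];
    ALTER INDEX ALL ON dbo.YourBigTable REBUILD;

    /* Then confirm what's on disk reads back cleanly. */
    DBCC CHECKDB ([YourDatabase]) WITH NO_INFOMSGS;

Keep in mind the index rebuilds and CHECKDB are I/O-heavy, so schedule them for a maintenance window rather than running them mid-day.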
After the change, monitor CPU usage for significant jumps, which could mean the checksum calculations are adding noticeable overhead on your system.
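One way to watch that from inside SQL Server is the scheduler monitor ring buffer, which records SQL Server's CPU utilization roughly once a minute. This is a common DBA snippet, not part of sp_Blitz, so treat it as a sketch:

    /* Recent SQL Server CPU utilization samples (roughly one per minute). */
    SELECT TOP (30)
           DATEADD(ms, rb.[timestamp] - si.ms_ticks, GETDATE()) AS sample_time,
           rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_percent
    FROM   (SELECT [timestamp], CONVERT(xml, record) AS record
            FROM   sys.dm_os_ring_buffers
            WHERE  ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR'
              AND  record LIKE '%<SystemHealth>%') AS rb
    CROSS JOIN sys.dm_os_sys_info AS si
    ORDER BY sample_time DESC;

Compare the numbers from before and after the change; a sustained rise right after enabling CHECKSUM is the kind of jump to investigate.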