Blitz Result: Slow Storage Reads or Writes
SQL Server feels the need – the need for speed. Storage speed, to be more specific. One of the most common bottlenecks is underperforming storage subsystems.
SQL Server tracks read and write speeds for each database file – both data and log files. This part of our SQL Server sp_Blitz script checks sys.dm_io_virtual_file_stats looking for average read stalls (latency) over 200 milliseconds and average write stalls over 100 milliseconds. This script breaks it down by file. Yes, those thresholds are horrifically high – but that’s the point. We only want to alert you when the numbers are awful.
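A simplified sketch of this kind of check – not the actual sp_Blitz code – might look like the query below. It divides cumulative stall time by I/O count per file (guarding against divide-by-zero on files that have never been read or written) and flags anything over the thresholds:

```sql
-- Hypothetical simplified version of the check: flag database files whose
-- average read stall exceeds 200ms or average write stall exceeds 100ms.
-- Stats are cumulative since the last SQL Server restart.
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  vfs.database_id = mf.database_id
        AND vfs.file_id     = mf.file_id
WHERE   vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  > 200
   OR   vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) > 100;
```

Because the counters are cumulative, a long-running server with one historical bad patch can show inflated averages – sample the DMV twice and diff the numbers if you want current latency.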
What exactly is a bad number here? That’s a really good question, and it’s open for debate. Microsoft’s SQL Customer Advisory Team, in point #4 of their Top 10 OLTP Issues, suggests that reads over 15ms are a bottleneck and that log file writes should be 1ms or less. Keep in mind that this team usually deals with high-performance environments, though – not every SQL Server can afford to be quite this fast.
No matter whose guidelines you pick, I’ve got bad news: you’re already doing worse. You arrived at this page because you have database files that are taking over 200ms for reads and/or 100ms for writes.
To Fix the Problem
There are two ways to fix slow storage: make the storage go faster, or ask the storage to do less work.
The second option – asking storage to do less work – is usually the easiest way to start. By dropping unused indexes, we can insert/update/delete data faster, run backups faster, do DBCCs faster, and even do index rebuild jobs faster. By using the right indexes, we can avoid storage-intensive table scans and do less data juggling in TempDB.
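To find candidate indexes to drop, you can compare index write activity against read activity. The sketch below (my own illustration, not part of sp_Blitz) lists nonclustered indexes in the current database that have been maintained by writes but never used for reads since the last restart:

```sql
-- Hypothetical sketch: nonclustered indexes with writes but zero reads
-- since the last restart. Review carefully before dropping any index -
-- usage stats reset on restart and may miss periodic workloads.
SELECT  OBJECT_NAME(i.object_id) AS table_name,
        i.name AS index_name,
        us.user_updates,
        us.user_seeks + us.user_scans + us.user_lookups AS total_reads
FROM    sys.indexes AS i
JOIN    sys.dm_db_index_usage_stats AS us
        ON  us.object_id = i.object_id
        AND us.index_id  = i.index_id
WHERE   us.database_id = DB_ID()
  AND   i.type_desc = 'NONCLUSTERED'
  AND   us.user_seeks + us.user_scans + us.user_lookups = 0
ORDER BY us.user_updates DESC;
```

Indexes near the top of that list are costing you writes on every insert/update/delete while paying nothing back in read performance.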