SQL ConstantCare® analyzed over 3,000 of your SQL Servers this month. Here are the things we check for and the number of servers where we found this problem this month:
Backup
- Back Up Your TDE Certificate – 137 servers affected
- Take a Full Backup – 370 servers affected
- Take a Log Backup – 353 servers affected
Corruption
- Database Corruption Detected – 5 servers affected
Patching
- Dangerous Build with Known Corruption – 11 servers affected
- Dangerous Build with Security Bug – 26 servers affected
- Update Available – 1,696 servers affected
- Upgrade to 2016 SP1 for Freebies – 15 servers affected
Performance
- Auto-Update Stats Disabled – 40 servers affected
- Change Repetitive Maintenance Plans – 105 servers affected
- Consider Enabling Query Store – 1,029 servers affected
- Consider Migrating to a 64-Bit Server – 22 servers affected
- Consider Re-Enabling Indexes – 129 servers affected
- CPU Schedulers Offline – 52 servers affected
- Deadlocks Occurring – 2 servers affected
- Default Parallelism Settings – 110 servers affected
- Disable Auto-Close – 70 servers affected
- Disable Auto-Shrink – 101 servers affected
- Drop Unused Indexes – 150 servers affected
- Duplicate Indexes – 40 servers affected
- Enable Instant File Initialization – 392 servers affected
- FYI: Resource Governor Enabled – 103 servers affected
- FYI: Server Triggers Enabled – 244 servers affected
- FYI: Unusual SQL Server Edition – 117 servers affected
- Heap Needs to Be Rebuilt – 585 servers affected
- High Memory Use for In-Memory OLTP (Hekaton) – 3 servers affected
- Low Waits: SQL Server Was Bored This Week. – 2,367 servers affected
- Memory Dangerously Low – 22 servers affected
- Memory Dangerously Low in NUMA Nodes – 23 servers affected
- Memory Leak – 7 servers affected
- Memory Nodes Offline – 10 servers affected
- Memory Pressure Affecting Queries – 62 servers affected
- Plan Cache Erased Recently – 217 servers affected
- Poison Wait Detected – 469 servers affected
- Poison Wait: CMEMTHREAD – 35 servers affected
- Poison Wait: Serializable Locking – 131 servers affected
- Remove the Shrink Database Job – 116 servers affected
- Remove the Shrink Database Plan – 22 servers affected
- Server Power Changed – 100 servers affected
- Set Fill Factor Higher – 385 servers affected
- Set MAXDOP to Reduce Parallelism Waits – 106 servers affected
- Set Max Memory Lower – 198 servers affected
- Too Much Free Memory – 375 servers affected

Planning
- Database Size Increased – 205 servers affected
Query Tuning
- Compilation Timeout – 704 servers affected
- Joining to Table Valued Functions – 586 servers affected
- Queries may have non-SARGable predicates – 1,057 servers affected
- Query Forced to Run Single-Threaded – 803 servers affected
- Query Slowed by Implicit Conversion – 387 servers affected
- Query with Missing Index Request – 709 servers affected
Reliability
- Call Microsoft to Report Memory Dumps – 131 servers affected
- Check for Broken Log Shipping – 70 servers affected
- Check for Corruption ASAP – 820 servers affected
- Check Unusual Database States – 202 servers affected
- Disable Priority Boost – 28 servers affected
- Enable Checksums for Corruption Detection – 320 servers affected
- Enable Remote Admin Connections – 567 servers affected
- Full Text Indexes Not Updating – 49 servers affected
- In-Memory OLTP Transaction Errors – 4 servers affected
- Move TempDB Off the C Drive – 174 servers affected
- Move User Databases Off the C Drive – 209 servers affected
- No Cluster Nodes Available for Failover – 9 servers affected
- Restart to Correct TempDB File Error – 10 servers affected
- Server Keeps Restarting – 157 servers affected
- Transaction Log Larger than Data File – 735 servers affected
- Unsupported Build of SQL Server – 458 servers affected
And The Top 10 Overall
- Update Available – 1,676 servers affected. I get it, patching is hard, and nobody’s doing it.
- Queries may have non-SARGable predicates – 1,076 servers affected. (There's a sketch of what this pattern looks like right after the list.)
- Consider Enabling Query Store – 1,025 servers affected. Keep in mind that this is 1,025 servers that are patched to the level where I’d be comfortable telling them to turn on Query Store, and…they’re still not doing it. I haven’t gotten to the point where I’m going to stop recommending it, but I’m getting pretty close. (Turning it on is basically a one-liner; there's a sketch after the list.)
- Check for Corruption ASAP – 824 servers affected. This one just blows me away: even when people know they need to do it, they won’t. Sure, sometimes they offload it, but there are even tiny sub-100GB servers in here that aren’t getting checked. That’s why I’ve asked Microsoft to run DBCC CHECKDB in the background. (A minimal DBCC CHECKDB sketch follows the list.)
- Query Forced to Run Single-Threaded – 819 servers affected. People love them some scalar functions, and we’re only warning you about queries in your top ~50 resource consumers.
- Transaction Log Larger than Data File – 736 servers affected. This alert filters for log files over 5GB, too, so it’s not even like these are small databases. Folks just don’t know how to get their log files under control for some apps, and they’ve given up trying. (I need to write a blog post about the rules that are getting ignored the longest.) Why would I be concerned? Because it affects restore times: the log file doesn’t get Instant File Initialization. (There's a query to spot these databases after the list.)
- Query with Missing Index Request – 733 servers affected. This isn’t just about Clippy’s bizarre suggestions; it’s also about third-party vendor apps where folks can’t touch the indexes. (There's a query against the missing index DMVs after the list.)
- Compilation Timeout – 724 servers affected. We’re specifically warning here about queries that are in your top ~50 resource consumers.
- Joining to Table Valued Functions – 599 servers affected. SQL Server 2019’s scalar function inlining doesn’t help this, either.
- Heap Needs to Be Rebuilt – 598 servers affected. A lot of folks are using Ola Hallengren’s maintenance solution, and they’re surprised to hear that Ola purposely ignores heaps. (A sketch for finding and rebuilding heaps follows the list.)
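A few hedged T-SQL sketches for the items above; all of the object names (dbo.Orders, OrderDate, YourDatabase, and so on) are placeholders, not real rule output. First, the shape of a non-SARGable predicate and its rewrite:

```sql
-- Non-SARGable: wrapping the column in a function means SQL Server
-- can't seek on an index over OrderDate, so it scans instead.
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2019;

-- SARGable rewrite: an open-ended date range can use an index seek.
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20190101'
  AND OrderDate <  '20200101';
```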
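For the Query Store item, turning it on is one statement per database on SQL Server 2016 and newer. This is a minimal sketch; the database name and the storage/capture settings are placeholders you'd tune for your own environment.

```sql
ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,   -- capture and persist query history
     MAX_STORAGE_SIZE_MB = 1024,    -- cap how much space Query Store can use
     QUERY_CAPTURE_MODE = AUTO);    -- skip trivial, infrequent queries
```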
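For the corruption checks, the bare-minimum version is a full DBCC CHECKDB per database, scheduled in a quiet window (or offloaded to a restored copy). Again, the database name is a placeholder.

```sql
DBCC CHECKDB ([YourDatabase])
    WITH NO_INFOMSGS,    -- only report actual problems
         ALL_ERRORMSGS;  -- don't truncate the error list if corruption is found
```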
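To see which databases the transaction-log rule is talking about, a query along these lines compares total log size to total data size per database. The 5GB floor mirrors the filter described above; treat the exact thresholds as a judgment call.

```sql
SELECT d.name AS database_name,
       -- mf.size is in 8KB pages; dividing by 128 converts to MB
       SUM(CASE WHEN mf.type_desc = 'ROWS' THEN CAST(mf.size AS BIGINT) ELSE 0 END) / 128 AS data_mb,
       SUM(CASE WHEN mf.type_desc = 'LOG'  THEN CAST(mf.size AS BIGINT) ELSE 0 END) / 128 AS log_mb
FROM sys.master_files AS mf
JOIN sys.databases AS d
  ON d.database_id = mf.database_id
GROUP BY d.name
HAVING SUM(CASE WHEN mf.type_desc = 'LOG' THEN CAST(mf.size AS BIGINT) ELSE 0 END)
     > SUM(CASE WHEN mf.type_desc = 'ROWS' THEN CAST(mf.size AS BIGINT) ELSE 0 END)
   AND SUM(CASE WHEN mf.type_desc = 'LOG' THEN CAST(mf.size AS BIGINT) ELSE 0 END) / 128 > 5120;
```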
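For the missing index requests, Clippy's raw suggestions live in the missing-index DMVs. A rough triage query looks like this; the "rough_impact" math is one common heuristic, not an official formula.

```sql
SELECT TOP (25)
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact,
       migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS rough_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY rough_impact DESC;
```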
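And for the heap item, finding heaps and rebuilding one is straightforward. The table name in the ALTER is a placeholder, and a rebuild takes locks, so schedule it accordingly.

```sql
-- Heaps are rows in sys.indexes with index_id = 0 (no clustered index).
SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
       t.name AS table_name
FROM sys.tables AS t
JOIN sys.indexes AS i
  ON i.object_id = t.object_id
WHERE i.index_id = 0;

-- Rebuilding a heap cleans up forwarded records (placeholder table name):
ALTER TABLE dbo.YourHeapTable REBUILD;
```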
What I Think About All This

Now, I know you, dear reader. You have great confidence in your ability to manage your environment. You believe you’ve got everything under control, and you couldn’t possibly have the above problems on your servers, right? Right?
Except you do.
When companies hire me for consulting, they get SQL ConstantCare® for free, and the first email is usually pretty shocking. Just last week, a customer found out that SQL Server was dutifully reporting database corruption, but they didn’t know because they didn’t have corruption alerts configured correctly and no one was monitoring the suspect_pages table.
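If you want to check that table yourself right now, it's a plain query against msdb, no third-party tools needed. Roughly speaking, event_type values 1 through 3 are pages SQL Server currently considers suspect; higher values mean the page was later restored, repaired, or deallocated.

```sql
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       page_id,
       event_type,      -- 1-3 = currently suspect; higher = since restored/repaired/deallocated
       error_count,
       last_update_date
FROM msdb.dbo.suspect_pages
ORDER BY last_update_date DESC;
```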
I get it: monitoring is expensive. Except it isn’t: SQL ConstantCare® is $495 a year, and that isn’t per-server: that’s one price to keep an eye on all of your servers. You’re busy. Offload some work to me.
Comments
I believe you’re missing some text for #2 of the top 10 list:
“2. Queries may have non-SARGable predicates – 1,076 servers affected. This”
Great catch! Deleted “This.”
Aw, I was hoping for MORE text, not less!! 😀
Very, very interesting. I immediately think of the selection bias: these results are from SQL CC customers, likely those already paying attention, or at least trying to pay attention, to these details. This is hopefully not the sa password = “password” crowd.
These are also great reference points to show the…we’ll use the term “under-informed manager”…about what DB administration really entails and how good a job you are doing, or why you need more staff/newer servers in order to do a better job.
Of course the number 824 comes up on the Check For Corruption ASAP warning.
Hello everybody,
I’ve worked with Ola’s scripts for years. Because I have more than one instance of SQL Server on my machines, I expanded Ola’s script into a solution that backs up and maintains all SQL instances on an entire machine automatically.
It lives in a separate SQL instance that maintains and backs up all the other instances.
If anyone is interested, I will share it on GitHub.
Depending on how active you are on GitHub, you might even think about working with Ola to fork his repo and take it over permanently. There are a bunch of pull requests and issues over there, and he hasn’t been active on GitHub for months.
Hello Brent, I just opened an account on GitHub in order to publish the solution if somebody likes it. In our company we work with MS Team Foundation, so I am not familiar with GitHub.
Brent,
We run index update jobs from Ola Hallengren at 4 AM. At 6 AM we clear the cache out. In theory we are getting the most up-to-date statistics for our queries every day. We have parameter sniffing turned off. We are not seeing any performance issues while the cache fills back up at that time of the morning. What is flawed about this process?
Bill – I wish I could do free personal consulting for everyone, but that’s kinda beyond what I can do quickly. If you’re a SQL ConstantCare customer, shoot us an email. Thanks!