In rough order of how frequently I see ’em:
- There are different workloads running on the server (like a backup is running right now.)
- You have a different query plan due to parameter sniffing.
- The query changed, like someone did a deployment or added a field to the select list.
- You have a different query plan due to a statistics change.
- You’re getting a different share of hardware (perhaps someone changed a hardware setting, introduced a noisy neighbor VM, or you have hardware competition in the cloud.)
- Someone added or removed an index.
- Someone added or removed a trace flag or other server-level config setting.
- Your application itself changed, and it’s not able to digest the results as quickly.
- Someone patched, and it had an unexpected side effect.
- You have the same plan, but different memory grants.
- Someone’s modifying more rows at a time, so you’re hitting lock escalation.
- Your query is being throttled by Resource Governor.
- It never really was that fast to begin with.
And starting with SQL Server 2017 (if you have columnstore indexes) and SQL Server 2019 (for everyone):
- You’re getting a different memory grant due to adaptive memory grants.
- You’re getting a different join type due to the adaptive join threshold changing.
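Several of the causes above (parameter sniffing, statistics changes, adaptive grants and joins) leave the same fingerprint: the query picked up a different plan. If you're on SQL Server 2016 or newer with Query Store enabled, you can hunt for those directly. A rough sketch using the Query Store catalog views — adjust the filtering to your own retention window:

```sql
-- Queries that have used more than one plan: candidates for
-- parameter sniffing or stats-change regressions.
SELECT q.query_id,
       qt.query_sql_text,
       COUNT(DISTINCT p.plan_id) AS plan_count,
       MAX(rs.last_execution_time) AS last_run
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
  ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
  ON p.plan_id = rs.plan_id
GROUP BY q.query_id, qt.query_sql_text
HAVING COUNT(DISTINCT p.plan_id) > 1
ORDER BY plan_count DESC;
```

From there, comparing the plans for a given query_id usually tells you which cause on the list you're dealing with.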
Someone changed MAXDOP or Cost Threshold for Parallelism (CTFP). Just a thought
Kapil – oh that’s a good one too!
#13 definitely resonates. You hear how fast their job was in the past but now seems slow…first thing I look at is history of the job. That’s when I find out people’s perceptions are from a different dimension in space/time.
“Somebody changed the isolation level from the default read committed to repeatable read, and your query is hitting blocking now and taking forever to finish” — this could also be an addition to the above list.
Great article as always.
Pranaw – yep, that’s a good one!
You just didn’t need one more thing on your plate today and this is urgent
You have changed the compatibility level of the DB, or moved the DB to a newer version of SQL Server
There is another (somewhat rare) case that I have seen that is related to #4. I’ve seen certain query performance regressions before where the statistics on a table need to be updated, but they have not yet hit the internal thresholds to trigger automatic stats updates. Because of this, the query plan that was previously optimized to only process a few records now has to process a few thousand records, and slowness ensues. It’s not really a parameter sensitivity issue, because the query is always run with the same parameter values, and all database/server configurations are the same.
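One way to see whether that's happening — rows piling up without the auto-update threshold being hit — is sys.dm_db_stats_properties. A sketch, assuming a hypothetical table named dbo.YourTable:

```sql
-- How many rows have changed since each statistic was last updated?
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.YourTable')
ORDER BY sp.modification_counter DESC;
```

A large modification_counter relative to rows, with an old last_updated date, suggests a manual UPDATE STATISTICS might fix the regression.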
Brilliant. Your list also reminds us that features keep changing, and it often depends on which version of SQL Server you’re running to know where to troubleshoot. Fortunately, new features also bring new tools to the rescue, like Query Store. I think this list is a great guide to keep ourselves aware that not all performance issues start with looking at the query plan, although that’s usually a good place to start.
Did someone run the Database Engine Tuning Advisor and deploy every single index suggested, then shrink the database directly after for good measure?
The perennial “the table/database has grown and the developers didn’t do volume testing”
11a: your table scan is getting blocked because OLTP
Did someone store a value in XML then CROSS APPLY in a subquery to get the individual values into the recordset?
The query that is “running slowly” is actually suspended because someone left a transaction open.
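To spot that situation, you can look for sessions that hold an open transaction but aren’t currently running anything. A minimal sketch against the DMVs (sp_WhoIsActive gives a friendlier view if you have it installed):

```sql
-- Sessions with an open transaction but no active request:
-- likely someone who began a transaction and walked away.
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.last_request_end_time,
       t.transaction_id
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_tran_session_transactions AS t
  ON s.session_id = t.session_id
WHERE NOT EXISTS (SELECT 1
                  FROM sys.dm_exec_requests AS r
                  WHERE r.session_id = s.session_id);
```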
Great Post! Thanks!
I’d have thought one of the most obvious answers would be: “Because yesterday the pages required to fulfil your query were already sitting in RAM, and today they weren’t”.
Closely followed by: “Because today you needed the query to run fast, and yesterday you didn’t”. OR: “Because today you ran the query in front of your boss, and yesterday you didn’t”.
The index defrag maintenance job perhaps didn’t run and the indexes are now heavily fragmented. Whatsay ??
Related to #5 – check to see if your VM moved to hardware with a different spec (example – vMotion to a system with slower CPU spec).
Data growth in tables without a clustered index that become highly fragmented. The tables can consume a large amount of drive space (due to low page utilization), and queries run increasingly slowly as space and fragmentation both increase. Adding the missing clustered index can save huge amounts of space and fix the related performance issue. It’s good to have queries to review tables and indexes for fragmentation as well as page utilization.
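For the heap scenario above, sys.dm_db_index_physical_stats can surface both the low page density and the forwarded records that make those scans crawl. A sketch against the current database (SAMPLED or DETAILED mode is required for these columns, and it can be expensive on large databases):

```sql
-- Heaps (index_id = 0) with poor page density or forwarded records.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.avg_page_space_used_in_percent,
       ips.forwarded_record_count,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
WHERE ips.index_id = 0
ORDER BY ips.forwarded_record_count DESC;
```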
Good article as always – thanks Brent!
Did someone just switch the database compatibility level to the highest available with the expectation that performance will always get better no matter what?