At #SQLbits, Simon Sabin (Twitter) talked about car crash queries: queries that suddenly perform badly out of nowhere, yet run quickly when you copy/paste them into SQL Server Management Studio. The culprit is often parameter sniffing: the engine builds a plan based on the parameter values it sees on the first execution, but when later executions pass different values in the WHERE clause (or joins, or anywhere else), that cached plan may no longer be a good fit.
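Here's a minimal sketch of how that plays out. The table and procedure names are made up for illustration; the point is that the plan gets compiled for the first value sniffed, then reused:

```sql
-- Hypothetical proc: the plan is compiled for the first parameter
-- value SQL Server sees, then reused for every later call.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END
GO

-- First call: suppose customer 1 has a handful of orders, so
-- SQL Server builds and caches an index-seek plan.
EXEC dbo.GetOrdersByCustomer @CustomerID = 1;

-- Later call: suppose customer 2 has millions of orders. The cached
-- seek-plus-lookup plan is reused anyway, and performs terribly.
EXEC dbo.GetOrdersByCustomer @CustomerID = 2;
```

Paste that second SELECT into Management Studio with the value hard-coded, and you get a fresh plan built for that value, which is why it runs fast on the DBA's desk.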
Simon likened it to getting walking directions. If you ask for directions from Big Ben to the Millennium Wheel, your best option is walking. But if you decide to go from Big Ben to the Eiffel Tower, walking is no longer the best option, because the distance you're trying to cover is suddenly very different. SQL Server's statistics about the quantity and distribution of the data in your database help it determine how to build the best directions.
Scaling up to higher numbers of users also changes how queries perform. If one query returns in 250 milliseconds, that doesn't mean 100 of those queries running simultaneously will each finish in 250 milliseconds. He likened it to a grocery checkout line: the more people you put in line, the longer the folks at the end of the line are going to have to wait.
A great example of parameter sniffing trouble is when the same query is run both inside a stored procedure and standalone. It may perform poorly inside the stored proc, but when the DBA tries to troubleshoot it standalone, it performs fine. The two versions are using different execution plans. Simon demoed how to compare the two plans, and how to find out why the bad plan isn't optimal.
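One way to pull both cached plans out for comparison is to query the plan cache DMVs. This is a sketch, not Simon's exact demo; the `'%Orders%'` filter is a placeholder for whatever text identifies your query:

```sql
-- Find all cached plans whose statement text matches, so the proc's
-- plan and the standalone plan can be compared side by side.
SELECT
    st.text,
    qs.execution_count,
    qs.total_logical_reads,
    qp.query_plan          -- click the XML in SSMS to see the graphical plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%Orders%';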
One way to get better plans is to get better statistics, and you have to do better than SQL Server’s default statistics. SQL creates its own statistics for new tables, but they don’t work terribly well. If you’re using table variables, those don’t get statistics at all, but on the plus side, they won’t cause recompiles when the data distributions change. Temp tables, on the other hand, get statistics, but they incur recompiles when SQL Server sees that the temp table contents have changed dramatically and need statistics updates.
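The table variable versus temp table difference is easy to see for yourself. A rough sketch (loading from `sys.objects` just to get some rows):

```sql
-- Table variable: no statistics, so the optimizer estimates roughly
-- one row no matter how many rows are actually in it.
DECLARE @tv TABLE (ID INT PRIMARY KEY);
INSERT INTO @tv (ID)
SELECT object_id FROM sys.objects;

SELECT * FROM @tv;   -- check the actual plan: estimated rows ~1

-- Temp table: statistics get created, so estimates track reality --
-- but large changes in its contents will trigger recompiles.
CREATE TABLE #tt (ID INT PRIMARY KEY);
INSERT INTO #tt (ID)
SELECT object_id FROM sys.objects;

SELECT * FROM #tt;   -- estimated rows now reflect the actual count
DROP TABLE #tt;
```

Compare the estimated versus actual row counts in the two plans to see the difference.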
He demoed how SQL Server will create different query plans based on the exact text of the query. If one query includes comments and another doesn’t, they can end up with different query plans and different performance.
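You can reproduce this yourself: the plan cache keys on the exact query text, so a comment is enough to get a separate cache entry (and potentially a different plan, compiled for different sniffed values). Table names here are illustrative:

```sql
-- These two statements differ only by a trailing comment...
SELECT OrderID FROM dbo.Orders WHERE CustomerID = 1;
SELECT OrderID FROM dbo.Orders WHERE CustomerID = 1; -- v2

-- ...yet each gets its own entry in the plan cache.
SELECT st.text, cp.usecounts
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%FROM dbo.Orders WHERE CustomerID%';
```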
In order to catch these problems, you have to monitor and baseline your application, then watch for changes in reads, writes, and CPU time. You don't want to rely on duration alone, because by the time duration degrades, your users are already feeling the pain.
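A simple starting point for that baseline (one assumption: you snapshot this periodically and diff the snapshots, since `sys.dm_exec_query_stats` only covers plans currently in cache):

```sql
-- Snapshot reads, writes, and CPU per cached query; save the results
-- and compare against a later snapshot to spot plans going bad.
SELECT TOP (50)
    st.text,
    qs.execution_count,
    qs.total_worker_time    AS total_cpu_us,
    qs.total_logical_reads  AS total_reads,
    qs.total_logical_writes AS total_writes
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```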