Sure you can – parameter sniffing is everywhere. Anytime that you have one piece of code that needs to handle a varying number of rows, you’re probably gonna have to worry about parameter sniffing.
I’ll start with any Stack Overflow database and run a couple of non-parameterized update statements. I’m using literals here, not parameters:
UPDATE dbo.Users
  SET LastAccessDate = GETDATE()
  WHERE DisplayName = N'Brent Ozar'; /* There's only one me */

UPDATE dbo.Users
  SET LastAccessDate = GETDATE()
  WHERE DisplayName = N'Alex'; /* There are a lot of Alexes */
These two statements have two very different actual execution plans. The first plan, updating just Brent, is very simple. Reading from right to left, we scan the clustered index to find all the Brents, and then we update the clustered index:
Except…the simplicity is a vicious lie. We’re updating LastAccessDate, and before running this update, I’d already created half a dozen indexes:
CREATE INDEX Age ON dbo.Users(Age) INCLUDE (LastAccessDate);
CREATE INDEX DownVotes ON dbo.Users(DownVotes) INCLUDE (LastAccessDate);
CREATE INDEX Reputation ON dbo.Users(Reputation) INCLUDE (LastAccessDate);
CREATE INDEX UpVotes ON dbo.Users(UpVotes) INCLUDE (LastAccessDate);
CREATE INDEX Views ON dbo.Users(Views) INCLUDE (LastAccessDate);
CREATE INDEX AccountId ON dbo.Users(AccountId) INCLUDE (LastAccessDate);
Every one of ’em includes LastAccessDate. SQL Server doesn’t have the concept of asynchronous indexes, so all of the indexes have to be updated whenever we change their contents. Hover your mouse over the clustered index operator to see the truth: SQL Server’s updating those, too.
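If you want to double-check which indexes are dragging LastAccessDate around, here's a quick sketch against the system catalog views (not part of the original demo scripts):

/* Quick sketch: list every index on dbo.Users that carries LastAccessDate,
   either as a key column or as an INCLUDE. */
SELECT  i.name AS index_name,
        ic.is_included_column
FROM    sys.indexes AS i
JOIN    sys.index_columns AS ic
    ON  ic.object_id = i.object_id
    AND ic.index_id  = i.index_id
JOIN    sys.columns AS c
    ON  c.object_id = ic.object_id
    AND c.column_id = ic.column_id
WHERE   i.object_id = OBJECT_ID('dbo.Users')
    AND c.name = 'LastAccessDate';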
The second plan is different because there are a lot of Alexes at Stack Overflow – Alex is one of the most common names in the database. Read the second plan right to left, and the first thing SQL Server does is find & update all of the Alexes in the clustered index:
But since we’re updating a lot of rows, SQL Server changes the way it approaches updating all of these nonclustered indexes. After the clustered index operator, keep reading…
For every index, SQL Server sorts all of the Alexes into that index's key order so it can find 'em faster in the index, and then updates the index. It also handles the indexes one at a time, serially, in order, for Halloween protection.
These are called narrow and wide plans.

Brent’s plan, which does all of the updates in a single Clustered Index Update operator, is called a narrow plan.
Alex’s plan, which breaks out the work for each index and does them one at a time, is called a wide plan.
No, that doesn’t make any sense to me either. They should be called short and tall plans, because frankly, Alex’s plan is the wide one. I don’t make the rules, though, Paul White does. Take it up with the Kiwi. Even when his rules are wrong, they’re still right. Or maybe they just feel that way.
Put this in a stored proc, and you’ve got parameter sniffing.
Here’s our stored procedure to handle updates:
CREATE OR ALTER PROC dbo.usp_TouchUser @DisplayName NVARCHAR(40) AS
UPDATE dbo.Users
  SET LastAccessDate = GETDATE()
  WHERE DisplayName = @DisplayName;
GO
And now, depending on which parameter runs first – Brent Ozar or Alex – SQL Server sniffs that first value, builds a plan for it, caches it, and reuses it for everyone who calls the proc afterwards. That means we get a different execution plan depending on who got there first.
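Here's a rough way to test both orders yourself (a sketch for a dev box only – don't go freeing the plan cache in production):

SET STATISTICS IO ON;

DBCC FREEPROCCACHE;  /* dev/test only: start with an empty plan cache */
EXEC dbo.usp_TouchUser @DisplayName = N'Brent Ozar'; /* compiles & caches the narrow plan */
EXEC dbo.usp_TouchUser @DisplayName = N'Alex';       /* reuses Brent's narrow plan */

DBCC FREEPROCCACHE;
EXEC dbo.usp_TouchUser @DisplayName = N'Alex';       /* compiles & caches the wide plan */
EXEC dbo.usp_TouchUser @DisplayName = N'Brent Ozar'; /* reuses Alex's wide plan */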
When Brent runs first, we get the narrow plan, so:
- Brent does 143,688 logical reads and has no worktable
- Alex does 352,260 logical reads, has no worktable, gets a 72KB memory grant, and the clustered index scan is single-threaded
And when Alex runs first, we get the wide plan, so:
- Alex does 553,530 logical reads – more than 3 times the number of pages in the entire table! – plus another 41,339 on a worktable, gets a 5MB memory grant, and the clustered index scan goes parallel (although the rest of the plan is serial because that’s how SQL Server does it)
- Brent does 143,730 logical reads, plus 17 on the worktable
The differences in this case aren’t all that bad, but keep in mind that I’m dealing with just a 1GB table – the smallest one in the Stack Overflow database. The larger your data becomes, the more you have to watch out for this kind of thing.
That’s why I teach my Fundamentals of Parameter Sniffing and Mastering Parameter Sniffing classes. Folks with a Live Class Season Pass can start watching the Instant Replays whenever they want, or drop in live. I’m teaching a bunch of classes over the next few weeks:
- Feb 1: Fundamentals of Index Tuning – iCal
- Feb 2: Fundamentals of Query Tuning – iCal
- Feb 3: Fundamentals of Parameter Sniffing – iCal
- Feb 4: How I Use the First Responder Kit – iCal
- Feb 5: Fundamentals of Columnstore – iCal
- Feb 8: Fundamentals of TempDB – iCal
- Feb 9-11: Mastering Server Tuning – iCal
- Feb 12-14: Mastering Index Tuning – iCal
- Feb 16-18: Mastering Parameter Sniffing – iCal
See you in class!
8 Comments
Awesome explanation, thanks Brent!
Is there something I’m missing here, or… usually parameter sniffing is about getting worse performance when you use a plan that wasn’t built for that particular parameter. But in this case, both Brent and Alex seem to do better if the proc got the Brent parameter first. This just looks like SQL Server did a poor job of forming a plan for Alex in the first place.
Mqx – you’re on the right track! So now, what might happen if someone rebuilt an index, Alex ran first, and SQL Server sniffed his parameters? Would folks think that suddenly performance had gotten worse out of nowhere? That’s what parameter sniffing is.
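Sketch of that scenario, if you want to see it happen on a test box:

/* Sketch: a rebuild invalidates the cached plan for anything touching dbo.Users... */
ALTER INDEX ALL ON dbo.Users REBUILD;

/* ...so whoever calls the proc next gets their parameter sniffed: */
EXEC dbo.usp_TouchUser @DisplayName = N'Alex';       /* compiles & caches the wide plan */
EXEC dbo.usp_TouchUser @DisplayName = N'Brent Ozar'; /* stuck reusing the wide plan */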
Yeah, definitely 😛 it just felt like a different approach to things, where the “optimized” version was actually the bad one in any situation… in my experience it’s at least optimized for its own parameter in that case! Informative 🙂
Heh heh heh – yeah, in Mastering Query Tuning, I cover cases like this. It happens surprisingly often at scale!
Interesting… very interesting…
**attempting to spiritually project myself from the EU**
So what if you were to implement a “wrapper” sproc, “usp_TouchUser_counter”, which determines how many rows will be updated? Based on those results, it then calls one of a number of identical sprocs such as “usp_TouchUser_100_or_fewer”, “usp_TouchUser_101_to_1000”, “usp_TouchUser_1001_to_10000”, etc.
It’s convoluted, but each sproc would only have to deal with row counts in a relatively narrow range, which should minimize the effect of parameter sniffing. An individual sproc wouldn’t have to deal with counts several orders of magnitude higher or lower than its “first call” count.
I would think having 5 times the number of stored procedures in the DB would be less detrimental to performance than using “option recompile” in situations like these.
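For what it’s worth, a bare-bones sketch of that wrapper idea (hypothetical names from the comment; each callee would just be an identical copy of usp_TouchUser):

/* Rough sketch of the wrapper idea - hypothetical procedure names. */
CREATE OR ALTER PROC dbo.usp_TouchUser_counter @DisplayName NVARCHAR(40) AS
BEGIN
    DECLARE @RowCount BIGINT;

    /* Costs an extra seek/scan of dbo.Users just to size the work. */
    SELECT @RowCount = COUNT_BIG(*)
    FROM   dbo.Users
    WHERE  DisplayName = @DisplayName;

    IF @RowCount <= 100
        EXEC dbo.usp_TouchUser_100_or_fewer  @DisplayName = @DisplayName;
    ELSE IF @RowCount <= 1000
        EXEC dbo.usp_TouchUser_101_to_1000   @DisplayName = @DisplayName;
    ELSE
        EXEC dbo.usp_TouchUser_1001_to_10000 @DisplayName = @DisplayName;
END
GO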
Nick – yes, like I mention in the post, all the different solutions are way beyond what I can cover here, and I cover those in the classes. See you in class!