Can Forced Parameterization Go Wrong?

App Like That

If you’ve got the kind of application that sends bare-assed strings to SQL Server, you may end up with a weird choice.

Brent will be sitting on one shoulder telling you to use Forced Parameterization.

I’ll be on the other shoulder asking if you’ve really thought this whole thing through while you ignore me.

It’s cool. That’s what this post is for.

See, one potential downside of Forced Parameterization is, well, Forced Parameter Sniffing.

Live Nude Strings

Let’s take a closer look with some demos.

This’ll get us started by creating some indexes and looping over the same query, putting in different Vote Types to search on.
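A minimal sketch of that setup, assuming the Stack Overflow sample database — the index definition, loop bounds, and query shape here are illustrative, not the exact demo code:

```sql
/* Illustrative setup, assuming the Stack Overflow sample database. */
CREATE INDEX ix_votes_votetypeid
    ON dbo.Votes (VoteTypeId)
    INCLUDE (PostId, UserId, CreationDate);
GO

DECLARE @i int = 1;
WHILE @i <= 15
BEGIN
    /* The literal is concatenated in on purpose, to mimic an app
       that sends unparameterized strings to SQL Server. */
    DECLARE @sql nvarchar(MAX) =
        N'SELECT COUNT_BIG(*)
          FROM dbo.Votes AS v
          JOIN dbo.Posts AS p ON p.Id = v.PostId
          WHERE v.VoteTypeId = ' + CAST(@i AS nvarchar(11)) + N';';

    EXEC sys.sp_executesql @sql;
    SET @i += 1;
END;
```

Because each iteration ships a different literal, each one compiles and caches as a distinct statement.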

That finishes pretty quickly, and we can dive right into the plan cache.

We end up with a cached plan per Vote Type. Costs are all over the place.
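One way to see those per-literal plans — whether with sp_BlitzCache or by querying the DMVs directly — is a sketch like this (the LIKE filter is just a convenience for finding the demo queries):

```sql
/* Find the cached demo plans; one row (and one plan) per VoteTypeId literal. */
SELECT TOP (20)
       st.text,
       qs.execution_count,
       qs.total_worker_time,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE N'%VoteTypeId%'
ORDER BY qs.total_worker_time DESC;
```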

One Off.

Some of the plans end up being the same, but here’s a comparison between the highest and lowest cost plans.


Clearly some different choices were made.

We even have different sets of warnings…

Told you eh?

Rather amusingly, even the estimated impact of the missing indexes swings all over the place from plan to plan.

It’s amusing because it’s always the same exact index definition.


The metrics are all different…

Hard To Profile

Some get memory, and one spills…

Amy Granted

The point is that each query got an “appropriate” plan. Not perfect. We have some tuning to do, I think.


I know what you’re thinking at this point: This doesn’t seem so bad. Plan cache pollution? Eh…

It gets cleared every night when you rebuild every index 500 times anyway. Who cares?

And you’re sort of right. This is a much easier problem to have than parameter sniffing.

Let’s look at how Forced Parameterization changes things.

Take Two

I’m going to do everything exactly the same, except I’m going to run this command to turn on Forced Parameterization.
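The command in question is the database-level parameterization setting (the database name here is a placeholder for wherever the demo tables live):

```sql
/* Assumed database name. */
ALTER DATABASE StackOverflow SET PARAMETERIZATION FORCED;

/* To put things back afterward:
   ALTER DATABASE StackOverflow SET PARAMETERIZATION SIMPLE; */
```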

Then I’m going to run the loop, and see what BlitzCache tells me.

The first thing I notice is that this is taking way longer to finish. The first loop finishes in around 20 seconds.

This one has been going on for a while. It ended up taking over a minute.

Now there’s only one plan in the cache.

You did nothing.

That’s the big plan from before. We can change things up a bit and start the loop with a different Vote Type — like one that has almost no usage.

I’ll clear the cache and start it with 15 — that’ll start us with the little plan from before.
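For the cache-clearing step, something like this works — though note that DBCC FREEPROCCACHE empties the entire instance's plan cache, so it's a demo-box command, not a production one:

```sql
/* Instance-wide; fine on a test box, rude on a busy server. */
DBCC FREEPROCCACHE;

/* A gentler, database-scoped alternative on SQL Server 2016+:
   ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE; */
```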

This one runs even longer than the last loop — it takes about a minute and a half.

Most of that seems to be due to the fact that we spilled a lot more!


We also had some Nested Loops Joins run a pretty good chunk of times for the larger plans.

This doesn’t seem like a winning scenario.

There’s No Such Thing As A Free Feature

With databases in general, we often end up trading one problem for another.

Wanna fix plan cache pollution? Hope you like fixing parameter sniffing!

This is when people start doing all sorts of things that they think fix parameter sniffing, but that really just disable it.

Stuff like recompile hints, optimize for unknown, local variables…
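Sketches of those three patterns, against a hypothetical parameter — each one trades a sniffed plan for either constant recompilation or a generic, density-based estimate:

```sql
/* All three assume a parameter @VoteTypeId int; use one at a time. */

/* 1. Recompile hint: a fresh plan every execution, nothing cached or reused. */
SELECT COUNT_BIG(*) FROM dbo.Votes
WHERE VoteTypeId = @VoteTypeId
OPTION (RECOMPILE);

/* 2. Optimize for unknown: the optimizer uses average density
      instead of the sniffed value. */
SELECT COUNT_BIG(*) FROM dbo.Votes
WHERE VoteTypeId = @VoteTypeId
OPTION (OPTIMIZE FOR UNKNOWN);

/* 3. Local variable: the same density-based estimate,
      just less obvious about it. */
DECLARE @vt int = @VoteTypeId;
SELECT COUNT_BIG(*) FROM dbo.Votes
WHERE VoteTypeId = @vt;
```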

Could I tune queries and indexes to make our code less sensitive to parameter sniffing?

Of course I could, but if you don’t have that person, and no one wants to be that person, maybe spending a little more on RAM to hold your dirty, filthy plan cache ain’t such a bad trade.

Thanks for reading!


7 Comments

  • How about having Forced Parameterization on, but still getting the plan cache pollution? Because we get the worst of both worlds where I’m at!

  • That’s actually one of the things we have happening. We basically have it all happening. Can’t say coming to work is boring 😀

  • In the battle of “Who to listen to” between Erik and Brent, the answer is always “What would Richie do?” Would he give Brent access to change his servers to Forced Parameterization?

  • André Cardoso
    October 10, 2018 3:53 am

    Wouldn’t the optimize for ad hoc workloads alleviate the memory pressure?

    • Andre – go ahead and read the first link in the post 😉

    • André – with really small volumes (like ad hoc workloads, where users run a few dozen queries per minute max), sure. But as you approach larger volumes, where you’re talking about hundreds or thousands of queries per second coming in, optimize for ad hoc doesn’t really help the compilations problem.

