Stabilizing Execution Plans: Plan Guides and NORECOMPUTE


Sometimes you end up in a good plan / bad plan situation: an important query runs just fine most of the time. The query is parameterized, a good execution plan gets re-used, everything is cool.

But sometimes a “bad plan” gets compiled and starts to be reused. This is “bad” parameter sniffing. “Bad plans” come in a few varieties: maybe the plan is slow for some parameter combinations and causes page timeouts. Maybe the “bad” plan has a very large workspace memory grant that just isn’t needed, and that causes problems when lots of different queries are using it — then you get all sorts of nasty RESOURCE_SEMAPHORE waits and everything gets slow.

Whatever the situation is, sometimes you want to stabilize a particular execution plan that’s “good” for all the different parameters that the query can run with.

Option 1: Change the code

The very best option is changing the code so you don’t have to resort to shenanigans behind the scenes. You can rewrite the T-SQL, change indexes, or use hints to get a specific plan. But sometimes this is difficult: maybe it’s vendor code you can’t change, or maybe there’s a long code release process and it will take a very long time to get the code changed.

If you can tune the code, absolutely do it. If you can’t, at least get the request to fix the code noted by the vendor or software development team. Don’t skip it altogether, because the options I describe below aren’t all that fantastic.

Option 2: Plan guide that thing

Plan guides are like duct tape: they’re something you want to have on hand for emergency quick fixes, but not something you want to rely on long term as a building material. They’re also not suited to every kind of fix.

Plan guides let you do a few things:

  • Apply query hints like “OPTIMIZE FOR <value>”, “OPTIMIZE FOR UNKNOWN”, “RECOMPILE”, and “MAXDOP” to a query
  • Selectively turn on trace flags for a query, such as TF 4199 (query optimizer hotfixes), TF 9481 (older cardinality estimator if running on SQL Server 2014), or TF 2312 (newer cardinality estimator if running on SQL Server 2014)
  • Add selected table hints, like FORCESEEK and specific index hints. (You cannot add NOLOCK or change isolation levels, so don’t get too excited.)
  • “Template” plan guides let you force parameterization for a specific query, or enforce simple parameterization for a specific query if the database is using forced parameterization
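
For instance, here’s a minimal sketch of a plan guide that attaches OPTIMIZE FOR UNKNOWN to a parameterized statement (the table, statement text, and parameter are hypothetical). This form matches statements submitted through sp_executesql, and the @stmt text has to match the application’s query text exactly, whitespace and all:

    EXEC sp_create_plan_guide
        @name = N'PG_OptimizeForUnknown_Example',
        @stmt = N'SELECT a, b, c FROM dbo.XYZ WHERE a = @a;',
        @type = N'SQL',
        @module_or_batch = NULL,
        @params = N'@a int',
        @hints = N'OPTION (OPTIMIZE FOR UNKNOWN)';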

But duct tape isn’t perfect. Here are the biggest gotchas that I’ve found with plan guides:

  • I’ve found that trying to use an index hint in a plan guide can cause queries to silently fail. That’s awkward.
  • Plan guides don’t work with all types of queries. I haven’t been able to get them to work with temporary tables or table variables referenced in the query, for example.
  • Making sure that a plan guide is working and is picking up your query is tricky. Plan guides are very picky about matching query text exactly.
  • Plan guides can make code releases fail. If you’ve got a plan guide that references a stored procedure and something goes to alter it, SQL Server’s going to stop the ALTER with an error.

My biggest advice for plan guides: test them out on a non-production system first. Verify that the plan guide is working and that the queries are doing exactly what you want before deploying to production. Treat the plan guide like real code as much as possible: put it into all environments, check it into source control, use change control, and document it.
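
Being able to flip a guide off and on without dropping it helps with that testing. A quick sketch, assuming a plan guide named like the example above:

    -- Disable the plan guide, retest the query, then re-enable it
    EXEC sp_control_plan_guide N'DISABLE', N'PG_OptimizeForUnknown_Example';
    EXEC sp_control_plan_guide N'ENABLE', N'PG_OptimizeForUnknown_Example';

    -- See all plan guides in the current database and their status
    SELECT name, is_disabled, scope_type_desc FROM sys.plan_guides;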

Option 3: Manually Force the “Right” Parameter Sniffing

If you can’t change the code and a plan guide doesn’t work, you can get a little creative. (By “get a little creative”, I mean that everything I’m about to describe can go horribly wrong.)

First, find the bad plan in the cache. In SQL Server 2008 and higher, we get a nifty query_plan_hash for query plans, so you can identify the “bad” plan hash that you don’t want to keep in cache. When it’s in cache, you then:

  1. Remove it from the cache. You can do this using DBCC FREEPROCCACHE and the plan_handle value (you can get this by running sp_BlitzCache @results='expert'). Or, if it’s a stored procedure, you can run sp_recompile against the procedure to blow away the plan. (Thanks @DBArgenis for this tip!)
  2. Run a version of the query that puts the plan you want into the cache. Usually this means running a version of the query with the parameters that give you the “right” plan. (There’s a sketch of both steps right after this list.)
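
Here’s a minimal sketch of both steps. The plan hash, plan handle, procedure name, and parameter are all hypothetical placeholders:

    -- Step 1: find the handle for the bad plan by its plan hash
    SELECT qs.plan_handle, qs.query_plan_hash, qs.execution_count, st.text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE qs.query_plan_hash = 0x34B2BE0089E7E8BD;

    -- Evict just that one plan from the cache
    DBCC FREEPROCCACHE (0x06000700E8A22D05B8E0E90E000000000000000000000000);

    -- Step 2: immediately re-run the query with "known good" parameters,
    -- so the plan that gets compiled and cached is the one you want reused
    EXEC dbo.GetOrdersByCustomer @CustomerID = 42;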

You’ve got the burden of doing some testing. Be careful with how you remove plans from cache: sp_recompile requires a schema modification (Sch-M, exclusive) lock, so I don’t recommend running it against tables.

You need to generate the plan in a way that’s going to be re-used, and you need to make sure the plan you’re putting in cache really is good for re-use!

Stabilizing execution plans with NORECOMPUTE


If this doesn’t sound quite crazy enough for you, you can go a little farther and try to increase the chances of your “good” plan staying in cache longer. One tool you can use for this is NORECOMPUTE.

When you update statistics on a table with NORECOMPUTE, you tell SQL Server not to automatically update statistics on that table as data changes. Normally, that automatic update kicks in when approximately 20% of the rows in the table have been modified (the real algorithm is more complicated, but 20% is pretty easy to remember), and freshly updated statistics cause the optimizer to consider a new execution plan for your query. So NORECOMPUTE reduces the chance of the “good” plan being bumped out.
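
A minimal sketch with a hypothetical table name: this updates the statistics right now with a full scan, and turns off automatic updates going forward:

    UPDATE STATISTICS dbo.Sales WITH FULLSCAN, NORECOMPUTE;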

If you use NORECOMPUTE, be aware that it can have a negative effect on some queries, causing them to get terrible row estimates as data changes. You probably want to manually update statistics on the table at least once a day if its data changes. You can do this using a built-in command like sp_updatestats, custom code you write yourself, or Ola Hallengren’s free index maintenance scripts (see Examples B & D for two options). Just please don’t do it with a maintenance plan.
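
One gotcha worth sketching out: running UPDATE STATISTICS without repeating NORECOMPUTE switches automatic updates back on. Reusing the hypothetical table from above, the daily refresh could look like either of these:

    -- Option A: targeted refresh that keeps auto-update turned off
    UPDATE STATISTICS dbo.Sales WITH FULLSCAN, NORECOMPUTE;

    -- Option B: database-wide refresh (sp_updatestats is documented to
    -- preserve existing NORECOMPUTE settings)
    EXEC sp_updatestats;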

Document the heck out of this. It’s easy for people to have no idea it’s in place, or find it and change it without knowing what it impacts. If plan guides are duct tape, this is more like Scotch Tape.

These Are Emergency Fixes – Don’t Start Your Performance Tuning Process With These Techniques

A lot of times you don’t need any of this crazy stuff. Remember: many times you can fix these issues with good indexing or simple code changes. Safety first!

Learn More in Our Training

Our training class explains how to get an execution plan, how to read the operators, and how to solve common query problems.


14 Comments

  • What crazy flavor of duct tape is declaring the parameters as variables then using the variables?
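
    For readers who haven’t seen that trick: copying parameters into local variables hides the values from the optimizer, so it builds a plan from average density instead of sniffing. A hypothetical sketch:

        CREATE PROCEDURE dbo.GetOrders @CustomerID int
        AS
        BEGIN
            -- The optimizer can't sniff a local variable's value at
            -- compile time, so it optimizes for an "average" estimate
            DECLARE @LocalCustomerID int = @CustomerID;

            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE CustomerID = @LocalCustomerID;
        END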

  • Nic Neufeld
    July 14, 2014 7:59 am

    We have another duct-tape fix that we have used for a couple issues like this, on a vendor database where we can’t really touch code. The stored procs are simple selects, so we are free to run them all day long, but occasionally it will get a bad plan and stick with it, bringing the application shuddering to its knees. Very frequently executed so OPTION (RECOMPILE) is not ideal. Well, NONE of this is ideal, I should say, but what we do is, our index and stats maintenance jobs finish up early in the morning, and then we tack on a final step to the job, to first issue a sp_recompile against the proc, and then immediately execute it with “known good” parameters. Downside is, data changes, and someday the “known good” params could create an inefficient plan. Which is why we have to continuously monitor and refine.
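
    A sketch of that final job step (the procedure name and parameters here are hypothetical):

        -- Flush the proc's cached plan, then immediately prime the cache
        -- with a plan compiled from "known good" parameters
        EXEC sp_recompile N'dbo.VendorSearch';
        EXEC dbo.VendorSearch @Region = N'US', @Status = N'Active';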

  • Recently I tried creating a plan guide for one of our queries from a Peoplesoft application. The application uses API cursors and I have found that they are not parameterizing the SQL queries that they run. The result is cache bloat issues and I am trying to avoid those.

    I want to create plan guides to be able to force parameterization at the individual query level, for just one query.

    The problem is that when I run EXEC sp_get_query_template it does not parameterize all the variable values in the query. The issue seems to be with a LIKE operator being used on one of the fields in WHERE clause.

    here’s an example:

    DECLARE @stmt nvarchar(max);
    DECLARE @params nvarchar(max);
    EXEC sp_get_query_template
        N'select a, b, c from XYZ where a = 123 and c like ''%abc%''',
        @stmt OUTPUT, @params OUTPUT;
    SELECT @params, @stmt;

    @params gives only one parameter, i.e. @0 numeric(38,0)

    The moment I change the LIKE operator to “=”, I get two parameters, i.e. @0 numeric(38,0) and @1 varchar(8000).

    Is there anything specific to be done when using LIKE operators?

    • This is a great question. Not only is the question terrific, you also gave sample code! Always appreciated.

      So, some bad news. You’re hitting a limit not just of plan guides, but of forced parameterization itself. In other words, flipping the database property and setting parameterization to forced won’t solve your problem, either. This is documented in the second list of bullets here: http://technet.microsoft.com/en-us/library/ms175037%28v=sql.105%29.aspx

      I’m kinda scratching my head trying to think of a workaround. A plan guide to freeze a specific plan won’t help, because it has to match the text (and the text will vary, because it’s not parameterized). I haven’t thought of a workaround yet, but if I do I’ll come back and update– and maybe another reader will have an idea, too!

    • Chuck Lathrope
      January 9, 2015 7:02 pm

      I would look into turning on the “optimize for ad hoc workloads” setting on your server so the plan cache bloat isn’t so high. Only plans that get used more than once get stored.
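
      For reference, a sketch of enabling that setting:

          -- It's an advanced option, so show those first
          EXEC sp_configure 'show advanced options', 1;
          RECONFIGURE;

          -- First execution caches only a small stub; the full plan is
          -- cached when the same query compiles a second time
          EXEC sp_configure 'optimize for ad hoc workloads', 1;
          RECONFIGURE;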

      • Kendra Little
        January 9, 2015 7:04 pm

        Yeah. You’d still get the CPU burn from the compile action, but might cut down on the clutter from single use plans. I’m not always a huge fan of that setting, but you make a good point.

  • Thanks Kendra for that link. I appreciate you looking for alternate solutions.

  • James Anderson
    November 4, 2015 7:13 am

    After reading this I checked a plan guide I have with the fn_validate_plan_guide function and found it was returning an error message for my plan. The error was that my temp table was an invalid object. You mention this issue with plan guides and temp tables in your article. The weird thing is that the plan guide is actually working. I have verified this by disabling and re-enabling the plan guide. I only see merge joins in the plan of my query when the plan guide is enabled. Is there a problem with the fn_validate_plan_guide function?
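
    For reference, a validation check like the one James describes can be run across every plan guide in the database at once:

        -- Lists any plan guides that report validation errors
        SELECT pg.name, v.*
        FROM sys.plan_guides AS pg
        CROSS APPLY sys.fn_validate_plan_guide(pg.plan_guide_id) AS v;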

    • Oh, interesting. One way to confirm 100% that it’s using the plan guide is to run the query, get the actual plan, right click on the select icon in the plan and look at the properties. It’ll say if it used the plan guide specifically (name and everything), as long as it’s not a “template” plan guide.

      If that’s the case then it does seem that there’s a bug with fn_validate_plan_guide. Interested to hear more!

  • preguntoncojonero
    January 30, 2017 1:41 am

    How do you detect bad performance for a SQL query in SQL Server 2014 that had good performance in SQL Server 2008 R2? Were there changes in SQL Server 2014 to execution plans?

    Hello,
    This is not a question, but a share of experience… ^^
    I have tables with billions of rows (SQL 2014), and I use partitions and incremental statistics… At one point, SQL sniffs everything and always scans my table, unless I specify OPTION (OPTIMIZE FOR UNKNOWN) and, in most cases, cast the parameters in the WHERE clause so that it looks for partitions… I still haven’t found another way to do it properly without those “hints”.
    The other strange effect I see is that even after an incremental statistics update on the partition I’m about to work with, SQL seems to not care about the numbers in my partition and ends up with a big difference between estimates and reality.
    Have a great day.

