RECOMPILE Hints and Execution Plan Caching

The DBA version of an XKCD classic.

When you identify that parameter sniffing is a problem, you need to test whether implementing ‘recompile’ hints will help plan quality. (Not sure what parameter sniffing is? Learn from this blog post or this 50-minute free video.) You must decide: what hint or command do you use, and where do you put it? What trade-offs are you making when it comes to being able to performance tune your SQL Server in the future?

This post runs through common options you have to nudge (or whack) SQL Server into generating a fresh execution plan. I’ll give some pros and cons for each method and explain what’s useful and what’s worth avoiding.

To keep things relatively simple, I’m just discussing how this applies to stored procedures today–this post doesn’t cover other forms of parameterized (or non-parameterized) queries. If you’ve got big questions in those areas, feel free to suggest them for a future post in the comments.

Disclaimer: Recompile hints can kill your performance by lighting your CPUs on fire when used incorrectly. Handle with care!

Useful Dynamic Management Views

When I talk about impact on the execution plan cache, I’ll refer to two DMVs:

  • sys.dm_exec_query_stats – This DMV is helpful to see the top statements on your SQL Server, regardless of whether they’re part of a procedure or not. Check out a sample query here.
  • sys.dm_exec_procedure_stats – This DMV is only available in SQL Server 2008 and higher. It tracks execution metrics for stored procedures. Check out a sample query in Books Online.

For both of these DMVs, having an execution plan in the cache is linked to being able to see execution metrics: number of total executions, total and average CPU, logical reads, etc. When an execution plan is removed from the cache due to recompilation, memory pressure, restart, or other actions, the related execution metrics are removed as well.
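
Here’s a minimal sketch of the kind of query I mean against sys.dm_exec_query_stats – ordering by total CPU, with the individual statement text carved out of the batch (swap the ORDER BY for whatever metric you’re chasing):

    -- Top 25 statements by total worker time (CPU)
    SELECT TOP 25
        qs.execution_count,
        qs.total_worker_time AS total_cpu_us,
        qs.total_logical_reads,
        qs.plan_generation_num,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;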

Executing a procedure with a runtime recompile hint

One option that I love for quick and easy testing is the ability to call a stored procedure with a recompile hint at execution time. This is great because you don’t have to change any compiled code. Your hint also just applies to what you’re doing at runtime.

To do this, you just use syntax like this:
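
    EXEC dbo.usp_GetCustomerOrders @CustomerId = 42 WITH RECOMPILE;

(The procedure name and parameter here are made up – the WITH RECOMPILE at the end is the part that matters.)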

However, there’s a little bit of a gotcha. If you have nested stored procedures, the recompile hint only applies to code in the outermost procedure. You can still get parameter sniffing on any calls to sub-procedures within the stored procedure itself.

So while I do like this sometimes for testing, if you have any nesting in your stored procedures it may not help you get a truly fresh execution plan.

Using the sp_recompile System Stored Procedure

This sometimes comes in handy if you need to intervene during a production incident when a bad plan is being used repeatedly by incoming sessions. You can force recompilation of plans by running sp_recompile against an object– most common choices are a table or a stored procedure. The command looks like this (a table in this example):
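
    EXEC sp_recompile N'dbo.Orders';

(dbo.Orders is a placeholder name – point it at your own table or procedure.)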

When this is run, related plans are not immediately removed from SQL Server’s execution plan cache. Instead, the magic happens the next time queries referencing the recompiled object run. At that point, existing execution statistics in sys.dm_exec_query_stats will be reset for statements in the plan.

Execution counts in sys.dm_exec_procedure_stats will not necessarily be reset at next use if you mark a table used by a stored procedure for recompilation. But if you mark the stored procedure itself for recompilation, it will be reset at the next use. (Don’t bother trying to memorize this, just read the next paragraph.)

There are big downsides with this one. The command requires high permissions– the user running it needs ‘alter’ permissions on the table. It also requires a high level of lock to complete. On busy systems I’ve seen this command take part in some nasty blocking chains. Be careful with it!

While this can occasionally be useful for troubleshooting and testing, don’t make it part of production code.

RECOMPILE hints in stored procedure headers

When you’ve got procedures that you want to generate fresh plans for, you start looking at how to use hints within the procedure. The option most folks discover first is to use ‘WITH RECOMPILE’ in the header of the stored procedure, like this:
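
    CREATE PROCEDURE dbo.usp_GetCustomerOrders
        @CustomerId INT
    WITH RECOMPILE
    AS
    BEGIN
        SELECT OrderId, OrderDate, TotalDue
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId;
    END
    GO

(A minimal sketch with made-up object names – the WITH RECOMPILE in the header is the part that matters.)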

Using RECOMPILE in the stored procedure header is pretty drastic — the procedure won’t cache an execution plan when it runs. This means:

  • No execution plans in cache to review
  • No execution stats recorded in sys.dm_exec_query_stats
  • No execution stats recorded in sys.dm_exec_procedure_stats

Wow, that’s a bummer. If you need to identify the impact this procedure is having on your server, you have to run some sort of trace or extended events session and harvest and interpret the results. That’s not quick, and running traces can impact performance.

Kind of a big negative. For that reason, I don’t like this choice much at all.

RECOMPILE hints on individual statements

This option is a bit more work, but it has a much better payoff over time. With this option you take the RECOMPILE hint and apply it only to statements in the stored procedure where you’d like to implement the recompile, like this:
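
    CREATE PROCEDURE dbo.usp_GetCustomerOrders
        @CustomerId INT
    AS
    BEGIN
        SELECT OrderId, OrderDate, TotalDue
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId
        OPTION (RECOMPILE); -- fresh plan for this statement only
    END
    GO

(Same made-up procedure as above, with the hint moved out of the header and onto the statement.)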

Even if you need to use the hint on all the statements in the procedure rather than in the header, this still has benefits! By putting the recompile hint at the statement level in the proc, you magically get:

  • Limited execution plans in cache to review (last execution)
  • Limited execution stats recorded in sys.dm_exec_query_stats. You’ll only get stats for the last execution, but the plan_generation_num column will increment, at least giving you insight that something is up.
  • Execution stats recorded in sys.dm_exec_procedure_stats

The fact that you do get some information in these DMVs can be super useful over time. Applying recompile hints only to the statements that need them is also just more responsible– it lowers your risk of burning yourself over time with excess CPU use.

DBCC FREEPROCCACHE – the nuclear option

This isn’t strictly a recompile hint– but it certainly does cause recompilation. SQL Server has a command you can use to tell it, “Start anew with fresh execution plans”. It’s usually used like this:
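
    DBCC FREEPROCCACHE;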

This command makes everything start fresh– for every query:

  • CPU usage may go up as new plans are compiled
  • Execution stats are cleared in sys.dm_exec_query_stats (immediately)
  • Execution stats are cleared in sys.dm_exec_procedure_stats (immediately)

This command isn’t evil– there’s a time and a place in troubleshooting when using it can help you get to the root cause of a performance problem quickly. However, since it impacts the whole cache and can also impact performance, you must be very careful using it. I do NOT recommend using this command in any regular jobs, automated processes, or production code.

As of SQL Server 2008, you can use this command a bit more gently– you can remove a specific plan from the cache using the “plan handle” or “sql handle”– but of course you’ve got to figure out what that handle is and why you want to remove it. This could come in useful in some niche situations, but in practice it doesn’t come up much. You can also clear a Resource Governor pool, but, well, you’d have to be using Resource Governor.
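
Here’s a rough sketch of the targeted version – the LIKE filter and the handle below are placeholders; you’d search for your own query text and paste in the plan_handle the first query returns:

    -- Find the plan handle for the query you want to evict
    SELECT cp.plan_handle, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE N'%usp_GetCustomerOrders%';

    -- Then clear just that plan (placeholder handle shown)
    DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);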

What’s it all mean?

When you’re testing, there’s a place in your toolkit for both ‘EXEC procedure WITH RECOMPILE’ and the sp_recompile procedure. Use them with care and beware of possible repercussions, particularly with sp_recompile.

When you’re implementing hints in code, don’t have RECOMPILE blinders on– you can sometimes use ‘OPTIMIZE FOR’ hints successfully as well (although sometimes you’re optimizing for mediocre). And at times, dynamic SQL can also help you out.

But if you do use RECOMPILE hints, please keep your recompiles at the statement level– and not in the headers of your procedures.

To learn more, check out our Fundamentals of Parameter Sniffing class.


34 Comments

  • This post is a great resource. It is a thorough and well-reasoned treatment of this subject. You only need one such article on the entire web to cover the topic.

    • I agree this covers the topic very well. In the interest of time, and because my servers are rarely CPU bound, I’ve often used the recompile at the procedure level. For me that has always worked just fine. Another great follow-up is – why are the plans getting so out of whack in the first place, and how to refactor those procedures/statements so you don’t need to force a recompile? That’s a pretty interesting topic.

      • “why are the plans getting so out of whack in the first place” –
        I can give you one example that we constantly run into: a table needs to be purged every night and start empty or near empty for the next day. The initial query plan is always a table scan, and you can’t even force it to use an index. If you want the plan to change after 10 records, rather than waiting until it hits 500 records, an explicit recompile is an easy solution.

  • I liked the article since it collects all the possible uses of the RECOMPILE hint for developers experiencing problems with their query performance.

    • Just be sure to involve your DBA before you use recompile hints in your statements/procedures. I recently discovered a whole host of very, very active stored procedures and table-valued functions that had the OPTION (RECOMPILE) hint in every query – the developer thought that was necessary based on some literature he misinterpreted around functions. As a result, CPU rose to 90+% during core hours. A trace revealed the very high volume of recompiles, and after I removed them (along with disabling auto-update stats on a couple of heavily hit tables) the CPU returned to acceptable levels.

  • Ivan Arzhentinski
    February 10, 2015 4:29 am

    I’ve created a connect entry for a trace flag, which would enable execution statistics collection for OPTION(RECOMPILE) queries.

    https://connect.microsoft.com/SQLServer/feedback/details/1123011

  • Hi Kendra, I’ve got a vendor DB stored proc that (from all appearances) has a nonsensical “WITH RECOMPILE” on it; it runs 6 times per second, and has a very simple structure seemingly not given to bad execution plans / param sniffing issues. I’m looking for ways to quantify the pain the constant recompiles are causing (with the goal of justifying having them remove the WITH RECOMPILE)…any ideas on where I might dig for this? I have a feeling we’re paying a Stupid Tax in CPU cycles recompiling this 6 times a second, but I don’t know a good way to really demonstrate that…

    Many thanks!

    • Kendra Little
      April 30, 2015 9:47 am

      Do you have a restored copy of the database where you can run the procedure in a loop, in two different sessions, with the hint and without?

      The simplest thing in terms of showing managers is just to run it in a tight loop with the hint and show the CPU chart, and then run it without and show the CPU chart.
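
      A minimal driver for that test might look like this – the procedure name and parameter are placeholders; run it against the proc as-is, then against a copy without the hint, and compare the CPU charts:

          SET NOCOUNT ON;
          DECLARE @i INT = 0;
          WHILE @i < 10000
          BEGIN
              EXEC dbo.VendorProc @SomeParam = 1;
              SET @i += 1;
          END;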

      • Yes, I was thinking of doing something like that, with a load test server, running a captured replay trace (with both versions of the proc) repeatedly while monitoring CPU load. Unfortunately our load test server is booked for weeks with various projects so this one has to go to the end of the line! 🙂

        One thing I found interesting…while this proc obviously has the WITH RECOMPILE hint and is running constantly (per SP:Completed events), not logging anything to the plan cache…I don’t capture any SP:Recompile events related to this proc. I was thinking, well, maybe they don’t log that when it’s a WITH RECOMPILE proc, just for other reasons (I do see plenty of SP:Recompiles for other reasons). But that event has a subclass for the WITH RECOMPILE option (11), so I assume it would capture these.

        The looped execution (with and without recompile) does seem like a simpler way to test. I may have to give that a go instead of the more complex replay trace…

  • I have two procedures which cause me issues at seemingly random times. These procedures are not simple read/write/update types. The developers are putting a lot of logic in the procedures. I am not able to stop this, so I have to live with it. During peak hours, one executes between 3,000-4,000 times an hour and the other between 6,000-8,000 times per hour. The symptom I find is that they run long and the processor hits 90-100%. Recompiling them or freeing the procedure cache drops the processor back to normal levels. So to me it seems that, mostly due to all the logic in the procedures, SQL starts making bad choices for the plans. I use SolarWinds DPA and I am able to see a history of the plans. I just don’t know how to fix this other than trying to force a rewrite, which I doubt I will be able to do. I hate to use the hint because normally they execute very well. Any advice you can give?

    • (just my 2c as a fellow commenter) Here’s what I would consider doing in that case…

      Plan guides still scare me. Just a bit too heavy handed…although they are an option.

      I wouldn’t like to do WITH RECOMPILE in your case because they run so often….it might solve your incidents where it goes awry though, but I like actually using the plan cache, being able to see how things run historically, etc.

      What you might consider doing is pulling your bad plan and good plan and comparing them, specifically, the parameters they are compiled with. I assume the issue is parameter sensitivity causing a bad plan every now and then. Analyze the params of the good plan….make sure that the good plan would also run decently (if not ideally) with the bad plan params…and then you could do something like we do for certain procs….at the end of your index or stats maintenance job, run sp_recompile against the proc and then do a test invocation of the proc with known good parameters….this puts your plan into cache and you -should- be OK for a while.

      Ahhh, but you mention writes/updates. That might not be an option for you if you can’t run it (like a simple search proc).

      Other options…hitting it with a regular scheduled sp_recompile. I don’t love this, admittedly. But if nothing else it would limit the scope of a bad plan to likely only 5 minutes or however often you run it. It’s a less extreme option than WITH RECOMPILE because you would be doing 12 compilations per hour instead of 8000. Sort of a Band-Aid solution though.

      • Thanks Nic, those are the only options I can see as well for now. I thought about some type of logic that could detect when those procedures are not doing well and kick off the recompile then.

  • Yeah, you could have a job run at intervals and just query dm_exec_procedure_stats, and if beyond a threshold for logical reads (either compute average, or use last_logical_reads) kick off a sp_recompile. Or use runtime instead of reads. I’ve come close to doing that but usually end up going for the other options. Of course, there’s nothing to prevent you getting another bad plan after the recompile if you get an unlucky set of params again, but it’s pretty unlikely if you usually get good plans.
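
    Something like this, with made-up names and a threshold you’d tune to your own baseline:

        -- Hypothetical watchdog: recompile if average logical reads spike
        DECLARE @avg_reads BIGINT;

        SELECT @avg_reads = ps.total_logical_reads / NULLIF(ps.execution_count, 0)
        FROM sys.dm_exec_procedure_stats AS ps
        WHERE ps.database_id = DB_ID()
          AND OBJECT_NAME(ps.object_id, ps.database_id) = N'ProblemProc';

        IF @avg_reads > 100000
            EXEC sp_recompile N'dbo.ProblemProc';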

    BTW, I use DPA too (I still can’t shake calling it Ignite), great tool.

  • Hi,
    I need your help to clarify the compilation of stored procedures and functions.

    My understanding is that a stored procedure gets compiled during the first execution (not during creation – during creation SQL Server checks the syntax, creates the object, and makes an entry in system tables such as sysobjects, syscomments, etc.) and saves the execution plan in the cache so that further executions will use that plan (provided we are not forcing any recompilation). But in the case of functions, they get compiled on each call.

    Is my understanding correct? Some articles I’ve read say both SPs and functions compile only once and use that plan for further executions if we are not recompiling, but other articles say what I mentioned above (i.e., functions get recompiled on each call).

    Can someone let me know which is correct?

    Also, do we have a WITH RECOMPILE option for functions, as we have for procedures?

    • Hi! Rather than giving you the answer, I’d rather teach you how to get the answer. If you were going to look somewhere to see if a function (or any statement) was compiled, where would you look? Is there a way you could run a query several times, monitoring to see if it got compiled each time?

  • Please advise on the usage per the following reference, as it seems to go against your idea.

    http://stackoverflow.com/questions/20864934/option-recompile-is-always-faster-why

  • I also tried the case of a table variable, and it works as per the above SO post. Using OPTION (RECOMPILE) at the end of the stored procedure gives improved execution time compared to not using it. I did try updating stats, but it didn’t work.

  • I’ve just come across a situation where a statement level recompile in an SP has given a HUGE performance increase vs. an SP level recompile. I mean a factor of >10. From a minute run time down to 4 seconds. This is on 2008R2

    (This is a reporting SP being called occasionally, so the CPU cost of recompile is acceptable vs UI responsiveness)

    Wonder if anyone else has seen this?

    • Paul – your best bet for questions like that is to paste the before & after execution plans at PasteThePlan.com, and then ask a question about them at dba.stackexchange.com.

  • Hi Brent… I *think* I’ve worked out what is going on…

    In the proc there is statement to build a temp table, that is then consumed (twice over) by the main query.

    The way the main query consumes the temp table is the big difference – with statement level recompile, the plan is good.

    With SP level recompile (only), the temp table is consumed *horribly*. Complete with internal 500x row explosion an’ all. Yeah. A million rows becomes 500 million for a few steps, before folding back down to a million again. Ewww.

    What I’m guessing is that with statement recompile, the compilation happens in the context of having stats for the temp table, whereas in the proc recompile version, stats for the temp table won’t exist.

    Make sense?

    So… the takeaway is… if you think you need recompile on every execution, and you’ve got temp table stuff in your proc… *always* do statement level recompile for the query that’s doing the main work.

    And of course, there’s the reasons Kendra mentions above, too. So listen to Kendra. (And just this once, listen to me).

  • Hi Brent,
    I work with SQL Server 2008R2, and the server is rebooted on a monthly basis due to Windows patching. Usually the reboot takes place on the last Sunday of the month during the morning. However, every time that happens, some critical jobs that run daily late at night become incredibly slow, taking more than double the time to complete. After a few days, they get back to their usual duration.
    I understand that the caches are cleared and some queries will recompile, but would that make queries take double the time (from the usual 4 hours to 8 or more)?
    And why do they get back to their normal behavior without any intervention at all after a few days?
    I wonder what could be done to minimize (if not completely avoid) the impact of these monthly reboots.
    Thoughts?

  • Hi Brent,

    We use sys.dm_exec_query_stats for collecting query stats (a customized query hash statistics data collector). On one server I got very strange results when I calculated execution counts from several collections (I use the LAG function and it is producing negative values).
    One query_hash has three plans in the cache (the creation time for all the plans is the end of June 2018, and they are still in the cache). For one plan (at a one-minute collection frequency) the execution count varies – once around 20,000 and another time around 3,000. The source server is SQL 2008 R2.

    Could you have an explanation for this behavior?
    That one plan switches between two ‘levels’ of execution count (and other cumulative values) while the creation_time for the plan is unchanged (during a one-day period it switched about 20 times).

  • Hi Brent,

    Forget my question – I found an error in the collection procedure.

  • Difficult to understand 🙁

  • May I know the criteria by which SQL Server 2014 will automatically recompile a stored procedure that has table variables?

  • Hi,
    We had a stored procedure containing a dynamic query with the following hints: OPTION (MAXDOP 4, RECOMPILE). This proc strangely started taking more time to execute (10 sec compared to 1 sec previously).
    Do you think the RECOMPILE inside the dynamic query is really taken into account by the SQL engine?

    • Shah – yes, the recompile hint really does tell SQL Server to recompile – but that doesn’t necessarily mean that it gets a great plan, as we cover in our Mastering Query Tuning classes.

  • How do I skip recompiles for a column where column-level encryption is enabled?

