Which sp_configure Options Clear the Plan Cache?

When you change MAXDOP and run RECONFIGURE, SQL Server clears the plan cache. (It’s documented, by the way, as is the rest of the behavior I’m about to show you, but sometimes I like to learn the hard way by running experiments.) Here’s the easiest way to see it – but make sure to only do this on a development server, not in production:
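Something along these lines – a quick sketch, where the demo query and the MAXDOP value of 2 are just stand-ins (any cached plan and any value will do):

/* Enable advanced options so we can touch MAXDOP: */
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO

/* Put something in the plan cache: */
SELECT name FROM sys.databases;
GO

/* Count the cached plans: */
SELECT COUNT(*) AS plans_cached FROM sys.dm_exec_cached_plans;
GO

/* Change MAXDOP and reconfigure: */
EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;
GO

/* Count again: */
SELECT COUNT(*) AS plans_cached FROM sys.dm_exec_cached_plans;
GO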

When you check again, you’ll see that your plan cache has gotten the Men in Black treatment – an empty result set, just as if you’d run DBCC FREEPROCCACHE.

It makes sense that changing parallelism from single-threaded (1) to any other value would require building plans anew, but you might be surprised to see that any change blows the plan cache – even from, say, 4 to 8.

Or even if you change it from 4 to, uh, 4.

There’s no logic built into RECONFIGURE – it just flat out blows the plan cache if you issue any sp_configure command to alter MAXDOP, regardless of whether the value was changed or not. (Remember that – it’s going to come in handy here in a second.)
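To prove it, read the current value and set it right back – a quick sketch (the variable is mine):

/* Read the current MAXDOP, then "change" it to the exact same value: */
DECLARE @maxdop INT = (SELECT CONVERT(INT, value_in_use)
                       FROM sys.configurations
                       WHERE name = 'max degree of parallelism');
EXEC sp_configure 'max degree of parallelism', @maxdop;
RECONFIGURE;
/* The plan cache still gets wiped, even though nothing actually changed. */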

So what about other options? To see the full list, check the sys.configurations catalog view:
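Something like this works (the column list is just what I find useful):

SELECT name, value, value_in_use, minimum, maximum
FROM sys.configurations
ORDER BY name;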

Dang, there’s dozens! Clearly, I’m not going to run each of these by hand to write a blog post, dear reader. To the power of dynamic SQL! Knowing that a non-change still blows the plan cache, let’s write dynamic SQL that will:

  • Put a query in the plan cache
  • Run sp_configure for one of the configuration options, passing its maximum value
  • Run reconfigure
  • Check to see if the plan cache was cleared
  • Run sp_configure again with the original value to set it back

DO NOT RUN THIS ON A SERVER YOU CARE ABOUT.

But here it is in case you wanna poke holes in my logic, because I’m pretty sure there’s some:
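This is a sketch of the approach rather than the exact script – the variable names and the crude plan-count check are mine:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO

DECLARE @name SYSNAME,
        @original SQL_VARIANT,
        @maximum SQL_VARIANT,
        @before INT,
        @after INT,
        @sql NVARCHAR(MAX);

DECLARE config_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name, value, maximum
    FROM sys.configurations;

OPEN config_cursor;
FETCH NEXT FROM config_cursor INTO @name, @original, @maximum;

WHILE @@FETCH_STATUS = 0
BEGIN
    /* Step 1: put a query in the plan cache. */
    EXEC sp_executesql N'SELECT name FROM sys.databases;';
    SELECT @before = COUNT(*) FROM sys.dm_exec_cached_plans;

    /* Steps 2 & 3: set the option to its maximum value and reconfigure.
       WITH OVERRIDE skips value checking for some options. */
    SET @sql = N'EXEC sp_configure ' + QUOTENAME(@name, '''') + N', '
             + CONVERT(NVARCHAR(40), @maximum) + N'; RECONFIGURE WITH OVERRIDE;';
    EXEC sp_executesql @sql;

    /* Step 4: crude check – if the plan count dropped, the cache was cleared. */
    SELECT @after = COUNT(*) FROM sys.dm_exec_cached_plans;
    IF @after < @before
        PRINT @name + N' cleared the plan cache';

    /* Step 5: put the original value back. */
    SET @sql = N'EXEC sp_configure ' + QUOTENAME(@name, '''') + N', '
             + CONVERT(NVARCHAR(40), @original) + N'; RECONFIGURE WITH OVERRIDE;';
    EXEC sp_executesql @sql;

    FETCH NEXT FROM config_cursor INTO @name, @original, @maximum;
END;

CLOSE config_cursor;
DEALLOCATE config_cursor;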

The end result:

sp_configure options that clear the plan cache

Voila – these options should probably only be changed during a maintenance window, and you should monitor for parameter sniffing issues afterwards.

6 Comments

  • What SQL Server Version and Edition did you run this on?

    I’m getting slightly different results for 2008 R2 (SP3) – Standard.
    (I also had to disable the Affinity Mask settings as SQL didn’t like a mask of 2147483647…)

    access check cache bucket count
    access check cache quota
    cost threshold for parallelism
    cross db ownership chaining
    index create memory (KB)
    max degree of parallelism
    max server memory (MB)
    max text repl size (B)
    min memory per query (KB)
    query governor cost limit
    query wait (s)
    remote query timeout (s)

  • A bunch of smart people have told me to be very paranoid about clearing the plan cache. Having never been burned by this myself, I find these concerns overblown:
    1) Yes, clearing the cache might cause different, worse query plans to show up. But that could just as easily happen due to an automatic statistics update. If a query is in danger of switching to a bad plan, that danger exists even without clearing the plan cache.
    2) Yes, recompiling a query plan has a cost, but that cost is tiny compared to the cost of executing plans. In a data warehouse, I would expect compilation cost to be tiny compared to query cost, even for a single query (similarly, data import CPU costs are usually driven more by the amount of data being moved, rather than generating the plan itself). In an OLTP database, I would expect that most of the queries are cheap to compile, cheap to run, and are run frequently…but compilation only has to happen once per distinct query.

    Unless your database is constantly bumping up against a CPU ceiling (i.e., not even enough head room to deal with micro-variation in demand), I would think freeing the procedure cache would just be a blip, lasting for under a second and being indistinguishable from blips caused by variance in demand.

    What kind of extreme databases are people running where this is anything more than a minor footnote?

    • Just an example off the top of my head – let’s say you get asked to look into why this database you’ve never seen before has been running slowly over the last couple of days, but it’s not slow at this moment. Right off the bat, you notice that MAXDOP and Cost Threshold for Parallelism are using the factory settings and you want to change them based on the server specs. If you just go ahead and update them at the start, you lose the plan cache and with it all the related data you could have looked at for query performance.

      (And yes, I spend a lot of time in sp_BlitzCache, often on databases I’ve never seen before.)

      • Wayne, you make an excellent point. I was primarily responding to the final sentence, where Brent warned that the configuration changes should be made during a maintenance window because the procedure cache would reset (rather than warning that changes can be made whenever, but performance statistics should be collected before they’re inadvertently lost due to recompiles).

        Personally, I have a tendency to (ab)use the procedure cache to collect simple database statistics, so the idea that making a change would reset such statistics is not very concerning; I’d usually want to compare stats before/after a change myself, so I’d probably have reset the procedure cache myself (and, obviously, all of the changes that reset the procedure cache are changes that tend to have significant performance impact).

  • Microsoft has a few more options documented that they say will wipe the plan cache, including Max/Min Server Memory:
    https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-freeproccache-transact-sql?view=sql-server-2017
