Announcing sp_BlitzCache v2.5.0

Woohoo, another half of the year has gone by, which means it’s time for another version of sp_BlitzCache®. The last version came out on June 18th, about four months ago. In that time frame, you have requested some new features, and they’ve slowly trickled into sp_BlitzCache®.

If you don’t want to wait, head over to the download page and get yourself an sp_BlitzCache®!

In no particular order:

Default Values

Raul Gonzalez pointed out that if you set all of the parameters to NULL, sp_BlitzCache® will fail. This happens by default when you script execution through SSMS. Why SSMS does this is beyond me, but there you have it. sp_BlitzCache® will now get really upset and fail to run with a nice error message if you set any of the following to null:

  • @top
  • @sort_order
  • @query_filter
  • @reanalyze

Yay, errors!
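To make that concrete, here's the shape of a call that sets all four of those parameters explicitly (the values shown are plausible illustrations, not the only valid ones); pass NULL for any of them and you now get a clear error instead of a mysterious failure:

```sql
/* A minimal sketch -- these parameter values are illustrative.
   Setting any of these four parameters to NULL now raises an error. */
EXEC sp_BlitzCache
    @top          = 10,      /* number of plans to return */
    @sort_order   = 'cpu',   /* must not be NULL */
    @query_filter = 'all',   /* must not be NULL */
    @reanalyze    = 0;       /* must not be NULL */
```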

Expert Mode

You guys are smart. I mean really smart. To help you keep being smart, I added a bunch of columns to expert mode. Ready?

  • Compilation memory
  • Compilation CPU time
  • Compilation wall clock time
  • Query hash and query plan hash
  • Cached plan size
  • Serial desired and required memory

Lean Options

sp_BlitzCache® used to pull back a lot of data by default. I mean… a lot of data. This could cause execution time to go through the roof on busy systems and systems with big plans. The default value for @top has been reduced to 10. You can still set it to whatever you want if you’d rather have the old, heavier behavior back.
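If you do want the firehose back, just raise @top yourself (50 here is an arbitrary example, not a recommendation):

```sql
/* Pull back more plans than the new default of 10 -- 50 is arbitrary */
EXEC sp_BlitzCache @top = 50;
```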

There’s also a new @skip_analysis option. This bypasses the XML shredding of the execution plans. When you skip the XML analysis, you won’t find out if you have plan warnings or missing indexes, but your executions of sp_BlitzCache® will be much faster and you’ll use less CPU. Use this on heavily resource-constrained systems.
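On a resource-constrained box, that looks like:

```sql
/* Fast, low-CPU run: skip the XML shredding entirely */
EXEC sp_BlitzCache @skip_analysis = 1;
```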

Speaking of XML – the XML checks were re-written to be smarter and require less CPU time. We still have to scan the XML, but I’ve re-written the XQuery in ways that shouldn’t need anywhere near as much CPU time as before. The upside is that this also makes it easier to add more checks to sp_BlitzCache®.
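For a feel of what that shredding involves, here's a simplified sketch (not sp_BlitzCache's actual query) of using XQuery to check cached plans for missing-index warnings:

```sql
/* Simplified sketch of XQuery plan shredding -- not the actual
   sp_BlitzCache implementation. exist() returns 1 when the cached
   plan's XML contains a MissingIndexes element. */
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.plan_handle,
       qp.query_plan.exist('//MissingIndexes') AS has_missing_index
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE qp.query_plan IS NOT NULL;
```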

Sorting Results

There are two big changes to how results are sorted. One for speed and one for functionality.

An additional sort order option has been added for recent compilation. If you want to sort by recent compiles, just run EXEC sp_BlitzCache @sort_order = 'compiles'.

Some sort orders have always been slow (I’m looking at you, “executions per minute”). New changes push the sort earlier in the query and, in testing, these changes improve performance, especially on busy systems.

Making Big Numbers Smaller

There was a potential for the totals temporary table to have integer overflows. I fixed that.

Basically, there was a significant chance that you might run into problems once you’ve burned through roughly 596 hours of CPU time (that’s 2,147,483,647 milliseconds, BTW). To help you avoid errors when you’re actually using your CPUs, I added a bunch of math to make the numbers smaller. No, really, that’s about all that I did.
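To see why the numbers blow up: INT tops out at 2,147,483,647, and summing per-query CPU as milliseconds crosses that at around 596 hours. Widening to BIGINT before aggregating avoids the overflow; this is a sketch of the general idea, not the exact sp_BlitzCache code:

```sql
/* Sketch of the overflow fix -- not the exact sp_BlitzCache code.
   total_worker_time is reported in microseconds; summed as INT it
   overflows quickly, so widen to BIGINT before aggregating, then
   divide down to milliseconds. */
SELECT SUM(CAST(qs.total_worker_time AS BIGINT)) / 1000 AS total_cpu_ms
FROM sys.dm_exec_query_stats AS qs;
```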


14 Comments

  • Michel Zehnder
    October 23, 2015 8:26 am

    Hi Jeremiah

    Thanks for that – good stuff!
    The link that gets sent out is however still pointing to the 20151006 version which includes sp_BlitzCache 2.4.6.

    Cheers
    Michel

    Reply
  • Thanks for all you guys do to help the community. I, for one, greatly appreciate zee tools you guys give us to make life easier to find ways to make it harder. Thanks! =)

    Reply
  • Great thanks! Awesome work you do, guys…

    Reply
  • I realize Jeremiah has ‘retired’, but this still seems like the appropriate place to post:

We have a dog of a server, and when running sp_BlitzCache @skip_analysis = 1, @database_name = 'SNB02', @top = 30

    I received within ssms “An error occurred while executing batch. Error message is: Exception of type ‘System.OutOfMemoryException’ was thrown.”

As I understand it, that is an SSMS error (my PC) and not a server error. I had an earlier version of sp_BlitzCache that was not completing, and I figured the upgrade to 2.5.0 with @skip_analysis would ease the load. That seems not to be the case. Any advice on parameters I can select to get it to finish? thx

    Reply
    • David – it’s tough for us to troubleshoot SSMS errors on your PC, unfortunately. You could try remote desktopping into another server with more memory, ideally a development server, and running it there.

      Reply
I have been getting the error below in the Messages tab of SSMS for a while. I still get results back, though. Line 117 is part of the comments, so I’m not sure what the problem truly is.

    Executing sp_BlitzCache v2.5.0
    Setting up temporary tables for sp_BlitzCache
    Determining SQL Server version.
    Creating dynamic SQL based on SQL Server version.
    Adding SQL to collect trigger stats.
    Collecting execution plan information.

    Msg 8115, Level 16, State 2, Line 117
    Arithmetic overflow error converting expression to data type int.
    The statement has been terminated.

    Computing CPU, duration, read, and write metrics
    Warning: Null value is eliminated by an aggregate or other SET operation.
    Checking for downlevel cardinality estimators being used on SQL Server 2014.
    Warning: Null value is eliminated by an aggregate or other SET operation.
    Checking for query level SQL Server issues.
    Checking for forced parameterization and cursors.
    Populating Warnings column
    Building query plan summary data.
    Displaying analysis of plan cache.

    Reply
  • Thanks Jeremiah for that …. it is really useful 🙂

A suggestion if I can … wouldn’t this be better if it also returned the wait types generated by each query?

    Thanks again and always 🙂

    Reply
