New sp_Blitz® and sp_BlitzCache® Updates


Ah, spring, when a young man’s fancy lightly turns to thoughts of updating his DMV queries.

sp_Blitz®, our free health check stored procedure, gets several new checks and a whole crop of fixes and improvements. I’d like to call your attention to one in particular.

Julie Citro fixed check 1, THE VERY FIRST CHECK IN THE SCRIPT, which had a logic bug when looking for databases that had never been backed up. Sure, the next line had a workaround, but nobody caught this for years. All of you who ever read the source code, Julie Citro is officially a better tester than you.

sp_BlitzCache™, our performance tuning tool, has new updates too.

You can now run with @reanalyze = 1 on a first execution and sp_BlitzCache™ won’t break. You can run sp_BlitzCache™ from multiple SPIDs at once! And if you had triggers, sp_BlitzCache™ would start to pick up its own activity – that’s been stopped.
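For context, here’s a minimal call sketch – the @reanalyze parameter name comes from the notes above; the dbo schema prefix is just an assumption:

```sql
-- Per the release notes, this now works even as the first run in a session,
-- and concurrent executions from multiple SPIDs are safe.
EXEC dbo.sp_BlitzCache @reanalyze = 1;
```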

Also, an integer overflow has been fixed – queries using more than 28,000 days of CPU since startup will no longer cause an arithmetic overflow.
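As a rough illustration of why that number is roughly where it is (my arithmetic, not the procedure’s actual expression): 28,000 days of CPU expressed in seconds already exceeds what a 32-bit INT can hold, so any intermediate INT math on that value would blow up.

```sql
-- 28,000 days in seconds: 28,000 * 86,400 = 2,419,200,000,
-- which is larger than the INT maximum of 2,147,483,647.
SELECT CAST(28000 AS BIGINT) * 86400 AS cpu_seconds;
-- CONVERT(INT, ...) on that value would raise
-- Msg 8115: Arithmetic overflow error converting expression to data type int.
```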

You can grab the updated scripts in our First Responder Kit.



16 Comments

  • If I have already subscribed and have been reading diligently for many weeks, is it correct to re-enter all my information as if I was signing up fresh each time an update is released?


    • Mitch – yes, the email subscriptions don’t deal with the EULA. You have to accept the EULA to get the code. If you subscribe to new-script-notifications in that EULA, then you’ll get an email with the direct link each time one comes out. (I sent that out to those subscribers this morning.) Enjoy!

      • Subscriptions, EULAs – how do you want anyone to contribute?

        Reduce friction and put the sources of all SPs on GitHub. Start accepting PRs. Profit!

        • WQW – that sounds great, and I totally think you should go for it. Build the next great SQL Server utility script. Make me jealous with all your baller profits! 😉

          In all seriousness, that’s one of the things that makes the community great. There’s a whole array of choices – paid software, free software, open source software, you name it. You’ve got lots of options out there to choose from, and if you don’t like ours, by all means, pick another option. I wouldn’t blame you at all if you’re a contributing kind of person.

  • Bob McAusland
    May 1, 2015 5:01 am

    Thanks for all the work you guys put into these tools. With a large number of databases to maintain, I’ve found them invaluable. Keep up the great work!

  • Agreed. Any “hoops” to be jumped through are far outweighed by the benefits these otherwise-free tools provide.

    • Sorry for that, but as a developer, source control is mandatory for anything I write. Compared to GitHub, the current distribution method looks like an e-mail harvesting effort.

      I’m not arguing the benefits for the DBA community. If it’s going to be a community effort, then there are ways to reduce friction for all those who have something to contribute.

      Profit, like get even more popular :-))

  • Julie Citro
    May 1, 2015 11:14 pm

    I’m famous!

  • Andrew Berry
    May 6, 2015 8:51 am

    Thanks for these scripts. They are really useful (although they have given me quite a bit of work to do…)

    I have found quite a persnickety issue with them. I’m not 100% sure it’s an issue, but I have a fix in for it.

    Basically, I ran sp_Blitz and it reported that I had 4 procs with the WITH RECOMPILE command in them. But when looking into the issue, 2 of them had the WITH RECOMPILE commented out.

    So adding a simple AND ROUTINE_DEFINITION NOT LIKE N'%--WITH RECOMPILE%' solves the *issue*.

    But yeah, cracking scripts. Now to crack on to 277 Hypothetical Indexes… 😀

    • Great, thanks, glad you like it. Unfortunately, that filter doesn’t quite work – a proc could contain WITH RECOMPILE both commented out and live, and it would wrongly slip past that check. Good idea, though! For future improvement suggestions, check out the documentation – it explains how to submit improvements.
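      To illustrate the failure mode with a contrived proc (not from the shipped checks – the name is made up):

      ```sql
      -- This proc genuinely uses WITH RECOMPILE, but because it ALSO contains
      -- a commented-out copy, the proposed NOT LIKE N'%--WITH RECOMPILE%'
      -- filter would exclude it from the check results.
      CREATE PROCEDURE dbo.BothFlavors
          --WITH RECOMPILE  /* old, commented out */
          WITH RECOMPILE    /* live */
      AS
      BEGIN
          SELECT 1;
      END;
      ```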

  • Hi Brent!

    On this page there is a guideline on how to set the autogrow amount for the data files. Is there any reasoning behind the 256 MB for data files and 128 MB for log files, or is that based on the storage?

    Thank you.

    • KL – it’s a good starting point, but definitely feel free to tune it based on your own storage subsystems.

      • Thanks for the response, Brent. I always feel confident in following your advice and wanted to send it out as a recommendation to the team, so was trying to verify the sources =) Currently our autogrow is set to the default of 10% and we do not pregrow our databases.
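        For anyone else wanting to move off the 10% default, a sketch of what that might look like – the database and logical file names here are placeholders (check yours in sys.database_files):

        ```sql
        -- Fixed-size growth increments, per the guideline discussed above:
        -- 256 MB for data files, 128 MB for log files.
        ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_data', FILEGROWTH = 256MB);
        ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log',  FILEGROWTH = 128MB);
        ```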

