A couple of times a year, I disconnect from the Internet to sit down with a planning book, my Epic Life Quest, and my future task list. I spend time making sure my day-to-day tasks are still prioritized right. When I do that, I often find out that I’ve lost focus on a particular task, or I’ve been spending too much time on something. We only get so many hours in a day, and I gotta figure out where I can be the most effective.
For 2020, one of the things I decided was to update the First Responder Kit just once per quarter instead of once per month. At the time, I was thinking to myself, “These scripts are pretty mature. How many changes can really be left to make?” Uh, as it turns out, a lot, as you can see by this month’s changelog! So that plan is out the window – let’s keep going with monthly releases, hahaha.
Big shout out to the 9 folks who contributed code & issues for this month’s release.
To get the new version:
- Download the updated FirstResponderKit.zip
- Azure Data Studio users with the First Responder Kit extension:
ctrl/command+shift+p, First Responder Kit: Import.
- Download the updated Consultant Toolkit in your account
- EU customers: check your email for the updated version from Gumroad, our European distributor
Consultant Toolkit Changes
Updated the First Responder Kit with this month’s script updates, plus:
- Improvement: new Indexes Missing tab with the missing indexes from all databases. Same output as running sp_BlitzIndex @Mode = 3, and most of this stuff is available in the Indexes M4 tab, but when you have to do a massive index cleanup, it’s helpful to be able to slice & dice these in a table.
- Fix: sp_BlitzCache memory grant & spill sorts weren’t running on Azure SQL DB. (Thanks, Erik.)
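If you want the same missing-index data the new tab is built from, here’s a minimal sketch of the equivalent query, assuming sp_BlitzIndex from the First Responder Kit is already installed in your database (the parameter names are real; the combination shown is just illustrative):

```sql
-- Illustrative: pull missing indexes from all databases,
-- the same output the new Indexes Missing tab is based on.
EXEC dbo.sp_BlitzIndex
    @GetAllDatabases = 1,  -- scan every database, not just the current one
    @Mode            = 3;  -- Mode 3 = missing indexes only
```

From there you can copy the result set into Excel (or query the output table) to slice & dice during a big index cleanup.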
sp_Blitz Changes
- Improvement: added non-default database configs for enable_broker and honor_broker_priority. (#2187, thanks parlevjo2.)
- Improvement: added more-info query when we catch people running unusual DBCC commands. (#2211)
- Fix: updated end-of-support dates for unsupported builds check. (#2221, thanks Matt Saunders.)
sp_BlitzCache Changes
- Fix: if you had >50 databases, you’d get the “multiple plans for one query” warning even if you only ran a single query once with sp_MSforeachdb (since it’d put a plan in cache for each database). If you have >50 databases, we now set the threshold for this warning to 2x the number of databases. (#2249)
- Improvement: new warning for Table Spools. (#2235, thanks Erik Darling.)
- Improvement: checkid 999 (plan cache is expiring quickly) now has a URL for more details. (#2214)
- Improvement: the warnings in the 2nd result set now exactly match the warning names in the Warnings column in the top result set, making it easier to look up warnings you don’t understand. (#2212)
- Fix: the stale-statistics check was also labeled checkid 999, so we moved it to checkid 997. (#2214)
- Fix: each time sp_BlitzCache ran, it was putting an insert-into-#ReadableDBs query into the plan cache, which made demos a little cluttered. (#2253)
- Fix: avoid plan cache pollution by avoiding the use of a random ID when building the output table name. (#2252, thanks Adrian Buckman.)
sp_BlitzFirst Changes
- Improvement: in @ExpertMode = 1, added these Availability Groups Perfmon counters (plus a few others) to help troubleshoot high HADR_SYNC_COMMIT waits that aren’t related to CPU or storage. I haven’t added any alerts for specific thresholds though – just showing these counters in the Perfmon section of the @ExpertMode = 1 output so you don’t have to go setting up a Perfmon session. (#2246)
- Improvement: if the high forwarded fetches check (checkid 29) fires, we look in TempDB for the top 10 tables with >100 forwarded fetches and include them in the results. I would really like to tell you more, but it turns out it’s really hard to identify what code & session is responsible for a temp table. (#2248)
- Fix: the ##WaitCategories table is no longer repopulated with every run. (#2208, thanks bgpjr.)
- Fix: error when hitting the long-running blocking check. (#2218, thanks sm8680.)
- Fix: when @ExpertMode = 1, really slow sp_BlitzWho runtimes (like for hundreds of sessions rendering across a slow network pipe) would cause sp_BlitzFirst’s samples to be shorter since the sp_BlitzWho runtime was included in the sample time. Now the clock only starts after sp_BlitzWho finishes. (#2244)
sp_BlitzIndex Changes
- Improvement: added @Debug parameter that outputs the working temp tables and the dynamic T-SQL used to populate them. (#2229)
- Fix: if you ran it with @GetAllDatabases = 1, @Mode = 3, it would run even if you had >50 databases. Now requires @BringThePain = 1 even for @Mode = 3. (#2255, thanks DBAdminDB.)
- Fix: non-partitioned columnstore indexes were reporting as partitioned, with duplicate row counts. Root cause is multiple rows in sys.dm_db_index_operational_stats for different HOBTs belonging to the same columnstore index. (#2228)
sp_BlitzWho Changes
- Improvement: added an OPTION (RECOMPILE) hint, and if the SQL Server has >50GB memory and is on a recent patch level, we add a max memory grant hint of 1% to reduce sp_BlitzWho’s memory grant request. (#2238, thanks Tara Kizer.)
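As a quick reference, here’s a hedged sketch of how you might call the scripts touched by this release, using the parameters mentioned above (the parameter names are real; the values are just examples, not recommendations):

```sql
-- sp_BlitzFirst: take a 30-second sample; @ExpertMode = 1 includes the
-- Perfmon section with the new Availability Groups counters.
EXEC dbo.sp_BlitzFirst @ExpertMode = 1, @Seconds = 30;

-- sp_BlitzIndex: the new @Debug parameter prints the working temp tables
-- and the dynamic T-SQL used to populate them. With >50 databases,
-- @Mode = 3 now requires @BringThePain = 1 as well.
EXEC dbo.sp_BlitzIndex
    @GetAllDatabases = 1,
    @Mode            = 3,
    @BringThePain    = 1,
    @Debug           = 1;

-- sp_BlitzWho: now lighter-weight thanks to OPTION (RECOMPILE) and, on
-- recent builds with >50GB memory, a 1% max memory grant hint.
EXEC dbo.sp_BlitzWho @ExpertMode = 1;
```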
Power BI Dashboard for DBAs Changes
Thanks to the Power BI wizardry of Eugene Meidinger, there are a ton of changes and improvements in progress in the Power BI Dashboard. Consider this month a very “alpha” release: it’s not really ready for public consumption yet, because Eugene changed a lot and I need to tweak a few things to help get you started, but it’s off to a great start. I’ll do an update to it over the coming week and post a few blog posts explaining what’s new.
I highly recommend Eugene’s Pluralsight courses, including:
- Power BI Data Preparation Playbook – because seriously, a little preparation goes a long way with Power BI, as I’ve learned the hard way a couple of times.
- Deploying and Publishing Power BI Reports – because if you don’t plan for this, you’re going to end up with a bunch of users passing around PBIX files, and they’ll all be making their own edits, and you’ll have a change tracking mess.
- Leveraging Timely On-premises Data with Power BI – because the on-premises data gateway service is really useful. I use it myself with Postgres.
When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. If you need a free invite, hit SQLslack.com. Be patient – it’s staffed with volunteers who have day jobs.
When you find a bug or want something changed, read the contributing.md file.
When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes us!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.