sp_BlitzIndex Pro Tips for Consultants & Contractors

sp_BlitzIndex analyzes the design & health of your clustered & nonclustered rowstore indexes and gives you suggestions on how to get higher performance without changing your queries.

If you run it with no parameters, you get advice on the indexes in the current database:


You get warnings like:

  • High value missing index – it tells you what table & columns to consider indexing, how many times the index would have been used, and how many indexes already exist on the table. That’s useful for quickly deciding whether you should consider adding that index as-is, or whether you might need to review the other indexes that exist on the table already, and maybe de-duplicate them.
  • Borderline duplicate indexes: in that screenshot, the Badges table has two indexes that both start with UserId. One of them can be flat out dropped because it’s a narrower subset of the wider index, and it’s not getting any reads anyway. Dropping it will cut 27M rows and 380MB from the database, making your maintenance jobs faster, too.
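If you do decide to drop the narrower duplicate, the fix is a one-liner. The table and index names below are made up for illustration – sp_BlitzIndex’s Drop TSQL column hands you the exact statement for your own indexes:

DROP INDEX IX_UserId ON dbo.Badges;

Script out the matching CREATE INDEX statement first so you have an undo path if a query regresses.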

The defaults are a great start, but if you’re a consultant or contractor, you can do way better with a few parameters.

How to list all of the indexes in all databases:
sp_BlitzIndex @Mode = 2, @GetAllDatabases = 1

When I want to work on a client’s indexes, but I don’t want to be connected directly to their server, this is a lifesaver. I can dump a complete inventory of their indexes out into a spreadsheet, and then I have all kinds of details about the indexes that already exist:


Earlier, when I said the “high value missing index” warnings meant I may need to add a new index to a table – well, I need to find out what indexes already exist on that table. Armed with the contents of this inventory, I can answer that question without being connected to the client’s server.

There are dozens of columns in this, but here are some of my favorites:

  • Rows, Reserved MB – I sort these in descending order and ask the client, “Were you aware that you’ve got 75GB in this table called Sales_Backup, and 50GB in a table called WebSiteClickLog? Should we maybe clean those out and cut our backup & restore times?” Every single time, folks are just shocked about what they’ve got left lying around in the database.
  • Total Reads, User Updates – when I put this into Excel, I’ll filter for Total Reads = 0, and then sort by User Updates descending. This helps me quickly identify the indexes I need to drop in order to reduce blocking problems. Speaking of which…
  • Row Lock Wait ms, Page Lock Wait ms – along with averages on these, too, which makes it really easy to spot where my blocking is happening. When combined with the Aggressive Indexes warning, it’s a piece of cake to solve a client’s blocking issues just armed with the contents of this sp_BlitzIndex output.
  • Page IO Latch Wait ms – ever wonder which tables are getting scanned on disk and not cached in RAM? Well, now you know.
  • Drop TSQL, Create TSQL – when you need to build a list of commands to run at the client, this gets you a head start, plus helps you build the undo script to back out your changes.
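If you want to sanity-check that “Total Reads = 0” filter directly on a server, you can run a rough equivalent against the index usage DMV. This is a sketch, not sp_BlitzIndex’s exact logic – and remember the DMV’s counters reset when SQL Server restarts:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS total_reads,
       s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.index_id > 0
  AND s.user_seeks + s.user_scans + s.user_lookups = 0
ORDER BY s.user_updates DESC;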

You can also use the @OutputDatabaseName, @OutputSchemaName, and @OutputTableName parameters to write the @Mode = 2 results into a table. This is useful if you need to track index utilization over time to see whether an index is really getting used, or whether it should be dropped. (For example, some of my clients run this daily in an Agent job, and that way they can check index usage even if a server has been rebooted.)
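For example, a daily Agent job step might look something like this – the DBAtools database and the table name here are just placeholders, so point it at whatever utility database you already keep:

EXEC sp_BlitzIndex @Mode = 2, @GetAllDatabases = 1,
     @OutputDatabaseName = 'DBAtools',
     @OutputSchemaName = 'dbo',
     @OutputTableName = 'BlitzIndex_Mode2';

Each run appends another snapshot, so you can compare usage numbers across reboots.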

How to get all missing indexes in all databases:
sp_BlitzIndex @Mode = 3, @GetAllDatabases = 1

Clippy’s missing index suggestions aren’t great, but they’re sure better than trying to come up with ideas yourself when you’re on a time crunch. Exporting them all out makes it easier for you to do performance tuning disconnected, too:
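To give a sense of what those suggestions turn into: a @Mode = 3 row lists equality columns, inequality columns, and included columns, and you translate it into a create statement along these lines. The table and columns here are purely hypothetical:

CREATE INDEX IX_Posts_OwnerUserId
    ON dbo.Posts (OwnerUserId)
    INCLUDE (Score, CreationDate);

Don’t create them blindly, though – compare each suggestion against the existing index inventory first, exactly as described above.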


Note that at the top, you also get the server’s days of uptime. This way, you can be sure that what you’re looking at is safe: you wouldn’t want to base your missing index recommendations on a server that’s only been up briefly, or one that’s been up too long (since the queries might have only run weeks ago, not in the current app build).
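If you just need that uptime number on its own, it’s one query away – sqlserver_start_time is a column in sys.dm_os_sys_info on modern versions of SQL Server:

SELECT sqlserver_start_time,
       DATEDIFF(DAY, sqlserver_start_time, SYSDATETIME()) AS days_of_uptime
FROM sys.dm_os_sys_info;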


How to get all this in Excel:
The Consultant Toolkit

I don’t have time to connect to a client’s server, run a bunch of diagnostic scripts, and copy/paste the data into Excel. That’s why we built The Consultant Toolkit: it does all that for you.

I simply have my clients run it – client-friendly instructions are included – and then send me the output zip file that it creates. (You can even have it automatically upload the zip file to your Amazon S3 bucket if you like.) Then, I’ve got:

  • An Excel file with all kinds of diagnostics, including sp_Blitz, sp_BlitzCache, wait stats, deadlocks, and more
  • Execution plans
  • Deadlock graphs

And I can get to work making their server faster without even connecting to it. This helps me deliver more value to clients, more quickly.

This week, it’s a lower cost to you, too: add it to your cart and then use coupon code StartConsulting for 75% off your first year, bringing it down to just $123.75. European Union customers can buy here, and the coupon code works there too.


12 Comments

  • Now if Brent can tell us how to build a successful consulting business I would be set 😉

  • “Every single time, folks are just shocked about what they’ve got left lying around in the database.”

    Almost every application I’ve worked on had at least one logging table (ChangeLog, ErrorLog, TransactionHistory, etc.) some developer set up without creating any purge mechanism, so it could grow without bound until it created some major problem.

    After I pointed this out in one system, the implemented “fix” was to copy the contents to a dated copy of the table (TableNameCopy20200818, for example), then truncate the main table. The number of dated copies was allowed to grow without bound! This made it through a code review without comment, and everyone thought it was great until I found it in production. They then had to deploy a second fix to drop those “copy” tables with a date part in the name older than n days.

  • Hi Brent, I would like to know if the Consultant Toolkit includes tools to help analyze, or make decisions about, a database installed at multiple sites, with the results sent to Amazon. A kind of BI analysis for general trends, because, of course, data size, distribution, and usage vary between customers.

  • This is great!
    Re your Page IO Latch Wait ms comment – tables scanned on disk but not cached in RAM – what are the implications… not enough RAM?

    • Thanks, glad you like it. No, unfortunately it’s a lot more complex than that – if you’ve got my Live Class Season Pass, check out the Mastering Server Tuning module on how to fix PAGEIOLATCH waits.

      • Ah gutted – can’t get that as I’m in the EU.

        Oh well, it’s given me something to try to research, as I’m surprised how many of my indexes are apparently not being cached.


