This week I’m running a series of posts introducing the fundamentals of index tuning. Yesterday, I used Brent’s 5 and 5 Rule to explain why you want to aim for around 5 indexes per table, and 5 columns (or fewer) per index. Now, how do we get from our current database situation to a better place?
When someone hands me a database and tells me to make it go faster without changing the code, I start with my D.E.A.T.H. Method:
- Deduplicate – remove fully-identical and nearly-identical indexes
- Eliminate indexes that aren’t getting used
- Add indexes to make queries go faster
- Tune query plans with hand-crafted indexes for specific queries
- Heaps usually need clustered indexes
Deduping and Eliminating First
I start with Deduping and Eliminating because when I’m given a database, many of the tables have a lot more than 5 indexes per table, and the indexes are crazy wide with lots of columns included. I need to get rid of the dead weight first before I start adding indexes – if I add more workload on my already over-taxed storage, I might make the problem worse instead of better.
My weapon of choice is sp_BlitzIndex, our open source index design tool that (among many other things) warns you about duplicate indexes:

In the above case, we’ve got two indexes on the dbo.Posts table, both on AcceptedAnswerId. Every time we insert into dbo.Posts, we pay a 2x penalty (both in speed and in storage capacity) to maintain those two duplicate indexes. The penalties continue every time we do a backup, run CHECKDB, or do index maintenance.
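To picture what that looks like in code, here’s a hedged sketch – the index names are hypothetical, and yours will have whatever names past developers (or Clippy) gave them:

```sql
-- Two hypothetical indexes with exactly the same key column:
CREATE INDEX IX_AcceptedAnswerId       ON dbo.Posts (AcceptedAnswerId);
CREATE INDEX IX_Posts_AcceptedAnswerId ON dbo.Posts (AcceptedAnswerId);

-- One of them is enough. Dropping the duplicate halves the write,
-- storage, backup, and maintenance cost for that key:
DROP INDEX IX_Posts_AcceptedAnswerId ON dbo.Posts;
```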
By default, sp_BlitzIndex focuses on the biggest bang-for-the-buck improvements, things that are really easy to fix. Absolutely identical indexes on big tables fall into that category.
To get even more improvements, you can use the @ThresholdMB parameter to look at smaller tables (it defaults to 250MB) and the @Mode parameter to run more in-depth tests (it defaults to 0; use 4 for deeper analysis).
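For example, a deeper pass might look like this – the StackOverflow database name is just a placeholder for whatever database you’re tuning:

```sql
EXEC dbo.sp_BlitzIndex
    @DatabaseName = 'StackOverflow', -- placeholder: point this at your database
    @Mode         = 4,               -- deeper diagnostics than the default Mode 0
    @ThresholdMB  = 0;               -- analyze every table, not just those over 250MB
```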
Deduping and Eliminating only has to be done once per database, and that in-depth spring cleaning clears the way to add more indexes later. The Add step is ongoing, since you’ll be tuning indexes over time, but if you do a thorough job of Deduping and Eliminating up front, you won’t have to repeat it. Just make sure that as you’re adding additional indexes, you deduplicate as you go – don’t create more problems for yourself by adding Clippy’s recommended indexes that are just a subset of indexes you already have, as in the sketch below.
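Here’s a hedged illustration of that subset trap, using hypothetical table and index names:

```sql
-- An index you already have:
CREATE INDEX IX_Location_Reputation ON dbo.Users (Location, Reputation);

-- A missing-index suggestion like this one is redundant: its key column
-- is a left-based subset of the wider index above, so creating it just
-- adds write and storage overhead without helping reads:
CREATE INDEX IX_Location ON dbo.Users (Location);
```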
Heaps Go Last for a Reason
If you’ve done a lot of index tuning, you might think you need to start by putting a clustered index on every table. Heck, even if you haven’t, you’ve probably read our posts about heap problems by now.

However, setting up clustered indexes on existing tables can be a little invasive and controversial, depending on the folks who built and maintain the database design. They might say things like:
“Nothing about a row makes it unique.”
“The combination of fields needed to define uniqueness is just too wide.”
“We want to add an identity field to the table, but it’ll require reworking a whole bunch of things about our application.”
“We have a uniqueidentifier on every row, but clustering on a GUID hurts my feelings.”
No worries – hold the heap problems until the end of your tuning process. Make big gains first by removing storage and blocking problems with the D.E. parts, then make SELECT queries go faster with the A.T. parts. Build credibility with the rest of your team, then finish the job with the rest of the D.E.A.T.H. Method.
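When you do get to that last step, a quick inventory of the heaps you’ll need to revisit is easy to pull from the system catalog. Here’s a minimal sketch:

```sql
-- Tables without a clustered index show up in sys.indexes as type 0 (heap):
SELECT s.name AS schema_name,
       t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.type = 0
ORDER BY s.name, t.name;
```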
Obviously, there’s a lot more work in the D.E.A.T.H. Method than I can describe in a post – even just picking the right index to design for a query can take quite a while. That’s why my Mastering Index Tuning class is 3 days long.