Your SQL Server Needs Spring Cleaning, Too.
First things first: pick up that filthy keyboard, take it over to the trash can, turn it upside down, and give it a good, vigorous shake.
Next, go to your biggest SQL Server and run:
```sql
EXEC sp_BlitzIndex
    @GetAllDatabases = 1,
    @Mode = 2,
    @SortOrder = 'rows';
```
This will give you an inventory of all of the objects in your databases, ordered from biggest to smallest. Here’s what to look for:
Ask simple questions about the 10 biggest tables. Are they supposed to be this large, and contain this much history? Do we have history-deletion jobs that are supposed to be running, but aren’t? I swear, this happens at almost every client I work with! By simply cleaning out the unused, unwanted history, we can make our maintenance jobs like DBCC CHECKDB run faster.
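If you do find a history table that should have been getting trimmed, a batched delete keeps the transaction log from ballooning while you catch up. This is just a sketch — `dbo.AuditHistory`, `CreatedDate`, and the two-year retention window are all placeholders for whatever your retention policy actually says:

```sql
/* Hypothetical example: dbo.AuditHistory, CreatedDate, and the 2-year
   cutoff are placeholders. Delete in small batches so each transaction
   stays short and the log can be reused between batches. */
DECLARE @RowsDeleted INT = 1;
WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.AuditHistory
    WHERE CreatedDate < DATEADD(YEAR, -2, GETDATE());

    SET @RowsDeleted = @@ROWCOUNT;
END;
```

Run it during a quiet window the first time — if years of history have piled up, that loop may run for a while.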
Look for objects with names like Backup_ or Temp_ or ToBeDeleted. We’ve all done a quick backup by copying data into another table, and then accidentally forgotten to go back and delete it afterwards. Check the Reads column to make sure it’s 0, and then rename the objects with a prefix like __To_Be_Dropped_On_20250425. Over the coming days, make sure you’ve got a good copy of the backup so that you can undelete those objects if necessary, and then on April 25, do the needful. (And this time, actually drop them – set yourself a calendar reminder.) Again, the goal here is faster maintenance jobs.
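To double-check the Reads number before renaming, you can query `sys.dm_db_index_usage_stats` yourself, then use `sp_rename`. The table name below is a made-up example — substitute your own suspect object:

```sql
/* dbo.Backup_Customers is a placeholder. Confirm nothing has read it
   since the last service restart (the DMV resets on restart). */
SELECT COALESCE(SUM(us.user_seeks + us.user_scans + us.user_lookups), 0) AS Reads
FROM sys.dm_db_index_usage_stats AS us
WHERE us.database_id = DB_ID()
  AND us.object_id = OBJECT_ID('dbo.Backup_Customers');

/* If Reads = 0, rename it with the to-be-dropped prefix: */
EXEC sp_rename 'dbo.Backup_Customers',
               '__To_Be_Dropped_On_20250425_Backup_Customers';
```

Remember the DMV only covers activity since the last restart, so check your server's uptime before trusting a zero.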
Next, look for indexes with Reads = 0, and Writes > 0. These are indexes that are slowing down your inserts, updates, and deletes, and they don’t have the payoff of making select queries go faster. Script out their definitions in an undo script so you can recreate ’em if necessary, and then drop ’em. (I’m not a fan of disabling them because I’ve seen too many homemade index maintenance scripts that simply rebuild indexes without checking the disabled status, and when you rebuild a disabled index, it goes back into place.)
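The undo-script-then-drop dance might look like this — the index and table names here are hypothetical stand-ins for whatever sp_BlitzIndex flags:

```sql
/* Undo script -- save this somewhere safe BEFORE dropping, so you can
   recreate the index if a query regresses. Names are placeholders. */
/*
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate);
*/

/* Then drop the unused index: */
DROP INDEX IX_Orders_CustomerID ON dbo.Orders;
```

Keep the undo scripts in source control or a ticket, not just a query window — you'll want them weeks later, not minutes later.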
Finally, look for tables with Reads = 0 and Writes = 0. Do we have old versions of tables lying around that used to be part of the production app, but no longer are? Do we still need those?
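One way to sanity-check those dead tables is to look for user tables with no recorded reads *or* writes since the last restart. A rough sketch, assuming you run it in the database you're investigating:

```sql
/* User tables with zero recorded reads and writes since the last
   service restart. sys.dm_db_index_usage_stats resets on restart,
   so check server uptime before acting on this list. */
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
       OBJECT_NAME(i.object_id)        AS TableName
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id  = i.object_id
      AND us.index_id   = i.index_id
      AND us.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
GROUP BY i.object_id
HAVING SUM(COALESCE(us.user_seeks + us.user_scans
         + us.user_lookups + us.user_updates, 0)) = 0;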
This work only takes an hour or two, and it reduces dead weight that’s lengthening your backups, DBCC CHECKDB, statistics updates, and more.
Hi! I’m Brent Ozar.
I make Microsoft SQL Server go faster. I love teaching, travel, cars, and laughing. I’m based out of Las Vegas. He/him. I teach SQL Server training classes, or if you haven’t got time for the pain, I’m available for consulting too.
5 Comments
Hey, Beavis. Brent said “Do the needful”. That was cool.
One of the other things I check for is OFFLINE databases that were deleted without anyone cleaning up the underlying files. (Note: before deleting data files, check whether they belong to a detached database that still exists.)
Instead of deleting databases while offline, I bring them online in single user mode and delete them but not everyone does. I inherited several terabytes of orphaned datafiles on an instance due to a vendor “feature” that allows users to create “simulation databases”. Don’t ask. I still have nightmares.
One of the scripts I adapted is here:
http://www.sqlservercentral.com/scripts/Administration/134742/
Sorry, the previous page I cited is a 404. It’s archived here:
https://web.archive.org/web/20160715012700/www.sqlservercentral.com/scripts/Administration/134742/
Check this out instead.
https://www.sqlservercentral.com/scripts/searching-for-orphaned-db-files-on-the-sql-server
The dbatools.io PowerShell project has a command to check for orphaned files:
Find-DbaOrphanedFile
Regarding indexes with zero reads: while I completely agree they should be dropped, there’s a slight edge case to be aware of. It’s possible that the statistic associated with that index is being used in a query plan, even though the index itself isn’t touched. Dropping the index will drop the statistic. Coincidentally, I’ve just published a blog post about this and would welcome any feedback. https://www.koder.ly/2025/04/sql-server-should-i-drop-unused-indexes/
Sure, but I would argue that the takeaway shouldn’t be “don’t drop your indexes” – it should be to update statistics with fullscan wherever possible, and do it less frequently. That’d solve the problem there nicely.