We’ve all found those databases. They’re on your production SQL Server instance, but nobody seems to know if they’re being used, or what’s using them.
You could run some sort of trace, but you’d probably drag down performance on your SQL Server. What you need is a quicker way to find out if the database is in use.
Is a Login Using the Database?
You want to run sp_whoisactive with a couple of special parameters, so it shows you sessions that are connected even if they aren’t running, and limits the output to sessions using that database:
exec sp_whoisactive @show_sleeping_spids=2, @filter_type='database', @filter='AdventureWorks2012';
GO
- Pros: Super fast, super easy. Lets you see the host name and the last query they ran.
- Cons: Whatever’s using the database might not be connected at the moment, or might be using a different database context.
- Verdict: A good first step.
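If sp_whoisactive isn't installed on the instance, you can get a rough equivalent from the sessions DMV directly. This is just a sketch, and it assumes SQL Server 2012 or later, where sys.dm_exec_sessions exposes a database_id column:

```sql
-- Who's connected with AdventureWorks2012 as their database context?
-- (Sketch; sys.dm_exec_sessions.database_id requires SQL Server 2012+.)
SELECT s.session_id,
    s.host_name,
    s.program_name,
    s.login_name,
    s.status,
    s.last_request_end_time
FROM sys.dm_exec_sessions AS s
WHERE s.database_id = DB_ID('AdventureWorks2012');
```

Like sp_whoisactive, this only shows sessions whose *current* context is that database, so it shares the same blind spot: callers in a different database context won't show up.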
Are Reads and Writes Happening on Tables in the Database?
Sure, stored procedures or functions might be in use that don't touch any local tables, but simply checking whether data access is happening tells you a lot, fast.
My favorite way to do this is with sp_BlitzIndex®. Run it with the @mode=2 parameter, copy and paste the data into a spreadsheet, and look at the “Index Usage” column. You can also see the timestamps of most recent seeks, scans, and writes.
exec sp_BlitzIndex @DatabaseName='AdventureWorks2012', @mode=2;
GO
- Pros: Very lightweight, and lets you know which tables are in use.
- Cons: If you have fulltext indexes, you may see reads from the fulltext indexing service on those tables (even if users aren’t querying the fulltext index).
- Verdict: Very useful information. A good second step.
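If you can't install sp_BlitzIndex on the server, you can query the underlying DMV it reads from yourself. A sketch (note that these counters reset when the instance restarts, so check uptime before trusting zeros):

```sql
-- Raw index usage for one database: seeks/scans/lookups are reads,
-- updates are writes. Counters reset on instance restart.
SELECT OBJECT_NAME(ius.object_id, ius.database_id) AS table_name,
    ius.index_id,
    ius.user_seeks,
    ius.user_scans,
    ius.user_lookups,
    ius.user_updates,
    ius.last_user_seek,
    ius.last_user_scan,
    ius.last_user_update
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.database_id = DB_ID('AdventureWorks2012')
ORDER BY ius.user_updates DESC;
```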
Is the Transaction Counter Going Up for the Database?
SQL Server keeps a cumulative count of transactions per database, and you can check it to see if it's currently going up:
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name like 'Transactions/sec%'
  and instance_name like 'AdventureWorks2012%';
GO
- Pros: Super lightweight, no extra tools needed.
- Cons: Some activity occurs just from SQL Server starting up, and you can't tell maintenance activity apart from user transactions. (Full and log backups will increment the counter.) Also, strangely enough, plain SELECT statements outside of an explicitly defined transaction don't cause this counter to go up. (I know, weird, right???)
- How to use it: Take a couple of samples and see if the counter is going up dramatically. If it is, something's fairly busy in there. I wouldn't try to get too scientific with this one, though; it's just too limited.
- Verdict: Kind of clunky, but could be useful along with the other methods here.
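The "take a couple of samples" step can be scripted so you don't have to eyeball two result sets. A minimal sketch (the 10-second delay and the database name are arbitrary; adjust both to taste):

```sql
-- Sample the cumulative transaction counter twice and show the delta.
DECLARE @first bigint, @second bigint;

SELECT @first = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
  AND instance_name = 'AdventureWorks2012';

WAITFOR DELAY '00:00:10';

SELECT @second = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
  AND instance_name = 'AdventureWorks2012';

SELECT @second - @first AS transactions_in_sample;
GO
```

A nonzero delta means transactions happened during the sample window, but remember the caveats above: backups count, and plain SELECTs don't.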
Are There User Execution Plans in the Cache for the Database?
You can also ask the execution plan cache if queries have been running against the database. This query takes advantage of the dm_exec_text_query_plan DMV (as recommended here by the very clever Grant Fritchey):
SELECT SUBSTRING(tx.[text], (qs.statement_start_offset / 2) + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN DATALENGTH(tx.[text])
              ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS QueryText,
    CASE WHEN pl.query_plan LIKE '%<MissingIndexes>%' THEN 1 ELSE 0 END AS [Missing Indexes?],
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    qs.total_worker_time AS total_cpu_time,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    qs.total_logical_reads,
    qs.creation_time AS [plan creation time],
    qs.last_execution_time AS [last execution time],
    CAST(pl.query_plan AS XML) AS sqlplan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle, qs.statement_start_offset, qs.statement_end_offset) AS pl
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS tx
WHERE pl.query_plan LIKE '%[[]AdventureWorks2012]%' /* [[] escapes the literal [ so LIKE doesn't treat it as a character class */
ORDER BY qs.execution_count DESC
OPTION (RECOMPILE);
GO
- Pros: Gives you insight into what is using the database when it finds results.
- Cons: Doesn’t catch anything that’s not in the cache due to RECOMPILE hints or memory pressure over time. You’ll probably have to slog through some rows of system procedures.
- Verdict: Not perfect, but very information-rich if something does turn out to be using the database (and a whole lot quicker and more lightweight than tracing).
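If slogging through individual plans is too much, a rougher first cut is just to count cached plans per database context. A sketch (keep in mind this reflects the session's database context when the plan was compiled, not every database the query touched):

```sql
-- How many cached plans exist per database context?
SELECT DB_NAME(CONVERT(int, pa.value)) AS database_name,
    COUNT(*) AS cached_plans
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE pa.attribute = 'dbid'
GROUP BY CONVERT(int, pa.value)
ORDER BY COUNT(*) DESC;
```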
I know there’s a way I haven’t THOUGHT of.
Do you have a lightweight technique you use for this which I haven’t mentioned here? Let me know in the comments!