When Microsoft first talked about Query Store, I was wildly excited. It’s basically a black box flight data recorder for SQL Server, tracking which queries run over time, how much work they do, and how their execution plans change.
And then it came out, and…hardly anybody used it.
We thought it was a good idea, so we took a few measures to encourage adoption:
- I added a recommendation in sp_Blitz to turn it on
- Erik wrote sp_BlitzQueryStore to make Query Store data easier to understand and use
- SQL ConstantCare encouraged folks to turn it on where it made sense (where they had Cumulative Updates applied to fix QS issues)
And even amongst the population of SQL ConstantCare users, where we actively pester you to turn it on, adoption is still low. Amongst eligible databases belonging to customers willing to share their data for collective analysis:
- 1,434 databases have it turned on (18%)
- 6,635 databases don’t (82%)
So the question became – why?
We asked, and here’s what customers told us.
We emailed folks who hadn’t turned Query Store on for any of their databases, and here are some of the answers:
I just haven’t had the time to understand what it does, and best practice for implementation, any considerations for this specific server, etc. I fear things I don’t completely understand.
Companies want paranoid people in charge of the databases, so this makes perfect sense. You don’t wanna go flipping switches unless you’re confident they’re going to make things better, not worse.
With each of these answers, I wanted to think about what would change that answer. Here, Microsoft needs to figure out the best practices for implementation based on their experience hosting Azure SQL DB across a lot of customer workloads, and then build those into the SQL Server product as the default settings for Query Store. (Right now, you can kinda feel during QS setup that the defaults are a shot in the dark.)
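If you do want to flip the switch yourself while we wait on better defaults, here's a hedged starting point. The database name is a placeholder, and the numbers are common community starting values rather than Microsoft-blessed best practices; the syntax works on SQL Server 2016 and later:

```sql
-- Placeholder database name; adjust the numbers for your workload.
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [YourDatabase] SET QUERY_STORE
    (OPERATION_MODE = READ_WRITE,
     MAX_STORAGE_SIZE_MB = 1024,   -- cap the space Query Store can use
     CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
     QUERY_CAPTURE_MODE = AUTO,    -- skip insignificant ad-hoc queries
     INTERVAL_LENGTH_MINUTES = 60);
```

MAX_STORAGE_SIZE_MB caps how much space Query Store can eat inside the user database, and QUERY_CAPTURE_MODE = AUTO tells it not to capture every trivial one-off query – a sensible hedge on busy ad-hoc workloads.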
Basically, just too busy. I’m excited about it, but haven’t had a chance to really look into it too much, and I thought I saw something early on about a possible performance hit or memory usage with it turned on.
Several answers came in with that theme. There have been a few Query Store sessions at GroupBy that mentioned the overhead, and it’s measurable. Folks need more confidence in the expected overhead, plus guidance on whether their own system can safely handle that additional overhead. (This is where our work on telling you if your server is bored comes in handy.)
We have a monitoring tool (SQL Sentry/SentryOne) which allows us to see some of the plans, the database/application is not a critical one (not a high number of users/load), and we have not had a chance to really look into what’s needed from a setup and maintenance perspective. When we migrate our critical application DBs to newer versions, we will most likely look at turning it on for them.
Makes total sense. Query performance tracking isn’t a new need – it’s been around forever – and a lot of folks are using tools that already relieve that pain point.
To change this answer, Query Store would need to be good enough at the “free” price point to beat SentryOne at its paid price point, and that definitely isn’t the case today.
We’re using SQL Server to host our ERP software from a third party vendor. I wasn’t sure what kind of benefit Query Store would give us.
I never thought about this before, but Query Store could be an excellent black box for ISVs to use with their customers. However, I don’t see that happening anytime soon – most ISVs don’t have T-SQL tuning staff on hand to leverage the plan cache, let alone build new tooling to analyze Query Store.
How that influences our development
We’re starting to work on query analysis in SQL ConstantCare®: telling you which queries are causing your performance issues, and how to go about fixing those queries. There are a few different ways we could collect the data:
- Daily sample of the plan cache – which is really easy to do with low overhead, but it has huge blind spots, especially on servers under heavy memory pressure or apps that build dynamic SQL strings
- Frequent samples of more sources – get running queries and the plan cache on a regular basis, like every X minutes – which gets much harder to do, but has fewer blind spots
- Get it from Query Store – have customers turn on QS, leverage the code Microsoft has already built – which is easy, but only works on 2016+, and even then, people aren’t using it. Could overcome some of that with the open source OpenQueryStore, which works on 2008+, but there are costs and risks involved with building our products on top of that.
But these numbers make the choice clear:
- 72% of customer databases can’t use Query Store (2014 & prior)
- 23% are eligible, but would take training/convincing/testing to use it
- 5% have Query Store enabled
As a tiny company, we gotta pick our battles. Right now, it makes more sense to focus on traditional plan cache analysis – something that will pay off instantly for every single customer we have – rather than try to roll the Query Store evangelism boulder uphill. (I tell you what, though: if I were a DBA again tomorrow, I’d have Query Store on for my databases.)
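For the curious, the “daily sample of the plan cache” option is conceptually just a scheduled query against the plan cache DMVs. Here’s a rough sketch (not our actual collection code) using sys.dm_exec_query_stats – and it illustrates the blind spots mentioned above, since a query’s stats vanish whenever its plan is evicted from cache, and each distinct dynamic string gets its own cached plan:

```sql
-- Top 20 queries by total CPU since their plans entered the cache.
SELECT TOP 20
    qs.execution_count,
    qs.total_worker_time AS total_cpu_microseconds,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;
```

Query Store’s whole value proposition is that it persists this kind of data to disk over time, so you don’t lose history every time a plan falls out of cache or the server restarts.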