Your app is happily humming along on SQL Server Standard Edition. Things are quiet – too quiet. You’re wondering what you’re missing, and whether you should be on SQL Server Enterprise Edition. Here are 3 warning signs to watch out for.
Developers say, “Any downtime is unacceptable.”
Sure, everybody loves to say that before they realize the cost involved, but some databases really are mission-critical. If your devs say things like, “We need to add an index, but we can’t because that process locks the table for too long,” then it’s time to start leveling up your infrastructure.
The first line of defense is to schedule maintenance tasks after hours and on weekends, and for most businesses, that’s enough. If your workload eventually becomes 24/7, the next step is to amp up your hardware so maintenance tasks finish in less time. After that, the next resort is to start peeling back the level of index maintenance – after all, you probably don’t need to be rebuilding indexes as often as you are.
But when all else fails, Enterprise Edition adds the ability to perform a lot of index operations (mostly) online. If you’re only going to Enterprise for this, though, think of it as a $5,000 per CPU core tax in order to do operations (mostly) online. (Standard is $2,000 per core, Enterprise is $7,000 per core.)
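As a rough sketch of what that buys you (table and index names here are made up for illustration), the same operations that lock the table on Standard Edition can run online on Enterprise:

```sql
-- Enterprise Edition only: rebuild an index without holding
-- long-term blocking locks on the table.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (ONLINE = ON);

-- New indexes can be created online, too:
CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)
    WITH (ONLINE = ON);
```

Note the "(mostly)" caveat: even online operations take brief locks at the start and end of the process, so they're not completely invisible to your workload.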
You need more than 24 CPU cores.
Standard Edition handles a whopping 24 CPU cores – and if you’re on bare metal, that’s 24 physical cores, 48 logical with hyperthreading on. That is a lot of CPU power for a database server – after all, the database server isn’t supposed to be the place where you’re doing CPU-intensive operations. That’s what app servers are for.
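To check how many cores SQL Server can see versus how many it’s actually allowed to use, you can query the DMVs:

```sql
-- cpu_count: logical CPUs the OS presents to SQL Server.
-- scheduler_count: schedulers SQL Server actually put online.
-- On Standard Edition with more cores than the license allows,
-- scheduler_count will come in lower than cpu_count.
SELECT cpu_count, scheduler_count
FROM sys.dm_os_sys_info;
```

If those two numbers don’t match, you’re paying for hardware your edition can’t use.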
As you approach 50-60% CPU usage on a 24-core box, it’s time to look at which queries are using the most CPU, and figure out how to tune them. You can do a lot of query & index tuning for less cost than an Enterprise Edition license, for sure. Going from 24 cores of Standard to 32 cores of Enterprise is a $176,000 jump in licensing costs alone! Before you even consider that, you owe it to yourself to run sp_BlitzCache @SortOrder = 'cpu' and start questioning what you’re doing on the SQL Server.
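That’s a one-liner from the open source First Responder Kit – it reads the plan cache and ranks queries by the resource you ask for:

```sql
-- Show the queries burning the most CPU, per the plan cache.
-- @Top limits the result set (defaults are fine to start with).
EXEC sp_BlitzCache @SortOrder = 'cpu', @Top = 10;
```

Start at the top of that list and work down – the first two or three queries usually account for a surprising share of total CPU.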
I’ve had tons of SQL Critical Care® engagements where folks were considering going to Enterprise Edition, only for us to find obvious easy wins in the most CPU-intensive queries. For example, the client situation that prompted this very blog post: the top CPU-intensive queries were all focused on a small set of configuration tables.
- Short-term fix: index the tables using computed columns to make CPU-intensive queries suddenly faster and sargable
- Mid-term fix: move those tables to a separate 24-core Standard Edition SQL Server dedicated exclusively to that task, with the fastest CPU cores available
- Long-term fix: implement a caching layer in the application to avoid re-querying data that hasn’t changed recently
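As a hypothetical sketch of that short-term fix (the table, column, and predicate are invented for illustration): if queries filter on an expression, SQL Server can’t seek on a plain index of the underlying column, but a computed column can make the predicate sargable:

```sql
-- Queries were filtering on an expression like:
--   WHERE UPPER(ConfigKey) = 'TIMEOUT'
-- which forces a scan. Add a computed column for the expression:
ALTER TABLE dbo.ConfigSettings
    ADD ConfigKeyUpper AS UPPER(ConfigKey);

-- ...and index it so the filter becomes a seek:
CREATE INDEX IX_ConfigSettings_ConfigKeyUpper
    ON dbo.ConfigSettings (ConfigKeyUpper);
```

Conveniently, the optimizer can match the original expression to the computed column, so existing queries can often benefit without being rewritten.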
However, if we hadn’t been able to do that work quickly enough, we’d have had to upgrade the main SQL Server to Enterprise Edition and add more cores.
You need more than 1 Availability Group replica.
Your business comes up with RPO and RTO goals that the common failover clustered instance (FCI) + log shipping solution won’t cover, and you decide that the best way to meet those goals is Availability Groups. In that scenario, you’re probably going to want at least 1 replica in your primary site, and another replica in your disaster recovery site.
Enterprise Edition allows for more than 1 replica, and honestly, I think that’s a good compromise on Microsoft’s part. You can go a really, really long way with FCI + log shipping on Standard Edition. If the business wants near-zero data loss across multiple data centers, well, that’s most likely an Enterprise-grade application.

Those 3 don’t happen often, and they’re completely fair.
I think the above 3 limitations are fair delineators between the wildly-expensive Enterprise Edition and the still-pretty-expensive Standard Edition. Microsoft needs profitable products like SQL Server so they can make speculative investments in stuff like OpenAI.
However, most of the time when I’m working with a company on Enterprise Edition, they didn’t really need to buy Enterprise for any of those reasons.
It’s just that they wanted their SQL Server to handle more memory than a $2,700 laptop.
I think it’s shameful that Microsoft still gatekeeps decent memory behind Enterprise Edition, while simultaneously adding memory-hungry features like In-Memory OLTP, columnstore, and batch mode. With every release, SQL Server needs more memory – but Microsoft still caps Standard at just 128GB.
I’ve said it before and I’ll say it again: SQL Server’s biggest licensing problem isn’t that it’s too expensive – it’s that good hardware has gotten too cheap. If the next release of SQL Server continues to add more memory-hungry features while crippling Standard at 128GB, then the SQL Server marketing team is just digging its own grave.


7 Comments
What about a requirement for 512gb of memory?
I agree about the memory. For us it’s this limitation which has been a full stop when considering moving some of our servers from Enterprise to Standard.
Agree 100% — “If the next release of SQL Server continues to add more memory-hungry features while crippling Standard at 128GB, then the SQL Server marketing team is just digging its own grave.”
We made the move to Enterprise primarily for the increased memory. Our developers wanted to make use of In-Memory OLTP, and we have moderately sized (3TB) databases.
“I think it’s shameful that Microsoft still gatekeeps decent memory behind Enterprise Edition…”
Could not agree more, Brent.
You allude to this a bit when you say “start questioning what you’re doing on the SQL Server”. You gotta get creative and start asking if putting everything into a SQL Server is needed in 2024. You mentioned implementing a caching tier…that’s a good start.
“No downtime” and the need for more AG replicas may be better handled with SQL-on-Kubernetes and splitting up tables, so there’s a database that is _always available_ versus tables that can be handled as read-only – then looking at things like CQRS patterns, async queueing, and eventing patterns. These things aren’t easy to do, but they make for better long-term resilience.
There are still way too many folks that stick everything in the SQL Server b/c it’s all they have or all they know. The other end of the spectrum are folks that insist on using NoSQL solutions b/c they are cool without understanding the limitations.
My point is, maybe when you find yourself considering EE, you first need to consider doing a tune-up (like Brent suggested), then look at refactoring the larger app architecture. This gives you the opportunity to get ready for the future (cloud migrations, AI integrations, etc.) that may not be amenable to a big monolithic SQL Server that requires EE. Yes, this costs time and money that might be easier to throw at EE in the short term.
I hope you’ll indulge me, but this topic is super familiar to me. Personally, here’s where my company’s at:
> Any downtime is unacceptable
> We need more than 24 cores
> We use more than one availability group
and yet, we’re _still_ doing a strong push to use Standard Edition.
We’ve long since given up index rebuilds as a maintenance task.
For brand new indexes we want to introduce to big tables, we rebuild the table using a blue-green kind of swapping technique. It is indeed a pain.
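A rough sketch of that kind of blue-green swap, under assumed names (the real process also has to handle foreign keys, permissions, and replication, which is part of the pain):

```sql
-- 1. Build an empty copy of the table with the new index in place.
SELECT * INTO dbo.Orders_New FROM dbo.Orders WHERE 1 = 0;
CREATE INDEX IX_Orders_OrderDate ON dbo.Orders_New (OrderDate);

-- 2. Backfill in batches, then catch up on rows changed meanwhile
--    (via a trigger, change tracking, etc. -- the painful part).

-- 3. Swap the names inside a short transaction.
BEGIN TRAN;
    EXEC sp_rename 'dbo.Orders', 'Orders_Old';
    EXEC sp_rename 'dbo.Orders_New', 'Orders';
COMMIT;
```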
We’ve always made use of caching and are always on the lookout for CPU-intensive queries. We currently shard by customer, and only some of our big customers need more than 24 cores. The rest can get away with Standard!
We use more than one availability group, but we’re hoping to avoid that requirement too by removing the need for extra replicas (a lot of work). As we make progress, we unlock more servers that can be run on Standard.
We’re also considering bigger projects.
In related news, I’m giving a talk in February at Cleveland Data Rocks called “Should we switch to Postgres”
‘Developers say, “Any downtime is unacceptable.”’
I’d be more inclined to agree with ‘Product owners/Business stakeholders say, “Any downtime is unacceptable.”’