A few weeks ago, Microsoft announced that SQL Server 2012 will no longer be licensed by the CPU socket, and will instead be licensed by the core. You can read more at Microsoft’s recap or Denny Cherry’s analysis.
After contemplating the change and discussing it with companies, I’ve come to think of it as a few related announcements.
SQL Server is getting more expensive overall.
This shouldn’t come as a surprise to anybody: good things get more expensive over time, and crappy things get cheaper. This change simply doesn’t affect me because I’ve never paid a single penny for SQL Server myself anyway, and I don’t plan to start now. If I had to pay my own money for a database platform, I’d have switched to PostgreSQL long ago.
When I talked to my clients about this change, they all had one of two reactions:
- “OMG, THIS IS UNBELIEVABLE!” These people were more worried about their individual careers than their budgets: DBA managers wanted to know if SQL Server would start waning in popularity, thereby reducing the value of their skills. (If the high salaries of Oracle DBAs are anything to go by, bring on the licensing price increases.)
- “No big deal, but no new 2012 projects next year.” These people understood that they were already licensed for their existing servers anyway, and only their ongoing maintenance fees would be impacted. However, they immediately crossed Availability Groups off their list of 2012 projects, and that really sucks for me. I’d been so excited about the ability to scale out with multiple read-only replicas, but the pricing just makes this a no-go for most of my clients. Between the licensing changes and the traditional hesitation to deploy before Service Pack 1, 2012 is dead in the water for them.
Nobody decided to throw SQL Server out of the shop altogether, but some of them did start asking tough questions about their future projects. It’s really hard to hire a good production SQL Server DBA right now (email us your resume if you’re looking in the Chicago, Los Angeles, or Portland areas, but no remote workers), so many of our clients are running understaffed. One client said to me, “As long as I’m relying on outsiders for my database administration, what difference does it make whether it’s MSSQL, Oracle, DB2, Postgres, or the cloud?”
SQL Server used to be seen as the middle ground between expensive-but-awesome Oracle and free-but-limited open source. Those days are gone – SQL Server’s pricing is higher, and the open source platforms have gotten pretty darned good.
Limits kick in quickly on Standard Edition.
Standard Edition’s limits haven’t really changed significantly – but today’s hardware has, and Standard Edition isn’t keeping up. If you’re struggling with application performance on Standard and you survive by throwing hardware at the problem, your options run out once you hit 16 cores and 64GB of memory. At that point, you have to throw hardware and licensing at the problem by upgrading to Enterprise Edition.
I don’t think 16 CPU cores is really all that limiting. CPU-intensive SQL Server queries tend to be the easiest ones for me to tune, and I’ve always argued that database servers aren’t app servers anyway. The core-based licensing change just gives me more ammo to tell developers not to do string processing in the database server.
While 10-core Xeon CPUs are already available, they’re not typically deployed in 2-socket configurations. You can technically buy a 4-socket box like the HP DL580 and populate it with two 10-core CPUs, but that config just doesn’t make sense for Standard Edition due to the high cost. Looking at Intel’s tick/tock roadmap, the next couple of Xeon families are still slated to land in the 6-to-10-core range, so I don’t think the 16-core limitation is going to have a dramatic impact in 2012/2013.
The 64GB memory limitation, on the other hand, is frustratingly small given today’s memory prices. 32GB of server memory runs around $1,000, and memory can hide a lot of sins. Standard Edition just doesn’t let you hide sins – you’re forced to spend manpower on tuning, and unfortunately, that’s not an option with third-party applications. I do a lot of work with independent software vendors (ISVs), and they’re frustrated that their customers can’t just buy $1,000 worth of memory to get awesome performance quickly.
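If you’re wondering where your own servers stand against those ceilings, a quick look at sys.dm_os_sys_info will tell you. This is just a sketch: the physical_memory_kb column is the SQL Server 2012 name (earlier builds call it physical_memory_in_bytes), and the socket count here is only an approximation.

```sql
-- Rough check of this instance's hardware against Standard Edition's
-- ceilings (16 cores / 64GB of memory), using SQL Server 2012's
-- version of sys.dm_os_sys_info.
-- Note: cpu_count is LOGICAL processors, so with hyperthreading it
-- will read higher than the physical core count that licensing uses.
SELECT  cpu_count                        AS logical_cpus,
        hyperthread_ratio,
        cpu_count / hyperthread_ratio    AS approx_sockets,
        physical_memory_kb / 1024 / 1024 AS physical_memory_gb
FROM    sys.dm_os_sys_info;
```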
There’s an opening for DBAs who love performance tuning.
It’s just you and me here, so let’s be honest: I make money when companies are in pain, and SQL Server’s licensing changes will inflict some pain. Companies can’t just throw more CPU or memory at the problem anymore without writing a big check to Microsoft for additional licensing. As a consultant, I can say, “I’ll fix that problem for less money than Enterprise Edition costs, let alone the cost of a server with more CPU sockets.”
If Microsoft had raised the 64GB memory limit, companies could have kept masking problems longer by throwing memory at them. It didn’t, so I win.
The bummer for me is that I just can’t make the math work for scaling out with Availability Groups now. I was really, really hoping for a lower-tier license cost for the read-only replicas (after all, they only support a subset of the work) but the cost to run a 5-node Availability Group is staggering. Even if we use just 2-socket, 6-core servers across the board, that’s $412,440 in Enterprise Edition licensing – simply unthinkable, even if we throw in discounts. It kills me, because I’m a huge fan of this feature, but right now companies have to be under tremendous pain in order to write a check that large to scale one application. It’s just easier to tune SQL Server, and that’s where I come in.
It’s hard to be upset about that.
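For the curious, here’s roughly where that $412,440 figure comes from. This is back-of-the-napkin math that assumes the commonly quoted SQL Server 2012 Enterprise Edition list price of about $6,874 per core; your actual quote from Microsoft will differ.

```sql
-- Back-of-the-napkin licensing math for a 5-node Availability Group
-- on 2-socket, 6-core servers, all licensed with Enterprise Edition.
-- The $6,874 per-core figure is the widely quoted list price, not a quote.
DECLARE @servers        int   = 5,
        @sockets        int   = 2,
        @cores_per_cpu  int   = 6,
        @price_per_core money = 6874;

SELECT  @servers * @sockets * @cores_per_cpu                   AS total_cores,  -- 60
        @servers * @sockets * @cores_per_cpu * @price_per_core AS list_price;   -- $412,440
```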