Every release lately, Microsoft has been turning the screws on Standard Edition users. We get less CPU power, less memory, and few (if any) new features.
According to Microsoft, if you want to use more than $500 worth of memory in your server, you have to step up to Enterprise Edition. Seriously? Standard Edition licensing costs about $2,000 per CPU core, but it
can only access 64GB of memory? That’s ridiculous. UPDATE: Thanks to complaints like this, Microsoft has raised the Standard Edition limit to 128GB!
SQL Server 2014 New Features that Aren’t In Standard Edition
Take just a quick glance at the SQL Server 2014 edition feature grid and you might be shocked at what Standard Edition doesn’t allow:
- Database snapshots (a huge lifesaver when doing deployments)
- Online reindexing, parallel index operations (wouldn’t you like to use more than one core?)
- Transparent database encryption (because only enterprises store personally identifiable data or sell stuff online, right?)
- Auditing (guess only enterprises need compliance)
- Tons of BI features (because hey, your small business doesn’t have intelligence)
- Any non-deprecated high availability feature (no AlwaysOn Availability Groups – you get database mirroring, but that’s marked for death)
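To make the first two concrete, here's a sketch of what you give up (database, file, and table names here are hypothetical); on Standard Edition, both of these statements fail with an edition-restriction error:

```sql
-- Database snapshots: a point-in-time, read-only copy you can revert to
-- if a deployment goes wrong. Enterprise-only through SQL Server 2014.
CREATE DATABASE SalesDB_Snapshot
    ON (NAME = SalesDB_Data,
        FILENAME = 'D:\Snapshots\SalesDB_Data.ss')
    AS SNAPSHOT OF SalesDB;

-- Online index rebuilds: users keep querying the table while the index
-- is rebuilt. On Standard you have to drop ONLINE = ON and take the locks.
ALTER INDEX ALL ON dbo.Orders
    REBUILD WITH (ONLINE = ON);
```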
Will We Get New Pricing and Licensing by the Release Date?
Every now and then, I hear managers and DBAs react with shock about how limited Standard is, and how much Enterprise Edition costs – $7,000 per CPU core.
Sometimes they even say, “That’s ludicrous! If I was Microsoft, there’s no way I would do it that way. And we’ve got really savvy developers – I bet we could even write a database engine that could do most of what we need.”
Okay, big shot. Time to put your money where your mouth is.
The world is full of open source databases that are really good. You’re not the only ones frustrated with what Microsoft’s done to SQL Server licensing, and there’s vibrant developer communities hard at work building and improving database servers.
What’s that, you say? You’re too busy? You’d rather keep paying support on your current SQL Server, and keep working on incremental performance improvements to your code and indexes?
Yep, that’s what I thought.
Microsoft won’t change its tack on SQL Server licensing until you start leaving. Therefore, I need you to stop using SQL Server so they’ll start making it better. You know, for me.
Download the SQL Server 2014 Trial for Free
If you’d like to play with Hekaton, clustered column store indexes, or the other new features in 2014, now’s your chance. You can download the trial edition for free, but just keep in mind that we have absolutely no idea what features will be included in each edition when the release date comes.
If You’re Stuck on Standard Edition, There’s Hope
You can save a ton of money on Standard Edition, but you’re going to have to be smarter about how you use it. You can’t just throw it into production and hope it performs.
Learn how to be a performance-tuning DBA – our free 6-month DBA Training Plan email course teaches you from backups all the way up to future-proofing your SQL Server apps for the newest 2014 features. Subscribe now.
Get trained on SQL Server performance – our training classes and videos teach you real-world tips and tricks to make your server fly.
Our SQL Critical Care® can ease performance pains even faster. In just 3-4 days, we work with you as a team, walking through your server together, showing you the coolest scripts and tools to rapidly diagnose the root cause of slow queries. Learn more about how we can help with Standard Edition pains.
I’d *love* to see an open source DB that speaks T-SQL. Well, mainly T-SQL… enough to make porting code written for SQL Server fairly painless. How awesome would a compatibility layer strapped over Postgres be? But like you said: who has the time?
This could be the answer you are looking for
I think it is currently still not on par with T-SQL in SQL Server, but eventually, if many join in to contribute, we will have a PostgreSQL that can speak T-SQL.
Hoooo, boy, that was good.
You can port your entire MS SQL Server database to more standard SQL and PL/SQL using Oracle or DB2 conversion tools. From there the move to any other database is a lot easier. MS SQL Server and Sybase are the only ones using T-SQL.
Will – if your application is simple enough to use a tool to convert across database platforms, then you’re in extremely good shape. That’s pretty rare though – most apps use database-specific features.
Seriously? You must not write much PL/SQL or DB2 SQL. I write and speak all three. You let me know how that works out for you. And no, it isn’t easier.
Windward’s autotag can do exactly that. You can see a comparison of SSRS and Autotag here: http://www.windward.net/reporting-comparisons/ssrs-comparison/
Many people are shocked when they have to pay for things. I, for one, steal cars. Frequently. After all, why pay for a Porsche when you can drive it into a dumpster and set it on fire after joyriding around all night?
A lot of people are shocked that you are such an idiot. Happens all the time I’m sure.
You’d be surprised. My idiocy is legendary, so most folks aren’t shocked anymore.
I’m not sure Darwin was aiming that at you… but hey, if the shoe fits…
I just had to chime in. 😀
Today I learned that software is a physical object.
Done and done – I’ve chosen our new reporting platform for these reasons, plus one other major one: horizontal scalability. We have SQL Server in house and use it for our core DB engine, but the whole reporting platform is built off Hadoop/Hive and Elasticsearch. It only took 2 developers, a DBA, and an ops guy that knew Linux 6 months to put that together, and we are on our way to being free from the price and scalability limitations. To answer Joel from this thread as well: Hive speaks T-SQL very well. I literally copied and pasted code from T-SQL to build my ETL process in Hive and only had to change a few things. All in all, the T-SQL portion of my ETL took me less than a week to get working.
Great post as always Brent.
lol “It only took 2 developers a DBA and an ops guy” <—– about $500-750,000/year in human capital vs a tenth to do so with software…
Don’t be so sure about that tenth number. I routinely work with 40-core SQL Servers – that’s $280k in licensing per server for Enterprise Edition – and some of my clients have dozens of those servers. The costs add up very fast, especially when maintenance is 1/3 of that per year, for life.
$125k/ops guy and developer? Where are you working, and do you need help?
I’d say that 4 people for 6 months is more like $160k-$200k. In my mind, $80k for a lot of DBAs/developers/ops guys is normal, though people do go into six figures.
However I’d guess that Pat didn’t mean that this was 4 people full time for 6 months. My guess would be that this would be half time or so, while plenty of other duties were still handled.
@Steve Yes, very much so – we handled many other projects as well.
To put this in perspective as well, we didn’t just port SQL to use these new items – we built a new reporting system and integrated it into an existing system. Many of the uphill battles we faced were legacy code and legacy systems.
We did put a lot of time and human capital into this project. But when I want to move faster, I add simple compute nodes that can be very cheap with no licensing costs, which makes scaling my platform easy and cost-efficient. 🙂
This is just my opinion, though – I still love SQL and use it all the time. You just need to make sure you use the right tool for the job. 🙂
Sometimes companies even cover office space, payroll taxes and benefits… on top of your salary. $250k/year all-in for one employee isn’t unusual.
Thanks Pat! Hopefully I’ll still get to see you around the SQL conferences…
Yup, you will – I can’t leave my favorite SQL community! 🙂 This just means I can help build more communities. 🙂
I’m not used to seeing your blog posts towards the top of Hacker News (#4 atm), awesome!
Oh, great post btw! And yeah… it really hurts for people that don’t want to shell out that kind of money to get “enterprise” features, heh. Who could put it better than “$500 of RAM,” hah! Hope to see you at a SQL event soon. Jeez, should I dust off my Oracle or MySQL skills, hmmm…
Hi Rich! Heh, yeah, and of course it hits HN on a night when I need to go to bed early because I’ve got an onsite client gig in the morning, but I just can’t stop refreshing the comments to see what’s going on.
The $500 of RAM thing kills me. It’s less than what SQLCruise cost. 😉 Hope to see you in Miami Beach again soon!
Interesting post, as always, and I mostly agree. The edition/feature list doesn’t look a lot different than 2012, which makes me think it didn’t affect sales enough. To me, the absence of TDE is an issue, along with memory. The other stuff, online work, snapshots, etc do seem to be more Enterprise stuff.
Of course, I’m more of a scale guy. I’d prefer that cost was based on hardware scale, and not features at all. I’d rather see us license $x per CPU and $y/4GB ram, both at some reasonable levels. Do away with the edition stuff.
However, I do think there are lots of clients out there, probably tons that would get by on 16/32/64GB of RAM if the licensing costs weren’t so high for upgrades.
The irony is that most large enterprise customers, like the one I work with, virtualise 90% of their SQL instances and scale horizontally. Most of the missing features are largely provided by the hypervisors, and 64GB is more than enough per instance. They actually only license the Standard edition of SQL.
I have to say that while the limit of 64GB of RAM for Standard edition is silly, it is just plain criminal for BI edition.
It would be nice to see graduated feature sets, so partitioning with a lower limit on the number of partitions, two node Availability Group rather than the eight for full enterprise etc. That way it would be possible for smaller systems to more easily grow into full enterprise systems eventually as people would be more familiar with the features.
Hacker News is the echo chamber.
They’re both echo chambers, but they’re different kinds of echo chambers. They both do produce software, and they’re both valid audiences. It’s like learning about US politics by going to the Democratic convention and the Republican convention. You’re going to hear a lot of hard-core rhetoric, but you can get a fresh perspective fast.
Anyone take a look at NuoDB? Doesn’t have rich feature set like auditing/encryption, but seems to have the scale out, ANSI compliant stuff…
Where is the “enterprise developer” equivalent of Hacker News?
I dunno. When Hacker News is considered a “reality check,” I fear to know what depths of fantastic insanity you regularly delve.
MS are just milking what they think (maybe correctly) are the locked-in users on MSSQL. I’d say MSSQL is already legacy though. With costs like those, and so many terrific free alternatives, who will specify MSSQL for a new development? Anyone have market share figures for MSSQL today? It would be interesting to check again in a couple of years.
SQL Server and Windows Server are doing just fine. Oracle isn’t cheap nor has it ever been, but I don’t see an out of business sign hanging in front of the building and the same applies to the server group at MS. Serious shops are not going to specify “flavor of the month DB” for projects in nearly all cases. They need staff with experience and a vendor who will still be around next year. They also need a better support agreement than an online users group. Sure there are companies that have gone beyond open source and created their own databases, but I’d caution anyone from thinking that is actually the normal thing. It isn’t.
Startups with ‘no money’ use free databases. I have yet to work for a moneyed enterprise that willingly went along with a free anything. I agree with 2:18 – Microsoft ain’t going anywhere. You must not have priced an Oracle RAC installation lately. Microsoft is, was and remains competitive in the market place.
ooops… I just heard something fell through the window!!! It sounded like MMMVeeeeeePeeee!!!
Joke aside, you should always learn something new, but currently this is a pick (supported by most vendors) that is cheaper than Larry E.’s database.
so… if I understand correctly you want the enterprise edition at the price of the standard edition… 🙂
I agree 😉
well… I’ll push it further: the free edition should contain the enterprise features!!! Why limit that edition too???
we may disagree with the packaging offered, and I agree that some features are questionable – but not the same ones as yours… that’s the problem. My concern is columnstore, data compression, and partitioning, which are not in Standard Edition.
In fact, what is missing is a way to pay for only what you need!
Like paying $100 per feature you want:
you want compression? Pay for that only. You want encryption? Pay for that only, etc…
Let the customer create the package he needs – don’t think for him from a pure marketing point of view. We dislike this!!!!
Brent – If you’re concerned about cost as it relates to functionality, you could always make the lateral move to Oracle. Oh, wait…
They actually started this nonsense with memory limits in R2 – not sure if it really did limit memory. Anyone who bought hardware in the last year with tons of memory is going to be stuck on R2 unless they want to pony up. I am guessing Standard Edition is made for companies with 10 employees or fewer.
The new pricing on Enterprise edition is outrageous. What startup (without millions from VC’s) can afford 80K for EE on a simple 2 CPU, 12 core server? I learned about the pricing, coincidentally, through one of Brent’s consultants who was doing some work for us. I’m going to make the case, from now on, that moving away from SQL Server is the way to go for anyone who has that luxury (understanding that some people are simply stuck). MS has gone bat-sh*t crazy in my opinion. Open source here we come (in the next two years).
Sunil – well, some good news there for startups. Microsoft’s Bizspark program lets you get free Microsoft licensing for the first three years. Here’s more info: http://www.microsoft.com/bizspark/
Thanks. I just put in our application. Regardless of the outcome, I’m not sure we’ll stay with MS products in the future. The pricing just isn’t scalable and seems downright regressive, given that CPUs are generally moving towards more and more cores.
I will be interested to know which open source database supports the same features (columnstore, compression, partitioning, OLAP, tabular, reporting, ETL…) as Microsoft offers, and provides the same level of tools to manage, develop, maintain, etc.
Because as of today, I have not been able to find any open source database which offers everything.
Every time, we have to complete the server with tools created by other companies, which are more complicated to install, don’t always work properly, etc.
And you know what? We have to pay for these features… they are rarely offered out of the box…
At the end of the day you’ll pay, either in hours of work or in tools which simplify the work…
Jerome – wow, you’re using ALL of the features of Microsoft SQL Server? That’s incredible! Most folks don’t even use half of the database server’s functionality, but you’re using column store indexes, compression, partitioning, OLAP, tabular SSAS, SSRS, and SSIS all on the same server? High score for you! You’re definitely the right shop to stick with SQL Server. You shouldn’t even consider switching.
Most of the time, yes. I’m creating small and medium DW solutions.
It’s nonsense to install each component on a different server when you plan to have only one running at a time.
For example, the ETL runs at night, the data is processed during the night, and during the day the cubes and reports are used…
So having multiple servers with a lot of RAM and power while you use only a portion of each server is nonsense.
For sure, in a multi-server environment it’s a different story.
Here, having SQL Server cut into small pieces rather than the full package for each server would be far better!
We should be able to use the same license on each server where we install a different component – like one Enterprise license of SQL Server covering 4 servers: 1 SQL, 1 OLAP, 1 SSRS, 1 SSIS.
If you need another SSRS server, you’d need another license.
This approach would be fair.
Jerome – right, just to be clear, you can’t break out one SQL Server license and use the components on different servers. Check out the “Licensing SQL Server 2012 Components” section of the SQL Server Licensing Guide PDF:
It specifically states:
“SQL Server software includes a range of licensed server components, including the SQL Server Database Engine (DB), Master Data Services (MDS), Analysis Services (AS), Integration Services (IS), Reporting Services (RS), and Data Quality Services (DQS). In addition, a number of management components are provided, such as client applications and tools used for creating or working with analytical data. For more details on the software components specifically included with SQL Server 2012, visit: http://msdn.microsoft.com/en-us/library/ms144275.aspx The software components of SQL Server 2012 cannot be separated for use in more than one OSE. If these components are running on a server other than the main database server, then an additional license is required for each additional OSE in which they are installed. For example, if the SQL Server DB is deployed in one OSE and SQL Server RS is deployed in another, both OSEs must be fully licensed for SQL Server 2012 accordingly.”
Yup, you need a separate licence for every server that has a SQL Server component that is installed and configured. It doesn’t even have to be running, just installed and configured.
I have been working as a consultant for many small, medium, and very large clients.
I tend to agree with Jerome that many clients cannot afford licencing all servers, while they can afford setting up the hardware.
The idea of being able to install the SSRS, SSAS, SSIS, and MSSQL engines on different servers (like the big companies do) makes total sense.
Microsoft should think about allowing the engines to run on different machines under the same licence. This would make the licence cost extremely competitive.
With clients, I always argue that Microsoft is not very expensive if you look at all the features SQL Server embarks. However, I am soon limited in my argument when I want to advise running each engine on its own server…
If Microsoft reads your blog – and MS probably does – they should consider this idea.
I do not see why it would be a problem.
I am right there with you, brother! In order to make ends meet for our mid-sized operation, I had to ‘scale back’ from Enterprise to Standard. One thing we didn’t realize until after we committed to it and had our hardware in house… was that MS snuck in that 64GB limitation in a @#$% service pack (running SQL 2008)!! I now have servers with 3x the memory we need in them, tons of CPU, and they’re just constantly IO bound because we can’t cache enough data, can’t partition as well as we need, and there’s no chance of going back to Enterprise because when we made the decision to move to Standard we went with more CPUs. Enterprise pricing is totally out of reach, and Standard’s performance isn’t cutting it. I’m an MS SQL DBA/developer by trade, but I may be forced into other engines by this paradox.
Off topic: For Independent Software Vendor – the price for Enterprise E. is still 1/10?
No, not unless you’ve got some amazing photos of Steve Ballmer in a compromising position.
Ok thx. Some years ago (SQL 2008?) it worked 🙁 I think they want all users to get into the cloud …
My company is moving to MySQL for exactly this reason. It’s been a huge pain considering the laundry list of SQL functionality missing from MySQL (no FULL OUTER JOINS? no table value functions? seriously?).
But if it means they can afford to keep paying me, I guess I’ll jump through MySQL’s hoops.
Interestingly I think large enterprises are actually going to end up spending less after the pricing change. I know mine will. In large part because the pricing change is forcing us to finally consolidate. We will be going from several hundred different licences (some with only one application) down to 30-40 each with multiple applications and by making sure that everyone who touches a development box has a MSDN license (most of us had them already) we don’t have to pay an additional cost for the test/development boxes. So in the end Microsoft actually makes less money from us because of the increased price.
On the other side, the smaller shops are going to be squeezed out of using SQL Server until the eventual loss of revenue forces Microsoft to lower the prices. And in all probability it will be too late – the tide will have turned, and fewer and fewer people will use MS SQL Server regardless of the price.
Posts like this one and Steve Jones (http://voiceofthedba.wordpress.com/2013/08/07/the-standard-limitation/) make me think I should spend less of my time studying MS SQL Server and more time looking into other options.
You might want to consider MariaDB – Oracle/Sun might try to kill the commercial side of things for MySQL. https://mariadb.org/
Also look at PostgreSQL (currently on version 9.2) – it does have FULL JOIN syntax, window functions (OVER), point in time recovery, and so on (including things like exclusion constraints, preventing, say, overlapping room reservations at a check constraint level). Management isn’t as easy as SQL Server, but it’s a lot cheaper.
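For example, the FULL OUTER JOIN and window-function (OVER) support mentioned above look like this in PostgreSQL – table and column names here are made up purely for illustration:

```sql
-- Rows from both sides, matched where possible: the FULL OUTER JOIN
-- that MySQL lacks.
SELECT COALESCE(i.customer_id, p.customer_id) AS customer_id,
       i.amount AS invoiced,
       p.amount AS paid
FROM invoices i
FULL OUTER JOIN payments p ON p.customer_id = i.customer_id;

-- A window function: a running total without a self-join.
SELECT payment_date,
       amount,
       SUM(amount) OVER (ORDER BY payment_date) AS running_total
FROM payments;
```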
Do any of the other free/open source databases provide equivalent functionality to clustered column store indexes?
Dan – that’s a great question, and I don’t know the other databases well enough to answer it.
Citus does that for PostgreSQL:
I’ve just skimmed over it and don’t have experience with MS columnstore so can’t say anything about how it works.
Thanks for the information here. Could you tell me where could I find or what is the per user license cost for both enterprise and standard edition for SQL server 2014.
Abhinav – the pricing hasn’t been announced yet, but for information about per-user licensing, check out the SQL Server 2012 licensing guide: http://download.microsoft.com/download/7/3/C/73CAD4E0-D0B5-4BE5-AB49-D5B886A5AE00/SQL_Server_2012_Licensing_Reference_Guide.pdf
I am running SQL 2012 standard edition on a VM with 100 GB memory assigned. In task manager it is showing SQL Server as using almost 93 GB of memory. This makes me think that it’s not really limited to 64GB.
Ferde – would you mind emailing screenshots of the SQL Server “about” screen, the Task Manager, and SQL Server’s Perfmon counters for target and total memory?
I was under the impression from your article that this memory restriction was *beginning* in the 2014 edition, and that it wasn’t currently active in 2012 (which is what the comment from Ferde referenced).
Am I misunderstanding, or is this 64GB memory restriction already in place on SE (and how far back)?
Kritner – bad news. It’s been in place since SQL Server 2008 R2: https://www.brentozar.com/archive/2010/06/sql-server-r-standard-supports-less-memory/
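If you want to check where your own instance stands, here’s a quick sketch – the counter names come from sys.dm_os_performance_counters and should be present on 2008 R2 and later:

```sql
-- Which edition is this, and how much memory is SQL Server
-- targeting vs. actually using right now?
SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version;

SELECT counter_name,
       cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Target Server Memory (KB)',
                       'Total Server Memory (KB)');
```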
When Microsoft launched SQL Server V4.2 for OS/2, they underpriced it, and it was not valued by the customers, who treated it like Access but with no front-end bit. The customers also did not understand why they needed a DBA for such a cheap product.
Pricing is a difficult issue.
Try looking at the prices for an Oracle database license against SQL Server. Then you see what expensive is and anything extra is more money. If you want mirroring (Data Guard in Oracle) it is extra etc etc.
You can license a lot of SQL Server cores for a couple of Oracle cores.
The other factor is Microsoft do provide support for a decent length of time. Oracle will not look at supporting older versions very quickly so you have to upgrade at more cost.
John – Rolls Royces are more expensive than Cadillacs, and Cadillacs have a longer warranty. Does that mean Cadillac is the best buy out of all of the cars?
I know I am late to the game here, but I wanted to comment on Ferde’s post regarding SQL Server 2012 Standard Edition using 93GB of memory. We are running SQL Server 2012 Standard Edition, and the server’s physical memory footprint is 132GB. SQL Server’s max server memory was set to 120GB; however, when looking at the DBCC MEMORYSTATUS “Memory Manager | Target Committed” output, we saw 116GB. SQL Server would not consume any more. Now, if you looked closely at the DBCC MEMORYSTATUS “Buffer Pool | Database” section, you would see the 64GB limit by way of pages after a large workload had executed. As I was also confused, I opened a case with Microsoft; this case was quickly escalated to the memory manager team, as everyone was confused. I’d like to share Microsoft’s final write-up regarding what I was seeing; however, I am not sure it is ready for public viewing.
Brent, I can discuss this with you if you like then we can decide whether to make it public.
Alfredo – I would go ahead and discuss the public sharing of it with Microsoft, and then go ahead and share it on the web. This wouldn’t be something I could work with you on for free, unfortunately. (Only got so many hours in the day, and there’s so many interesting things in SQL Server!)
fixed email address
Hello Brent, Roger that.
Just got confirmation from Microsoft giving permission to freely publish their comments. Below is their explanation.
*************** From Microsoft *************
I also found a good reference in terms of changes in memory management for SQL Server 2012. This saved me a lot of time to try and draw the changes in memory manager for reference and I could get this email out within reasonable timeframe.
With SQL Server 2008 the standard edition memory usage was limited by the memory visible to the operating system. Circa SQL Server 2008 R2 this was changed for standard edition to be limited to 64 GB. The Buffer Pool as a pool of 8KB pages was also the source of all 8KB page allocations (Single page allocations or SPA) for memory clerks along with holding data and index pages.
The SKU limitation therefore worked as follows:-
database pages + index pages + sum(single page allocations for all memory clerk) <= min(max server memory setting, physical memory, 64 GB).
The architecture changed significantly with SQL Server 2012. Memory Manager now owns pool of pages and is responsible for all memory allocations (single or multi page). SKU limitation does not apply to Memory Manager’s pool of overall memory. Buffer pool allocations are now deemed to consist ONLY of database pages and index pages. This pool continues to be limited to 64GB. The remaining 8KB allocations (single page allocations) are owned by the Memory Manager itself and is no longer subject to this limitation. This allows the database cache(data and index pages) to take full advantage of the SKU allowed memory.
The SKU limitation now works as follows :-
database pages + index pages <= min(max server memory setting, physical memory, 64 GB).
Note that the max server memory setting and available physical memory will still limit the overall memory in use by the Memory manager. This includes the portion allocated to buffer pool by it.
Let’s review it with an example.
If a server has 64 GB or less memory then behavior of SQL Server 2008 R2 and SQL Server 2012 is identical.
If server has more than 64 GB memory, let’s say 1 TB memory, then beginning with SQL Server 2012 standard edition “Max server memory” value can be set to 950 GB and SQL Server memory manager will use entire 950 GB memory if needed. But the database cache, or total of database pages (index and data pages) will be limited to only 64 GB. Correspondingly in case of SQL Server 2008 R2 standard edition it would still be limited to 64 GB total memory usage for all single page allocation of memory clerks + index pages + data pages.
I hope this provides you with the information you were looking for. Also, as I mentioned, the official technical documentation (Books Online, MSDN, or KB) will take a while to produce due to review and publishing cycles. But until then, you can reference this email as official documentation.
Sr. Escalation Engineer – Microsoft SQL Server Support
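Based on the explanation above, on a 2012 Standard instance with lots of RAM you’d expect total server memory to climb well past 64GB while the database cache stays capped. A sketch for eyeballing that on a live instance (counter names may vary slightly by version):

```sql
-- On SQL Server 2012+ Standard Edition with plenty of RAM, total server
-- memory can exceed 64GB while the database cache (data + index pages)
-- stays at or below the 64GB cap described in the email above.
SELECT RTRIM(counter_name) AS counter_name,
       cntr_value / 1024   AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Database Cache Memory (KB)',
                       'Total Server Memory (KB)');
```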
So, I’m thinking of moving to mySQL. Barrier 1) is rewriting those stored procedures. Barriers 2..n) are that I don’t know anything about it.
Not to mention that in some overseas markets Microsoft charges much higher fees. In Australia SQL Server licensing is on average 45% higher than USA, last time I did a street price comparison.
This is why I love that my company is a Microsoft partner, we get free internal use licenses for enterprise edition because of our partner status. The caveat is that we always have to be on the latest version of SQL server so we have to keep upgrading, but as a relatively new DBA trying to learn as much as possible, it makes things really interesting 🙂
The memory limitation sounds strangely similar to what used to be in place with SQL 2000. As I recall, SQL 2000 Standard Edition was limited to 2GB of memory, and the Enterprise Edition allowed 16GB of memory.
The limits appeared to be removed upon the release of SQL 2005, and memory prices happened to drop to prices per gig that none of us had ever dreamed of. This allowed for large memory and large databases while maintaining very good performance at a fraction of the cost of the big enterprise servers and Enterprise Edition. This appears to have cut into the Microsoft business plan, and a correction for it was inserted in a service pack (rather sneaky if you ask me).
A question to be considered is… was this purely a financial/business adjustment or is someone trying to say if you are going to use large databases and memory you need to use features in the enterprise edition or both?
For a long time I was an avid fan of SQL Server, and now I’ve made the switch to MySQL (MariaDB) and MongoDB, and here is what I can say:
MySQL offers platform independence – I wouldn’t pay for OS licensing and then have to pay for SQL Server on top. I’d rather run MySQL on Unix and low-end hardware, and it still outperforms any SQL Server DB I used to (and still) administer.
High availability / disaster recovery? Why certainly, sir… I deployed a 6-node M/M MariaDB Galera cluster in less than 2 hours! Beat that, AlwaysOn! I tested the cluster to infinity… to give you an idea, this is what I did:
– I completely unplugged the servers, and at the same time I asked the developers to run tests to see how much data would be lost… the outcome – none! The cluster came back up after I turned the servers on, and the cluster control manager automatically brought the servers back online. While one node was up, the application was running inserts into the database and also adding and dropping test databases – the outcome? All the databases were synchronized and we had 100% data integrity… that’s only one test I did to abuse MariaDB – no problems, folks!
NoSQL? Why certainly – MariaDB has a storage engine for Cassandra – but we opted for MongoDB because it has nice adapters for Hadoop! HDInsight? No thank you – I’ll stay loyal to its roots!
So I am not downing SQL Server and its following – it’s not a bad product if you can afford it. I just wanted to share this with you guys to tell you that there is light at the end of the M$ tunnel… to be honest, it’s just a matter of skills outside of the Windows world and the big bucks. I believe that every IT department should invest in Linux/Unix training, and innovation will come.
As far as T-SQL goes…well PL/SQL does a much better job and I can do the same with PostgreSQL as I do with MariaDB…
BI? I wouldn’t run complex ETL with SSIS because:
– I need SAP adapters
– I need Oracle adapters
– I need binary adapters (where I use Splunk)
– I need platform independence
Analysis Services? Well, I don’t need that either… we do everything from our Hadoop clusters.
SSRS? Well, we have a guy who writes stuff in Java… so no thanks.
Rommel – great, glad you’ve found a solution that works for you. Thanks for the comment.
What a very pleasant post,
It was pragmatic as always.
I need a clue, yes, just a few words…
How can I achieve similar results in performance/disk space without using table and index partitioning and data compression in SQL Server Standard?
Thanks a lot, congrats on the post.
before I forget:
Here in Brazil, Enterprise costs will kill any chance of making companies take a serious look at database importance and needs.
Today it's like this:
– Need performance? -> God damn, buy a new server!
– Need disk space? -> God damn, buy a new server!
– Need it all again? -> Buy the best server on the market!
I will wait for your clue! Thanks.
JPE – to get similar results in performance and disk space, we would need to know much more about the bottlenecks you’re facing in your environment. Since we can’t do that kind of one-on-one advice in a blog post comment, your best bet is to check out our free scripts like sp_Blitz® and sp_AskBrent®. Hope that helps!
Most smaller businesses that might consider Standard couldn’t dream of having >64bg of memory memory in their server.
Enterprises that burn through $100m+ per year shouldn’t have any issues with enterprise pricing.
What do you mean “couldn’t dream of having >64bg (sic) of memory memory (sic)”? If they’re buying SQL Server Standard Edition, the smallest licensing price they can get started with is around $8k, and we’re talking about $1k worth of memory. What, they don’t blink at spending $8k of licensing, but $2k of memory is dream territory? C’mon, man. Reset your expectations.
Don’t forget that Standard Edition is available in the Server/CAL licensing model, which can throw off this hardware-vs-licensing cost comparison for smaller firms. For example, you may spend £4k for CALs, but then each Standard server licence may cost under £700. At that price, an extra £500 on RAM is significant.
Incidentally, as a company with 2.5k+ devices, we’re moving away from server/CAL licensing due to consolidation efforts and of course because MS foisted core-only Enterprise upon us…
Willowisp – that’s a great point, but as you’ve noted, it’s really hard to find anybody still on server/CAL licensing.
For SQL Server consultants, it is getting harder and harder to find interesting projects on SQL Server as licensing costs become a huge burden.
I hope MS can do something about it in the future. The BizSpark program is one thing, but this is rather annoying to build around. I know – I did it for a client.
I definitely think MS would not lose much licence revenue if they allowed splitting the engines onto different machines. There are tons of companies that need OLTP, OLAP, reports, and ETL, and they would be happy with Microsoft if they could run those on separate machines.
On another note, simple mirroring or log shipping to a secondary node is possible without paying a licence on the slave node, as long as you only use the secondary node for emergencies.
So if Microsoft authorized that, why in the world wouldn't they authorize splitting the engines? They know that the hardware configuration would need to be different for the different engines!
They could even consider winning back market share from Informatica and Business Objects, who both consider independent machines essential to the performance of their engines…
C’mon Microsoft – if you do that, as a consultant, I would eventually sell more licences to my clients!
As Brent says, I have to give alternatives to my clients, and some have already migrated to MySQL/MongoDB and their own ETL alternatives (their own proprietary code, in .NET, which is pretty funny when you think about it).
Today I checked the site mentioned above ( http://msdn.microsoft.com/en-us/library/cc645993%28v=sql.120%29.aspx#CrossBoxScale ):
Maximum memory utilized (per instance of SQL Server Database Engine) for Standard edition
is set to 128GB
So maybe someone at M$ got notified about this licensing issue.
I have said it before on this site (I don't remember when), but you can get all of the features you need for exactly $0 using the extremely powerful PostgreSQL. I know most of you will say that “there is no father to the product like SQL Server has”… but there is! First of all, there are many IT consultants who provide support for PostgreSQL (such as myself).
PostgreSQL has no limitations, and it is an enterprise-class RDBMS. It provides OOTB clustering support (via Slony-I) and pgAgent, which is something like SQL Server Agent (a job scheduler). Plus, it is much easier having CLI commands run via crontab in Linux, which is more secure than having the xp_cmdshell feature opened up. I provide technical support and development for PostgreSQL worldwide. Contact me if you need any other assistance.
Doron – no limitations whatsoever, eh? So have they completely solved the lack of parallelism in queries yet, or is each query still single threaded?
It's a pleasure to meet you. PostgreSQL supports parallel query execution now. You can read more about it here: https://wiki.postgresql.org/wiki/Parallel_Query_Execution
Regardless of this issue, there will always be differences between SQL Server and PostgreSQL. The “price” for each customer is calculated separately. Some customers would be willing to waive some features of SQL Server to save $7K per core. Come to think of it, if I have a server with 2x quad-core Xeons, I would need to pay 7,000 x 8 = $56,000 (not mentioning the price for Windows Server, which is priced by the core as well). Plus, this calc assumes you only need one server… and who works with one server nowadays?
Doron – read that wiki link carefully, and you’ll notice that Postgres still doesn’t automatically multi-thread queries. SQL Server does. Nice try though.
You are absolutely right! PostgreSQL GIVES you the opportunity to make your choice. In other words, you are NOT locked into a pre-determined product. If you want to enable multi-threaded queries, you can do so if you wish. Again: if YOU wish.
Nice try? Try what, exactly? I'm not trying to prove anyone wrong… I'm simply telling the story as it is. Most of the enterprise DBs are PostgreSQL/Oracle/Teradata on Linux. I'm simply trying to draw out how to keep the same functionality but save more than $70,000. An organization would purchase support anyway, so when it comes to support it really doesn't matter if you choose SQL Server or PostgreSQL.
When I reviewed the pricing for Windows 2012 R2, it seems that Datacenter costs $6,155 per core. If you calc it up, you'll have to pay $56K for SQL Server and an additional $49K for Windows. Why pay $105K for what you can get for free? $105K!!!! Sorry… but that had to be said.
Doron – again, you’re simply not telling the truth. There is no “enable multi-threaded queries” in PostgreSQL – you have to break your own query up into multiple queries and then combine the results in the app side.
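To make that concrete, the app-side split-and-combine pattern looks roughly like this. This is only a sketch: `run_chunk` stands in for a real per-connection range query (something like `SELECT SUM(amount) FROM orders WHERE id BETWEEN lo AND hi`), and the in-memory dict stands in for the table.

```python
# App-side parallelism sketch: the application, not the engine, splits one
# logical aggregate into per-range queries, runs them concurrently, and
# combines the partial results. ROWS is a stand-in for a real table.
from concurrent.futures import ThreadPoolExecutor

ROWS = {i: i % 7 for i in range(1, 10_001)}  # pretend table: id -> amount

def run_chunk(bounds):
    lo, hi = bounds
    # Real code would open its own connection here and run one range query,
    # e.g. SELECT SUM(amount) FROM orders WHERE id BETWEEN lo AND hi.
    return sum(amount for rid, amount in ROWS.items() if lo <= rid <= hi)

def parallel_sum(workers=4):
    step = 10_000 // workers              # workers must divide the key range
    chunks = [(i * step + 1, (i + 1) * step) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(run_chunk, chunks)
    return sum(partials)                  # the app-side combine step
```

Each worker would normally hold its own database connection; the final `sum(partials)` is exactly the work a parallel query plan would otherwise do inside the engine.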
You said “PostgreSQL has no limitations” and I keep calling you on that. You keep trying to change the subject. If you can’t admit that a piece of software has limitations – and every one does – then you’re not a good architect. You’re just a biased fanboy.
No arguments here anymore. Perhaps you should do some reading.
I understand your point, because I used to be an MS junkie until I realized that MS didn't invent anything and how blind I was.
I think the MS worshipping will gradually go away as MS loses its business day after day. That's not because of the former CEO who got kicked out…
That's because the world realizes that MS has nothing to offer anymore. Except for desktop Windows, maybe… a CPA shouldn't have to worry about hashes and errors when he wants to use Excel 🙂
If I could ask you for one thing, it would be this: print this post and open it up again 5 years from now. I am sure you will be amazed by your own answers.
“No arguments here anymore” – dude, you still haven’t put up an argument yet. You still haven’t shown how PostgreSQL automatically parallelizes end user queries. I welcome your link to supporting documentation, or your admission that maybe – just maybe – PostgreSQL has a limitation.
Doron, you are, in fact, an idiot.
Honestly, save your keypresses! You’ve probably got better things to do. I’m sure the majority of the enlightened readership here would be hugely suspicious of unfounded claims of software perfection already…
Greg – sadly, I had a few free minutes while I was waiting for one of my class videos to encode, hahaha. I figured I’d throw him a bone.
Wow Brent. You are coming off as an ass today. Who is the greater fool? The fool or the guy who argues with a fool?
Elkay – you’ve caught me in a trap. If I respond to you, I’m a guy arguing with a fool, therefore I’m a fool. CURSES! You have bested me.
I've got a better option for you, buddy: read the following PDF. Please write back what you think then; I will be happy to hear your thoughts.
Doron – a presentation isn’t proof. I’ve got links too:
Doron – perhaps while you’re at PGCon next month, you can attend this session by Robert Haas, the chief architect for EnterpriseDB and a PostgreSQL contributor, about how the current release (9.4) is only laying the groundwork for parallelism to be implemented in the future:
Funny how Doron packed up and left. Guess he needed a SQL Server person to explain to him how PostgreSQL works. I’m sure he’s a great PostgreSQL consultant though.
Not quite packed up and left… it's the time difference.
But, and this one's for you, Brent: there's a time in life when you must acknowledge a mistake in order to learn from it. Unfortunately, I believed that PostgreSQL could parallelize queries. After reading much about it (and trust me… it hasn't left my mind), I found out that it is planned, but it's at an early stage of development.
So, as should be done professionally, I would like to take some of my statements back and thank you for the insight. I was wrong and you were right! One can always learn from mistakes as long as he understands them.
What I still stand by is the pricing of MS vs. PostgreSQL. It's a shame, in my humble opinion, that MS has priced it so high.
Great, glad I could help you learn PostgreSQL.
About the pricing – let me answer that with a few questions.
Are you wearing the cheapest clothes possible right now?
Are you eating the cheapest food possible?
And is there any database that is cheaper than PostgreSQL?
Well, despite its price, it's still got features MSSQL is sorely lacking, e.g.:
Range types, which would be awesome to have (we've got a ton of date-range constraints and join conditions where they would help a lot).
Deferred constraint triggers (used e.g. to check that accounting balances, or circular constraints: there must exist an invoice head if there exist invoice details, and there must exist one or more details if there exists an invoice head).
Built-in regular expressions where you can use capture parentheses to filter out the data you are interested in (I know you can use some C# to get regexes in MSSQL, but do they work with captures as easily?).
No real string aggregate (OK, I know the workarounds, but for the price you're paying, why haven't they implemented that?).
Arrays are nice in some circumstances – why doesn't MSSQL have them?
Updatable CTEs (one CTE with several inserts/updates in it) would be nice to have, but as far as I know that's not possible (why, with that price tag?).
Synchronous/asynchronous replication choice per transaction is also nice (some data is more important than other data).
Serializable snapshot isolation (SSI), which gives the “readers don't block writers and writers don't block readers, while still being serializable” property, would also be nice to have, but as far as I know it doesn't exist in MSSQL.
Indexing of non-anchored LIKE predicates ('%foobar%') and indexing of regular expressions would also be nice to have (we use a lot of typeahead search in combo boxes and the like, and that would for sure have helped a lot), but hey, it's MSSQL, so put on the blindfolds and shell out the money.
Anyway, at least they got sequences now (in MSSQL 2012), a decade or more after other databases had them, so that must have been really hard for them to implement – my congrats on that achievement.
Dan – sounds like you’ve got some really challenging needs there. I certainly wouldn’t want to be involved with a database project that needed things like serializable snapshot and regexes, but hey, that’s your call. Which database server did you end up going with on that project?
The main feature I'm missing down here in Standard Edition land is Availability Groups. The increase in the RAM limit is a nice bone to throw, but not being able to do a simple “1 main server + 1 report server” setup via AlwaysOn Availability Groups is seriously lame; it's a low blow.
Oh, sure, I can do log shipping just fine; it’s all figured out. But standard client software doesn’t deal well with the times when the report server is restoring the t-logs. Considering Enterprise Edition is 4x the cost of Standard, it’s a no-brainer to code a workaround into our database layer (and write a Data Processing Extension for SSRS to do the same) that just re-tries for a while.
I’d love to upgrade. But at the current licensing costs, we’ll be implementing workarounds first, and then I’ll be stuck researching how we can switch away from Sql Server completely.
We decided to save money and go with the standard version. When users complain about performance, we just give them beer.
Kurt – I bet you’ve got the happiest users around, too!
I’m switching to Oracle where the prices are cheaper!
We are going Standard also… which is no fun, but hey…
For SQL Server, an enterprise license helps bring it down a lot…
MySQL is cheaper enterprise-wise, but it's not free, that's for sure. Enterprise is $5k a pop for “commercial” usage, and $10–20k for the clustered version.
MariaDB sounds like something to look at for free… I'm sure it's working on stuff.
BizSpark is a pretty good hookup for working on your company; start another after 3 years…
I also agree MS needs to hook us up and stop playing the “we will stay cheaper than Oracle!” game.
Seriously, cut the price in half, sell double the amount, you idiots, and make more on support – people are moving to MySQL/Maria/Mongo…
ROFL. And I thought today was going to be a bit boring. Yet another reason to read the continuing saga of Brent and the Ozar Mountain boys! (ok, no offense, it just sounded good)
On a more serious note, yeah, I'm not happy with the latest and less-greatest SQL Server features and costs. Most people are not. And it does seem like progress has slooooowed down.
But is it me, or does it seem that Oracle/PostgreSQL/MySQL fans always come across a bit insecure? Hey, we know that SQL Server has limitations, but Oracle/PostgreSQL/MySQL fans seem to refuse to admit any weaknesses.
We built the audit trail for our financial model entirely in PostgreSQL 9.3. So I guess what I’m saying is, you’re welcome Brent.
Steve – thanks, sir!
You are bang on target, Brent. Shazam!
SQL Server 2014 Enterprise/Standard licensing is just crazy. I am a 10-year SQL Server veteran, and I think it's time to move on to a platform that is simpler to acquire and to scale up / scale out. My team is moving off the SQL Server platform. I know there are MS trolls who must be very upset with this post. Hey, it is WYGIWYG.
Brent, fully agree with you!
I do DWH work mainly with SQL server but also with PostgreSQL (9.3).
Both work nicely, but SQL Server is much faster than PostgreSQL for *DWH LOADS*.
This is mostly due to intra-query parallelism, which is nonexistent in PostgreSQL.
I've done an ETL test based on the TPC-H data model on 2 identical VMs, one running Win2012R2 + SQL 2012 EE and the other CentOS 7 + PostgreSQL 9.3.
The VMs had 8 cores and 32G RAM.
The ETL was implemented in SQL (TSQL/pgSQL).
SQL Server was about 5–6 times faster than PostgreSQL.
The bottleneck on PostgreSQL was the CPU: one core running at 100% and all the others mostly idling.
Running the ETL in parallel (~5 processes), the performance gap decreased to about 3–4x.
I would really like to do more PostgreSQL projects but the performance gap is too big at the moment.
Maybe when PostgreSQL v10 is released it will include intra-query parallelism and a nicer, more natural way to work with table partitions.
Cost is really a big factor in today's age of open source – that's what I have witnessed at my company.
We had a gaming website running entirely on SQL Server. Performance degraded at peak times; after doing some analysis I found PAGELATCH_EX as the biggest wait, and it surely had to do with the last-page insert problem. My biggest tables were 100, 200, and 250 GB, with no partitioning and a clustered primary key on an identity column.
Before I could run a few tests using GUIDs for evenly distributed writes, I heard that we were moving to MongoDB. We did eventually; we now send data to SQL Server through RapidMQ for reporting.
I would have tested a few things on SQL 2014, as it promises to be lock- and latch-free, but I am sure its cost would have killed the recommendation the moment I made it.
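The last-page insert hotspot described above can be illustrated with a toy model (an illustration of the access pattern, not of SQL Server internals): monotonically increasing identity keys always target the current rightmost index page, while randomly distributed GUID-style keys scatter inserts across many pages.

```python
# Toy model of clustered-index insert contention: "pages" are buckets of 100
# key slots, and we count inserts that land on the rightmost page seen so far
# (the page where PAGELATCH_EX waits pile up under concurrent inserts).
import random

PAGE_SIZE = 100

def hot_page_inserts(keys):
    """Count inserts that land on the current rightmost (hot) page."""
    max_page = -1
    hot = 0
    for k in keys:
        page = k // PAGE_SIZE
        if page >= max_page:          # insert lands on the rightmost page
            max_page = page
            hot += 1
    return hot

random.seed(42)
sequential = list(range(10_000))                   # identity-style keys
scattered = random.sample(range(10_000), 10_000)   # GUID-style random keys

# Every identity insert hits the rightmost page; random keys rarely do.
```

In a real workload every one of those rightmost-page inserts competes for the same page latch, which is why evenly distributed keys (GUIDs, or hash-partitioned identities) relieve the hotspot, at the cost of fragmentation.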
Hey Brent, I spoke w/ Microsoft this morning. In the next version of SQL Server (post-2014), they are including a feature called “AlwaysOn Light” which is included in Standard Edition. It still requires clustering, but will offer synchronous and asynch modes between ONLY two nodes.
Looks like the folks in Redmond are listening…
Mihir – you may want to clarify exactly who at Microsoft you talked to, and whether or not that was covered under NDA.
I work for one of the big financial firms in North America, one in the top-10 list. This new Microsoft licensing has caused quite a bit of chaos in my organization, where we are actively migrating apps from SQL Server to other DBMS platforms. There have been SQL DBA casualties, and I now hear they have started looking for alternatives to Microsoft Office, Windows 7, and Exchange as well. I wish more and more communities would leave Microsoft products.
I am looking at MariaDB and researching OrientDB for a graph db…
Microsoft needs to seriously address its pricing/feature mix. Sure, Oracle still has the legacy installs and can way overcharge its customers, but even those buyers are starting to wake up to reality and realize they are getting ripped off. When those customers look for alternatives, Microsoft needs to keep its pricing structure compelling to them. It also needs to realize it is no longer competing with Oracle; it is competing with MySQL/NoSQL/MariaDB options.
Startups look at MS SQL the same way they look at Oracle when tying their future to a technology (Azure is another aspect). They add up the price/features… and they say no…
I hope Microsoft reacts appropriately and addresses its future.
It's starting to affect jobs more and more. I see fewer and fewer Oracle jobs, SQL Server holding level, and NoSQL/MySQL (MariaDB) increasing…
We’re a relatively small shop, very budget conscious and building a new server. Our largest db is 240GB (and growing slowly), a few others over 100GB and another 20 of various sizes.
Disks will be SSDs. RAM will be PC4-17000, with the bare metal having 480GB+, and the SQL VM can have whatever it needs of that.
Pricing is roughly the same for: 4cores Enterprise vs 12cores Standard
My original plan was to go with 8 cores of Standard (putting the savings toward hardware), but the RAM cap is concerning.
I know the answer will be “it depends on workload,” but do 4 cores of Enterprise seem like enough? I'm thinking that going Enterprise lets me throw 200–300GB of RAM at the problem and hopefully keep most of the small DBs in memory, but I don't know if that's what will actually happen in the real world, or if we'll be constantly hampered by only 4 vCPUs.
Bart – this is a really good, complex question, and it's exactly the kind of question we answer in our consulting services. Unfortunately it's too big of a question for a blog post comment, though.
I know this post is old, but I would like to share my opinion about it. Clearly, Microsoft is in an abusive position in the market; you just have to look at the Enterprise prices (per core!!!). I work at an SMB as a system administrator (network, DBA, virtualization… the complete pack 🙂 and I have a more generalized view than a pure DBA on some points:
– We depend on the software. Absolutely. For example, if our POS computers use an application that only works with SQL Server, it is mandatory to use it. Or SAP: it supports only a few database engines, and SQL Server is not the most expensive of them. I know, we could change the software vendor, but…
– There are also the backups (creeeepy topic). I work in a heterogeneous environment, and I try to simplify all the processes using as few tools as possible. I know that SQL Server will be supported by any backup tool.
– In an enterprise environment, most user workstations are Windows computers. They come with native tools (.NET Framework, SQL client), so I know that deployment will be easy.
And there are other topics like virtualization, maintenance plans, etc… I'm not trying to defend Microsoft (in fact, I was looking for other options when I found this post), but I have to admit that SQL Server is very comfortable to deploy, manage, and develop against. It has a very high cost, but if we compare it with other products, perhaps the change is still not worth it.
Anyway, I think that the best option is precisely this post. Microsoft must know that we are aware of these changes and that we are willing to complain if they abuse them too much.
That’s all, sorry if my English grammar was not the best!
P.S. When will there be a training in Barcelona, Spain? 🙂
Victor – you just missed me! I was in Portugal last year.
You could always buy Oracle?
Let me just clear up one thing about your discussion here; it's interesting to see the arguments of those comparing PostgreSQL and SQL Server.
1) The load: did anyone benchmark the CPU load of a query against tables with millions of records? I did; be assured the advantage is not SQL Server's (comparable hardware, but Windows for SQL Server and Linux for PostgreSQL).
2) Can you avoid OS-level I/O for a database like SQL Server? (I mean reduce OS I/O requests the way ‘raw devices’ did?)
3) What languages can you link into your SQL Server? I adapt all my Python routines as functions in the DB, and all data is processed within SQL functions. That means I can emulate spreadsheet behavior without loading Excel or the like: we just launch the routine with all its parameters, and the data comes out as a spreadsheet would have produced it, which is very efficient for calculating webpage data.
4) About the “expensive food” argument: I guess that reasoning is valid for a one-shot server installation, but if upgrading the SQL Server version also means upgrading the hardware, doesn't that increase the per-CPU price? The day your boss says “reduce the cost of IT,” will you tell him “sir, the expensive food is needed for my database”?
5) Who has tried to deal with datestyle in SQL Server? Who can manage dates across different cultures without a CONVERT routine? Here we have many different databases in use (because each installed program has its own idea and “better argument” for imposing its own server). What I know is: if I need to push data from one server to another, I route it through a PostgreSQL database.
We deal with millions of lines daily… without failures, without reindexing anything, and without parallel queries; response time is MORE THAN FAST (our servers have been up for months).
So yes, first of all, anybody has the freedom to choose the appropriate database, but a TRUE benchmark has to be done for it… (by true I mean done conscientiously by you, not read from some company's brochure).
Second, installing a database is not just “click next” and “done”; it needs OS skills to deal with processes/CPU/IO/memory… tablespaces and the filesystem itself. With a well-tuned OS as well as a well-tuned database, you will win in performance of every kind… whatever the DB is.
Rico – thanks for stopping by. You’ve got a lot of interesting thoughts in there, and I’ll try to boil it down:
It sounds like you’ve done a lot of work in PostgreSQL, and it works well for you.
That’s great to hear. Think about starting to blog – your experience is valuable, and I’m sure lots of other folks would like to read much more details. A blog post comment without numbers doesn’t really do it justice. Let us know when you get it started!
I think Microsoft is finally giving more importance to Standard Edition, since it is the bulk of what Microsoft sells anyway. Enterprise adoption is limited, and the introduction of SQL Server AlwaysOn Basic Availability Groups is a welcome change. In the long run, though, Microsoft wants everyone to put their data tier in the cloud and leave the database management to MS. Eventually, there will be no difference between the features of Azure SQL and SQL Server on-premises. And it makes sense from the cost perspective too.