Initial Thoughts on 2016 SP1

A really odd choice

I’ve talked a bit about my annoyance with hardware limits for Standard Edition here and here.

If you’re a bunch of SQL-savvy developers who understand the Enterprise Edition features that are now available in 2016 SP1, great. You’re in a very small minority of shops; I’m talking about places like StackOverflow and kCura.

If you’re not, prepare to join the breadline of dba.se users asking why Partitioning didn’t speed up their query. That’s most shops: they’re just happy to get the right results back, and they mostly interface with SQL via EF or another ORM. Not your target audience for Change Data Capture and Hekaton.

Why these features?

It kind of feels like you’re getting leftovers. There’s a meatloaf edge, congealed creamed spinach, mashed potatoes that have become one with the Styrofoam, and fossilized green beans. You don’t get the Full Monty: AGs, TDE, online index operations, etc.

Don’t get me wrong, ColumnStore and Hekaton are neat, but you’re capped at 32GB of RAM each for them. If you’re at a point with your data warehouse or OLTP app where, respectively, those technologies would make sense, you’re not dealing with 32GB of problem data. You’re dealing with hundreds of gigs or more of it.

What still doesn’t quite make sense to me is why Microsoft would open up all these edge case tools that they spent a lot of time and money developing and implementing, rather than saying “here’s all the hardware you can throw at a problem, but if you want the Enterprise features, you gotta pay”.

Enterprise to me means HA/DR (AGs), it means data management tools (Partitioning), it means crazy fast data ingestion (Hekaton), it means security and auditing (all that other stuff).

Facing some tough facts

The number of people who need all the functionality Microsoft opened up (Multiple Filestream Containers?!) pales in comparison to the number of people on Standard Edition who have more than 64/128GB of data. And I know something about pale. I used to date a lot of girls who had Bauhaus posters in their bedrooms. It’s just not a very compelling argument to me. People need more memory, and hardware gets cheaper and more powerful every year. They don’t need more features to shoot themselves in the foot with.

If they’re not stepping up to pay $5k more per core going to Enterprise from Standard now, this stuff isn’t going to open any wallets. The amount of money it costs to put an extra thousand dollars of RAM in your server is absurd when you add on the Enterprise Licensing costs.

My proposition would be to raise the base RAM cap on Standard Edition for 2016 to 256GB, and either remove the cap entirely or double it to 512GB for shops that have Software Assurance.

Thanks for reading!

Brent says – Microsoft says they’re trying to make life easier for ISVs to have a single code base and use the same programmability features across the board. If that were true, they would also deploy these same capabilities across the other still-supported versions of SQL Server (2014, 2012, etc.), because after all, ISVs have to deal with clients on those older versions too. But they didn’t – and that’s your first clue that this isn’t really about making life easier for ISVs. It’s about increasing the adoption rate of SQL Server 2016. (Which is a perfectly cool goal, and I salute that – but that’s not the story you’re being told.)


43 Comments

  • Well, columnstore indexes will still be useful for smaller data warehouses.
    Just a couple of million rows in a fact table, for example. A report that runs in 1 second versus 4 seconds is a massive improvement.

    Compression is also a feature that will be very useful in Standard Edition.
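
    As a rough sketch (the table and index names here are hypothetical), a small fact table can pick up batch-mode scans with a single nonclustered columnstore index, no Enterprise Edition required:

    -- Nonclustered columnstore index on a hypothetical small fact table; reporting
    -- queries can scan it in batch mode while loads keep using the rowstore.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSales_Reporting
        ON dbo.FactSales (DateKey, ProductKey, Quantity, SalesAmount);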

    • Sure, up to a point. Do you find many people rolling out DWs on Standard Edition?

      • Yes, smaller clients of ours. Typically clients who are just starting with BI.

      • Of course. And I don’t know how many DWs you will find that are > 128 GB but will stay < 256 GB, so doubling the limit – while nice – isn't going to magically open up the market either. I do know that with the ability to now use ColumnStore or just normal compression, they have a much better chance of fitting their data into the memory limit.

      • Alexander Holy
        November 23, 2016 5:00 am

        We do. We run both Standard and Enterprise editions to keep costs from going insane.
        We love compression in SP1. We would use Hekaton with in-memory columnstore, but you do not get very far with only 32GB of RAM. Worse, those 32GB are subtracted from your 128GB overall memory.
        I am all for healthy cost/performance distributions. Given today’s RAM prices, I say 512GB would be fair.
        And remember, the lack of online index maintenance (ONLINE = ON) cripples Standard Edition anyway…
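
        For anyone who hasn’t tried the Hekaton side of that, here’s a minimal sketch (the database, filegroup, path, and table names are all hypothetical; in Standard Edition the memory-optimized data has to fit within that per-database quota):

        -- Memory-optimized tables need a MEMORY_OPTIMIZED_DATA filegroup first.
        ALTER DATABASE Sales ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE Sales ADD FILE (NAME = 'imoltp_file', FILENAME = 'C:\Data\imoltp_file')
            TO FILEGROUP imoltp_fg;

        -- A simple memory-optimized table; its data lives entirely in RAM.
        CREATE TABLE dbo.SessionState
        (
            SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Payload   VARBINARY(8000) NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);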

    • Koen – just so we’re clear, you have clients whose reports take 4 seconds, and they’re willing to pay for consultant time to build columnstore indexes in 2016 SP1 in order to get it down to 1 second?

      That’s…interesting.

      • I seriously hope you’re not suggesting that everyone who wants to try a ColumnStore index to see if it helps their workload needs a consultant to tell them it’s ok? Not all Standard Edition customers are using Standard Edition because they’re incompetent.

      • When you say it like that, of course it sounds weird 🙂
        It’s more like this: we build a data warehouse for the client and we help them with their reporting solution. There’s a quite heavy report and it runs in about 4 seconds. The data warehouse is properly tuned, but the server is limited in its capacity and we’re working with SQL Server Standard Edition. Let’s also assume that the query to the data warehouse can’t be fully tuned, because the client is using a framework in between that generates queries (for example, Cognos Framework or SAP BO Universe). In such a case, slapping a clustered columnstore index on the table can drastically improve performance.
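
        A minimal sketch of that last step (the table name is hypothetical):

        -- Convert the fact table's storage to columnstore in one statement; this assumes a heap.
        -- If the table already has a clustered rowstore index, recreate that index as a
        -- clustered columnstore index with DROP_EXISTING = ON instead.
        CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
            ON dbo.FactSales;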

        So, to answer your question:
        * we are probably already there doing consulting. Clients are probably not going to hire us just to make some report faster. That would be expensive. (Although it has happened before, and I used BlitzIndex. 😉)
        * the clients sometimes have limited experience/knowledge of the SQL Server product. It’s possible they don’t even know you can use CCI on Standard Edition since SQL 2016 SP1.

        Keep in mind that I am talking about smaller clients, who typically use Standard Edition. We are now going to be able to build a better performing data warehouse, at no extra cost.

        • Koen – we may work in different circles, but I don’t think I’ve ever called a 4-second data warehouse report “quite heavy.”

          • It’s just an example. In a POC for a client I improved one report query from 25 seconds to just 1 second by adding an empty filtered non-clustered columnstore index (so it basically does nothing) to a table. The query just went into batch mode and was a lot faster. (it’s a little trick from Itzik)

            My point is simple: with the new features for Standard Edition, you can make data warehouses a lot faster, without the need for upgrading to Enterprise Edition. Smaller clients can now also benefit from all those performance enhancements.
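
            For the curious, here’s a sketch of that trick (the object names are hypothetical): the filter is deliberately contradictory, so the index stores no rows, but its presence lets the optimizer consider batch mode for queries against the table.

            CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_batch_mode_trick
                ON dbo.FactSales (SalesAmount)
                WHERE SalesAmount = -1 AND SalesAmount = -2;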

          • Right, I get the super-edge-case scenarios that involve serious expertise (Itzik’s batch mode trick is a great example) but in most cases, in Standard Edition machines I see out in the field, I’m bummed that we’re going to start seeing people implement partitioning to make their queries go faster – when in reality they’re seeing PAGEIOLATCH waits out the wazoo due to 16GB of RAM in their VM, and 2TB of data. (Not exaggerating, saw that on a call today.)

          • Yeah, let’s just hope they try to improve performance by adding columnstore indexes instead 🙂

      • Alexander Holy
        November 23, 2016 5:02 am

        Think real-time dashboards and it does not sound so insane anymore…

  • Aaron Bertrand
    November 21, 2016 8:33 am

    I certainly agree with Koen about compression. This feature is useless for Enterprise customers, who would rather spend money on more RAM than give up CPU to save a bit of disk space. But for Standard Edition customers, with limited budgets, who might not be able to completely overhaul their hardware for faster disks or more memory, this can be quite a useful way to get these features without jumping up in edition – especially if they had been biding their time for an upgrade anyway.
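
    As a quick sketch for anyone trying it (the object name is hypothetical), you can estimate the savings before committing to a rebuild:

    -- Estimate how much space PAGE compression would save on a hypothetical table...
    EXEC sys.sp_estimate_data_compression_savings
         @schema_name      = N'dbo',
         @object_name      = N'OrderDetails',
         @index_id         = NULL,
         @partition_number = NULL,
         @data_compression = N'PAGE';

    -- ...then rebuild its indexes compressed if the numbers look worthwhile.
    ALTER INDEX ALL ON dbo.OrderDetails
        REBUILD WITH (DATA_COMPRESSION = PAGE);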

    You might be right about their goals but as an ISV we are certainly looking forward to being able to use some of these features for *all* of our customers, not just our Enterprise customers. Until our lowest supported version is 2016, we’d still need a forked codebase to pull that off, but it would still benefit a lot more customers in the meantime.

    • You might need a new data type to figure out when that will be 🙂

      • The other thing I’m hearing from several customers – and something I pushed for when the feature delineation was first announced – is that security features shouldn’t be an edition-based decision. A small business shouldn’t have to fork out for Enterprise Edition to use Always Encrypted on a single credit card or SSN column.
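
        For reference, a sketch of what that looks like on such a column (the table, column, and key names are hypothetical; the column encryption key has to exist already, with its column master key held outside the database):

        CREATE TABLE dbo.Customers
        (
            CustomerId INT IDENTITY PRIMARY KEY,
            SSN CHAR(11) COLLATE Latin1_General_BIN2
                ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Customers,
                                ENCRYPTION_TYPE = DETERMINISTIC,
                                ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
        );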

  • To your point, wouldn’t the amount of money it costs to put an extra thousand dollars of RAM in your server be . . . $1000?

    I appreciate you and your blogs, have a nice day 🙂

    • That all depends; many customers have hardware that they’re not willing to replace wholesale, but don’t have any more memory slots. Again, I’m not talking about customers who can afford Enterprise Edition, since they can obviously afford new hardware too. I’m talking about Standard Edition customers who probably can’t upgrade their hardware for various reasons. Putting these features into all editions really opens some doors, even if it doesn’t mean *all* the doors, and even if they’re not the doors Erik is interested in.

      • Aaron – wait, they’re running SQL Server 2016 on that old hardware? Doesn’t that seem a little odd? (Or at least, very niche?)

        • Hardware doesn’t have to be that old to be out of memory or without any supporting budget. As you and I discussed before, not every customer can just go upgrade their hardware every time you suggest that’s the only solution. Not every Standard Edition customer needs more than 128 GB of memory anyway. None of our production systems are that big, should we use Enterprise Edition?

          • Sorry, out of memory *slots*

          • Right, but find me a server that doesn’t have enough memory slots for 256GB where people are running SQL Server 2016. If you’re doing per-core-licensed software on a 5-year-old server, and you have so many performance problems that you’re willing to implement compression or partitioning, we need to talk about your PowerEdge 2950.

    • HAHA, yeah. I forgot a couple words in there. Thanks for catching that.

    • Andre – yes, or to put it another way, less than the 20 hours of development time it would take to plan, test, and implement a feature like partitioning correctly. (20 hours x $50/hour (being absurdly low-rate here) = $1,000, so RAM would be the cheaper fix there.)

  • We’ve been running 1-5TB data warehouses with 32/48/64GB of RAM for many applications over the last 10 years. The reason for the low amount of RAM is that we needed Enterprise features (data compression, partitioning, SSAS partitioning, etc.), and because AWS adds cores along with memory and SQL Server is priced per core, going from 64GB to 128GB would add 8 cores and going from 64GB to 256GB would add 24 cores. That works out to roughly +$3,000 and +$9,000 per month just for the SQL license, which is way out of our budget of around $5,000 total. Thus the new Standard Edition might work very well for us.

    • Psst – I don’t think you’ve been doing data warehouses on AWS EC2 for 10 years. When it was launched in 2006, you got a 1.7GHz core and 1.75GB of RAM, and as you can guess, Windows wasn’t running on it then.

      • Well, sure, you are right. At the beginning we did it on our own servers with the same amount of memory, because back then that was a lot of memory. But I think we did our first AWS data warehouses on something with 16GB of RAM, and we have been stuck with low RAM since then. And I think the first DWs were 0.5-1TB, so 64GB was quite OK for that. But later we had to run dozens of different DWs and buying/leasing hardware didn’t feel good. In hindsight it was most certainly a mistake to go this way, but we were so busy with software we didn’t have time to plan a better infrastructure.

  • OK, for those of us who just spent a while googling the max size of the columnstore object pool in Enterprise Edition to contrast it with the 32GB max you mentioned and came up empty: what’s the equivalent max in Enterprise Edition? Or is it as simple as the max memory setting for the server?
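
    One way to watch the pool on a live instance, as a sketch while that question gets a proper answer (CACHESTORE_COLUMNSTOREOBJECTPOOL is the memory clerk the columnstore object pool reports under):

    SELECT type, SUM(pages_kb) / 1024 AS size_mb
    FROM   sys.dm_os_memory_clerks
    WHERE  type = 'CACHESTORE_COLUMNSTOREOBJECTPOOL'
    GROUP BY type;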

  • 2016 SP1, interesting stuff… but Bauhaus! Yes! Sorry, couldn’t help that.

  • To understand the change I think you have to start with understanding where their competitive pressure comes from.

    The first market this move is targeted at is the Independent Software Vendor (ISV). Having written a software product that works with everything from Express Edition up to Enterprise, I know it’s a lot of extra work. They’ve just eliminated it for the typical ISV. It makes SQL Server more competitive.

    Then there’s the on-prem database market. One of SQL Server’s value propositions is that you get a lot in the package. You get a great database, but also SSIS, SSRS, SSAS, Master Data, Data Quality, etc. Except that if you really want to use some of the really nifty features, you have to pay extra for Enterprise.

    Microsoft is out there competing with an increasing number of database technologies. They’re responding to that pressure.

    • So the big pitch now is that you get severely hobbled versions of interesting features, a smattering of okay features, no access to the big features, and you get to put it on horribly limited hardware? Interesting market position.

      • The question becomes, how much do you reasonably expect them to give you for 25% of the cost?

        And before you answer, remember that these changes were made specifically for programming surface reasons, NOT performance reasons. If they were trying to lure Standard Edition customers for performance reasons, they probably would have just increased max memory and not bothered with any of these changes. Programming surface changes may not interest you all that much, but they certainly interest me (both as an ISV and a DBA), and they interest many of our customers, too.

        • Of course it’s interesting to you. You have an intimidating amount of knowledge about SQL Server. Shops like yours and the ones I mentioned are in a very small minority. Most people are going to fall all over themselves with these features, and potentially make things worse. I’d much rather MS give the majority of SQL Server users something that will actually help them (it ain’t 24 cores on Standard Edition, either).

          • You could say that about any feature no matter what edition it is limited to… unless you’re suggesting that only incompetent people buy Standard Edition, and Enterprise Edition is only used by people who won’t shoot themselves in the foot? 🙂

            I just don’t understand how any of this could be a bad thing; the only real complaint I see is that max server memory didn’t get an upgrade.

          • Hah. Well, I do know of at least one consultant who won’t work with anyone using Standard Edition.

            The trouble I see people have just downloading the right install media for Enterprise Edition suggests its use may not be a mark of wisdom.

            You’re right, though: my problem isn’t with any one of those features in particular; they just help an entirely different crowd than the one I talk to on a regular basis. I’d love to be able to tell my people that moving to 2016 SP1 would solve a huge problem for them.

  • Erik (and Brent),

    Your wording re: security features in this post and in your SP1 announcement post the other day has me curious. Brent’s wording was “[SP1 has] Some security features – Always Encrypted, row-level security, and dynamic data masking”, while it still lacked “Serious security features – TDE, auditing”. And Erik’s wording above was that we still don’t get the “Full Monty… TDE…”.

    I’m looking to understand the positive vibes about TDE, especially given the recent takedown at http://simonmcauliffe.com/technology/tde/ that I’m guessing you saw. It seems like AlwaysEncrypted is a better path for shops looking to encrypt at rest. Is it just too much re-work for most shops to seriously consider?

    • Aaron – yeah, it’s a spectacular amount of re-work. I’d answer it by asking, how fast could you rework your apps to work with AE? (Especially given the problems around linked servers, reporting, and replication.)

      • It’s something they can now think about doing, and sometimes this thinking is for new apps, not ones that are obviously already running on Enterprise Edition. Before this change, AE wasn’t even a possibility for anyone not running Enterprise, and there certainly are some appealing advantages to AE over TDE, even if development isn’t free.

  • Bauhaus… I still occasionally put on Bela Lugosi’s Dead… Think I’ll go for it right now.

  • Loving this discussion. Hang on, gotta make some more popcorn.

