Reading the SQL Server 2016 Data Sheet

SQL Server
42 Comments

The SQL Server 2016 PDF data sheet is out, giving us the first public look at the next round of features in Microsoft’s database product.

Quick disclaimer: I’m a Microsoft MVP and Certified Master, and we’ve got clients that work with Microsoft. As such, I’m bound by a few non-disclosure agreements. For this post, I’m going to take off my regular hat, and put on my reverse-engineering hat.

“In-memory enhancements: 30X in-memory OLTP + 100X in-memory ColumnStore = Real-time operational analytics” – SQL Server 2014 brought us each of these features individually. ColumnStore indexes really can get you 100X performance on your analytical warehouse-type queries, and Hekaton may or may not give you rocket speed transactional performance. Both of these storage types are very specialized – for serious performance reasons, you don’t want to run analytical queries against your Hekaton tables, and you don’t really want to do transactional delete/update/insert activity against your ColumnStore tables. The graphics in the brochure show one transaction going into both tables. While this might at a glance seem like you get the best of both worlds, keep in mind that both of these say “In-memory” – and you’ll probably need two worlds’ worth of memory to pull this off. Fortunately, servers like that are available today.
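To make the "one transaction, both worlds" picture concrete, here's a hedged sketch of what combining the two features might look like. This is my guess based on the separate 2014 syntax for Hekaton and columnstore, not anything from the data sheet – the table, names, and the idea that you could layer a columnstore index onto a memory-optimized table are all speculation:

```sql
-- Speculative: a memory-optimized (Hekaton) table for the OLTP side.
CREATE TABLE dbo.Orders (
    OrderId    INT IDENTITY PRIMARY KEY NONCLUSTERED,
    CustomerId INT NOT NULL,
    OrderTotal MONEY NOT NULL,
    OrderDate  DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Speculative: a columnstore index on that same table for the analytics
-- side. In 2014 these two features are mutually exclusive, so this is
-- exactly the part the data sheet implies 2016 will make possible.
ALTER TABLE dbo.Orders
    ADD INDEX ccsi_Orders CLUSTERED COLUMNSTORE;
```

If that's roughly how it works, both structures live in memory at once – which is where the "two worlds' worth of memory" bill comes due.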

“Always Encrypted technology helps protect your data at rest and in motion, on-premises and in the cloud, with master keys sitting with the application, without any application changes.” In the past, we’ve basically had two places we could encrypt or decrypt data:

  • In the database – in which case, the database administrator could see private data, thereby giving security professionals nightmares
  • In the application – which also gave the security pros nightmares, because our developers would be rolling their own encryption code

The key phrase here says “protect your data at rest and in-motion…without any application changes.” Microsoft’s implying app-level encryption, keeping those prying DBA admins out. However, if the data is encrypted before it hits the database, you need to consider the following scenarios:

  • Analytics queries – your analytics and reporting apps will have to be equipped to decrypt the data if you need to show it onscreen
  • Range queries – if you need to see all of the users whose last names are like Smi%, you need to decrypt them
  • Replication – if you want to copy the data from one SQL Server to another, it usually needs to be decrypted on the way out
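If the "without any application changes" claim holds, my guess is the encryption metadata lives on the column itself, and the client driver quietly encrypts and decrypts on the wire. Here's a purely speculative sketch – the key names, option names, and algorithm string are all made up for illustration:

```sql
-- Purely speculative: the column carries the encryption metadata, and
-- the client driver (not the engine, not the app code) handles the
-- encrypt/decrypt. MyCEK and the option names are invented.
CREATE TABLE dbo.Customers (
    CustomerId INT PRIMARY KEY,
    SSN CHAR(11)
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = MyCEK,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        )
);
```

Note that in a design like this, the engine only ever sees ciphertext – which is precisely why the analytics, range-query, and replication scenarios above get complicated.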

“High Availability…with the ability to have up to 3 synchronous replicas, DTC support, and round-robin load balancing of the secondaries.” No reading between the lines necessary here – these are hotly requested features for AlwaysOn Availability Groups.
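For the round-robin piece, I'd guess the syntax ends up being an extension of today's READ_ONLY_ROUTING_LIST – maybe a parenthesized group of secondaries that get load-balanced instead of today's strict priority order. Server names and the nested-parentheses idea here are my speculation:

```sql
-- Speculative sketch: the inner parentheses group SQL02 and SQL03 into
-- a round-robin pool for read-only connections, falling back to SQL01.
ALTER AVAILABILITY GROUP MyAG
MODIFY REPLICA ON 'SQL01' WITH (
    PRIMARY_ROLE (
        READ_ONLY_ROUTING_LIST = (('SQL02', 'SQL03'), 'SQL01')
    )
);
```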

“Manage document data with native JSON support.” While this has also been hotly requested, I’d caution you to be careful what you ask for. Users demanded XML support inside SQL Server, and then proceeded to use SQL Server as an XML query engine, sending CPU through the roof. SQL Server is one of the world’s most expensive application servers.
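If the XML support is any guide, I'd expect JSON querying to look something like the sketch below. The function names (OPENJSON, JSON_VALUE) are my guesses by analogy with OPENXML – the data sheet doesn't name any:

```sql
-- Speculative sketch: JSON stored in a plain NVARCHAR column,
-- queried with hypothetical built-in functions.
DECLARE @doc NVARCHAR(MAX) =
    N'{"name":"Smith","orders":[{"id":1},{"id":2}]}';

-- Pull a scalar out of the document:
SELECT JSON_VALUE(@doc, '$.name') AS CustomerName;

-- Shred an array into a rowset, OPENXML-style:
SELECT o.[id]
FROM OPENJSON(@doc, '$.orders')
     WITH ([id] INT) AS o;
```

And every one of those calls burns CPU on your per-core-licensed database server – which is the "expensive application server" point above.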

“Scale and Manage – Enhanced performance, scalability and usability across SQL Server Enterprise Information Management tools and Analysis Services.” Careful reading this one, and note which column it’s in – “Deeper Insights Across Data.” While you might read this as a better/faster/stronger SQL Server Management Studio, that’s not what Enterprise Information Management means.

“Powerful Insights on any device – Business insights through rich visualizations on mobile devices. Native apps for Windows, iOS and Android. New modern reports for all browsers.” Microsoft recently acquired DataZen, a reporting tool that focuses on mobile devices, and promptly gave it away free to most Enterprise Edition customers. The simplest way to accomplish this feature would be to throw DataZen’s existing code in free with the SQL Server boxed product. Like any acquisition, I wouldn’t expect this to become a seamless part of SQL Server for a year or two at least. (Think of Microsoft’s acquisition of DATAllegro back in 2008, which gradually transitioned into Parallel Data Warehouse, aka APS. Some will say, “Well, that was a tougher product to build,” but native apps for mobile BI ain’t an easy nut to crack either – as evidenced by Microsoft’s difficulty in getting to that sweet pistachio. Look, this analogy had to go off the rails at some point; we all saw that coming.)

“Advanced Analytics at massive scale – Built-in advanced analytics provide the scalability and performance benefits of running your “R” algorithms directly in SQL Server.” Just like you can run your C# code in the database, too. Or your XML parsing. Or your JSON. If you wanna do your processing in a $7k USD per core database server, Microsoft wants to empower you to do it. I would too, if I was wearing their exceedingly good-looking and expensive shoes. I like the way they think.

“Stretch Database technology keeps more of your customers’ historical data at your fingertips by transparently [sic] and stretching your warm and cold OLTP data to Microsoft Azure on-demand without application changes.” Database admins often come to me and say, “I’ve got a ton of data in a table, but most of it just isn’t queried that often. I need to archive it. Will table partitioning help?” Sadly, it usually doesn’t, because good table partitioning requires changes to your application. If you want great performance, you have to make sure your WHERE clause specifically excludes the archival partitions. Otherwise, SQL Server isn’t terribly good at excluding those partitions, and it ends up scanning all of the partitions.
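Here's what I mean about the WHERE clause doing the work. A minimal sketch, assuming a hypothetical dbo.SalesHistory table partitioned by OrderDate:

```sql
-- Good: the predicate is a plain range on the partitioning column,
-- so the optimizer can eliminate the archival partitions entirely.
SELECT SUM(Amount)
FROM dbo.SalesHistory
WHERE OrderDate >= '2015-01-01' AND OrderDate < '2015-04-01';

-- Bad: wrapping the partitioning column in a function hides the range
-- from the optimizer, and SQL Server scans every partition - hot and
-- cold alike.
DECLARE @d DATETIME = GETDATE();
SELECT SUM(Amount)
FROM dbo.SalesHistory
WHERE DATEADD(DAY, 90, OrderDate) >= @d;
```

Now imagine that second query when the cold partitions live in Azure instead of on local disk.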

Microsoft sounds like they're moving the cold table partitions up into Azure storage. That had damn well better not be the only portion of the solution they deliver – they've gotta deliver better partition elimination too. If not, this feature is going to be legendarily bad, because scanning partitions up in Azure will be even worse than scanning them locally. Questions around a feature like this include:


  • How will backups work?
  • How will restores work?
  • How will DBCCs work?
  • How will data be moved from on-premises to Azure?
  • Can multiple servers attach to the Azure partitions? (For example, if you regularly back up production and restore it over to development, how will that work?)

“Temporal Database: Track historical changes” – Whenever Jeremiah shows me an Oracle feature, I get so jealous. Oracle’s got this really cool feature where you can query a table to see what it looked like at 5PM yesterday, or at the start of the last financial quarter. It ain’t free/cheap – after all, you have to store all the historical data to make those queries work. For scenarios that need that level of archival detail, though, that’s killer.
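For reference, the Oracle feature that triggers the jealousy is Flashback Query, and it looks like this:

```sql
-- Oracle Flashback Query: read the table as it existed at a point
-- in time, without restoring anything.
SELECT *
FROM Employees
AS OF TIMESTAMP TO_TIMESTAMP('2015-05-04 17:00:00',
                             'YYYY-MM-DD HH24:MI:SS')
WHERE LastName = 'Smith';
```

Presumably a SQL Server temporal feature would offer something equivalent – and pay the same storage bill for keeping the history around.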

“Row Level Security: Apply fine-grained access control to table rows based on user rights.” – I blogged about this when it was unveiled in Azure SQL Database.

“Dynamic Data Masking: Real-time obfuscation of data to prevent unauthorized access.” – We’ve got clients who work with payroll, health, and financial data. They need to restore their production databases into their development and QA environments, but it’s critical that the developers not see personally identifiable information like social security numbers. These numbers are stored in the database. So what’s a DBA to do? Currently, the solution involves scrambling the data as soon as the restore completes, but that’s painful.
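If this works the way the name suggests, I'd expect column-level masking rules declared in the table definition, with a permission that lets privileged users see through the mask. The function syntax and names below are my guesses, not anything from the data sheet:

```sql
-- Speculative sketch: declare a mask on each sensitive column; anyone
-- without the (hypothetical) UNMASK permission sees the masked value.
CREATE TABLE dbo.Employees (
    EmployeeId INT PRIMARY KEY,
    SSN   CHAR(11)      MASKED WITH (FUNCTION = 'partial(0, "XXX-XX-", 4)'),
    Email NVARCHAR(100) MASKED WITH (FUNCTION = 'email()')
);

-- Payroll admins get to see the real data:
GRANT UNMASK TO PayrollAdmins;
```

Something like that would let you restore production into dev and have the developers see XXX-XX-6789 instead of real social security numbers – no post-restore scramble job required.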

“Enhanced Database Caching: Cache data with automatic, multiple TempDB files per instance in multi-core environments” – First off, I would just like to point out that Microsoft uses my preferred capitalization of TempDB. Take that, rest of Brent Ozar Unlimited®. But back to the issue at hand – TempDB (BOOM!) configuration has long been a sore spot for database administrators. TempDB defaults to just one data file, no matter how many cores you have. In a perfect world, Microsoft would fix the SGAM and PFS page contention problem. Or they could just duct tape it by automatically adding multiple files when necessary. Sounds like they chose the latter. I’m okay with that.
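For comparison, here's the duct tape DBAs apply by hand today – presumably 2016 would just do something like this for you automatically. Drive letter and sizes are illustrative:

```sql
-- The manual fix today: add TempDB data files (conventionally one per
-- core, up to 8), all the same size, so allocations round-robin across
-- them and spread out the SGAM/PFS page contention.
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 8GB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 8GB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 8GB);
```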

“Enterprise-grade Analysis Services” – wait, what was it before?

“PowerBI with on-premises data: New interactive query with Analysis Services. Customer data stays behind your firewall.” – PowerBI looks great, but it’s for Office365 customers only right now. A lot of our clients have smiled and nodded past that. The cloud is indeed the future, and most of us believe most of the data will end up there eventually, but Microsoft’s acknowledging the reality that for most of the market, it’s still the future, not the present.

“Easy migration of on-premises SQL Server: Simple point and click migration to Azure.” Migrating databases isn’t the problem. Moving all of the application servers and related services up to the cloud, that’s where the hard work is.

And now it’s off to the afternoon SQL Server keynote here at Ignite! Wow, this is really turning out to be the conference to be at if you want to see the future of SQL Server.


Comments

  • Regarding encryption – this comes from MSR. A few years back new crypto algorithms were discovered that allow operations on encrypted data (such as x + 1) without decrypting. This might be it.

    Reply
  • Mark Stacey
    May 5, 2015 2:44 am

    Could it not just be that the decryption happens in the Sql Client drivers that the application uses?

    Reply
  • George Walkey
    May 5, 2015 6:17 am

    Enhanced SSIS Designer that supports previous versions of SSIS….nice
    Conor’s Query Store…nice
    Tableau killer analytics….we hope? nice

    MS Needs to decide which parts of this product can be released quicker than every few years.

    And of course, how much of this is available in Standard version?
    Or Make STD have ALL the features, but only 2 cores…or something

    Reply
  • Chris Nelson
    May 5, 2015 9:41 pm

    Sounds like all the cool tools are headed for the Enterprise Edition.

    Reply
  • Harry Larsick
    May 6, 2015 10:04 am

    I was hoping SQL 2016 would allow us to have multiple TempDB databases per instance and also have the ability to restore a select set of objects (tables, indexed views, stored procedures, etc.) from a backup .. oh well

    Reply
    • Harry – have you filed requests at http://connect.microsoft.com for those?

      Reply
    • Hey Harry, I had that discussion at SQLBits in 2010 during a SQLCat session, and very much got the impression that it wasn’t something that was particularly high on the list. You, they (and most people) talk about having multiple TempDBs per instance, but I told them that the Oracle model of a temporary tablespace per database made far more sense (to me) from a scalability and performance perspective. Either way, I’m not convinced we will see either implementation anytime soon – in fact, having just scanned through that session now to listen to their responses, it’s fairly clear to me that it probably won’t be changed. I think the argument (and it is a good one) is that the balance between manageability and performance is a strong consideration, and that in a mission-critical system you would look to reduce TempDB use as far as you can anyway. In those systems you would usually throw big bucks at them and tend towards a 1:1:1:1 ratio between physical server, SQL instance, number of user databases, and (of course) TempDB.

      Reply
  • Since they went out of their way to say “mission critical” so often, and had already designated Standard as not mission critical, I have to assume that “mission critical” is code for a feature being Enterprise-only, and we won’t see any/much of this in Standard. Hope they at least add JSON to Standard.

    Reply
  • It is ridiculous to think that there is going to be a SQL Server 2016 when in fact most businesses are still running 2008 R2.
    Heck, I could not persuade a vendor to upgrade to 2014; they would only certify on 2012, so my entire operation is hindered because a) Microsoft is changing too quickly for the business world to keep up, and b) the vendors supplying applications are NOT certifying them for the versions that Microsoft is pitching.
    Conclusion: MS needs to pull its head out of the dark place and connect with the larger business community that is not going along with their upgrades.

    Reply
    • Don – you’re right. And in fact, it’s ridiculous to think that there’s going to be a 2016 Honda Accord when most drivers don’t even have a 2012 car. Honda should pull its head out of its butt and start reacting to the market.

      Reply
        • The difference is that the new model comes to market after at least 5-6-7 years of selling the old model, and most of the time it contains really new features – not some facelift like MS is doing every two years with the latest SQL Servers, trying to convince everyone that they really did hard work on the new version.
          As for the Honda Accord, the current model has been on the market since 2007, and Honda hasn’t produced a new one.

        Reply
        • Gundi – Microsoft has been bringing out new versions of SQL Server every 2 years (not 5-6-7) and the features haven’t been facelifting. Hekaton, columnstore indexes, AlwaysOn AvailabilityGroups – these are pretty dramatic changes. (Not saying they’re all winners, but they’re dramatic changes.)

          Reply
          • But Brent,
            1. Honda does not “end support” for the 2006 Accord.
            2. Honda does not prevent me from going to the used market to buy an additional 2006 Accord.

          • Ray – actually, car manufacturers do indeed stop making parts for their older cars, thereby ending support. Ask a 1960s-70s Porsche aficionado about the crazy expense involved in getting a replacement dashboard, for example.

    • Why do you feel you need to keep up? Just take the versions you need for either features, performance or support. If a new version offers none of these why move to it?

      Reply
    • Jeremie Grund
      May 11, 2015 11:14 am

      I had an ISV app that checked the specific version of SQL Server on service start. I tried to install SP2 for SQL 2012 and the app wouldn’t start anymore.

      It can always be worse.

      Reply
  • Andrea Caldarone
    May 11, 2015 6:51 am

    Once again, the really needed features are not here:
    Microsoft has had something similar to a clustered file system since 2010 and RDMA since 2012, and Oracle has had RAC (which is based on a clustered file system and RDMA) for many years, but we have nothing similar: no real horizontal scalability, no transparent application failover (if I have to patch my instance, I have to do it on Sunday or at night!).
    We need the ability to restore a single object (a table, for example) from a backup – DB2 has had this feature since the USSR was still around!
    We need the option to back up data only (excluding nonclustered indexes) without having to mess with filegroups.

    Reply
    • Andrea – so, I gotta ask the obvious question: if you really need those features, why haven’t you switched?

      Reply
      • Andrea Caldarone
        May 14, 2015 3:40 am

        Hello Brent,
        you may have Oracle and SQL Server in your company, but you can’t have only Oracle, because a lot of products (for example, SharePoint) are not Oracle-aware, so you are obliged to run some SQL Server.
        I’ve been a big SQL Server fan for 15 years, and I hope that MS will fill the gap with Oracle with real “mission critical” features like the ones I typed above.

        Reply
  • Any insight in to SAP HANA, Brent? We’re BusinessObjects customers for BI and SAP is pushing their HANA product as if it’s going to be a total game changer – so fast that separate data warehouses will be a thing of the past & you’ll just run reporting right off your transactional systems with views on top to translate the structures to something better suited to reporting. I was just about sold on trying it out when I saw the SQL 2016 announcement & thought maybe their in-memory stuff would be just as good & I’d rather stick with SQL if I can… Curious if you have any insights!

    Reply
  • Do you think that In-Memory OLTP will be included in Sql Server 2016 Standard Edition this time?

    Reply
    • Mike – unfortunately it’s just way too early to tell on licensing features.

      Reply
    • (I should add, though, that you need to check the documentation – in-memory OLTP requires 2x the table size for memory. So if you’ve got a 20GB table, you need 40GB of memory just for the in-memory OLTP objects. That’s not a great idea to use on Standard Edition where memory is limited to begin with.)

      Reply
  • Brent – I’m more of an app dev and didn’t even think of using In-Memory OLTP for anything other than session data. I tried the SQL Server 2014 In-Memory OLTP session state driver and it worked awesome.

    Reply
  • Mike – that’s a rather expensive way to save session state data. Maybe consider alternative free providers: https://www.brentozar.com/archive/2013/10/solving-session-state-slowndownsvideo/

    Reply
    • Great video. Yes, before the OLTP idea I originally tried to use a Redis DB on the production server. Under heavy load I would get random ‘eval’ errors. What’s your experience with Redis in production? In-proc is still tough to beat for performance. I’ve had trouble with Redis both on my dev machine and my production machine. I guess this is an option: https://msdn.microsoft.com/en-us/library/azure/dn793612.aspx

      Reply
  • Is there any word on SSAS still being limited to windows authentication?

    Reply
  • Any word on licensing changes? They made major changes to MSDN subscriptions but waiting to hear on SQL.

    Reply
  • Toby Ovod-Everett
    June 5, 2015 5:37 pm

    Regarding Always Encrypted and Range Queries, wouldn’t this also affect equality queries? If they’re doing encryption correctly, the data should be salted. If the data is salted, then the encrypted value stored in the database for ‘Smith’ is going to differ from row to row. Otherwise, I can find all the ‘Smith’ rows by finding one row I know to be ‘Smith’ and looking for the others that have the same encrypted value. As a result, since the database server doesn’t know how to decrypt the data, an index is next to useless and finding ‘Smith’ is going to require shipping all of the data over the wire to be decrypted on the client.

    This same issue means you can’t encrypt primary key and foreign key columns, you can’t apply a unique index to an encrypted field, etc., etc. It also probably means no stored procedures that accept query parameters that filter on encrypted fields, etc.

    Either that or there’s no salting going on. That has advantages from the database engine perspective, but also means that an attacker would have a much easier time identifying data using dictionary attacks if they can get their hands on the data rows.

    Reply
  • Michael Keleher
    July 12, 2015 7:03 am

    I never decrypt data to replicate it. I just recreate the keys on the subscriber and transfer the encrypted varbinary column. Assuming that Microsoft provides support for backup and restore of the CEK, this method should work with Always Encrypted. I would also like to know if the CEK is an asymmetric or symmetric key so that I can understand whether I can remove decryption capabilities in some circumstances by removing the private key, if applicable.

    Also, I am concerned that in their examples and documentation, Microsoft is promoting insecure advice on deterministic vs random encryption settings. Deterministic is safe to use when the messages are random, like HR comments on an internal harassment complaint, and never on a column like SSN, which only have 1 billion discrete possibilities, or any personal ID like drivers license or military ID or credit card number. If deterministic encryption is used on SSN, it opens the possibility of creating a hash table at the client. Keep in mind that many users are local administrators on their systems. They can access the certificate store. Does that mean they can get the private key and decrypt the CEK, since the CEK is stored in system tables in the user database?

    Reply
  • […] Looking at a new feature always raises several concerns. Microsoft Certified Master and Data Platform MVP Brent Ozar (blog | Twitter) raised some valuable concerns about the Stretch Database feature in this blog post. […]

    Reply
