Microsoft Ignite SQL Server Keynote Liveblog #MSIgnite


Yes, there are actually multiple keynotes here at Ignite, Microsoft’s new infrastructure conference. With so many products (Windows, O365, SharePoint, Dynamics, mobile, Azure, etc.) and so many attendees (over 20K), they had to break things up a little. This morning, CEO Satya Nadella and friends delivered the overall keynote. This afternoon, Microsofties are delivering a handful of different keynotes by topic, and I’m sitting in the SQL Server one.

Post-Event Summary: Microsoft demoed a few of the features discussed in the SQL Server 2016 data sheet:

  • Always Encrypted – data is encrypted by the client drivers, and unencrypted data is never seen by SQL Server (or SSMS, or Profiler, or XE)
  • Operational Analytics – means you can put an in-memory columnstore index atop your in-memory OLTP tables, and do reporting against Hekaton transactional data
  • R analytical workloads will live inside the SQL Server engine just like Hekaton does
  • Support for Windows Server 2016 and up to 12TB of RAM (which is great, given all the in-memory features)
  • The newly acquired DataZen mobile-friendly reports will ship in the box with SQL Server
  • Stretch tables are like table partitioning, but some of the partitions will live in Azure, while the hot/current data lives on-premises in your SQL Server

My Liveblog Archive

Here are my notes from the session, oldest to newest:

3:15PM – Shawn Bice, Engineering Partner Director, taking the stage to talk about SQL Server 2016 and Azure SQL Data Warehouse.

3:18 – “In this new social environment, people are tweeting.” YOLO.

3:20 – Paraphrasing Shawn: “In the cloud, we want an engineer to come to work, have an idea, and fail as fast as possible. You need to figure out what works and what doesn’t. In the boxed product, you can’t do that. We can try all kinds of things in the cloud, figure out what works, and then put that stuff into the boxed product.”

3:22 – Talking about how the cloud has made things more reliable. I’d be a lot more impressed with this if they hadn’t just totally borked SQL 2014 SP1.

3:23 – “By the time you get SQL Server 2016, we’ll have been running it at cloud scale for many, many months.”

3:24 – The big 3 pillars for this release: mission critical performance, deeper insights across data, and hyperscale cloud.

3:26 – You have a transactional system, then you get the data out via ETL and put it in a data warehouse. It’s delayed by 2-24 hours. If you’re doing your fraud detection in the data warehouse, you’re detecting things a day too late. How can we bring both rows (Hekaton) and columns (ColumnStore) together faster?

3:27 – Rohan Kumar onstage to demo applying an in-memory columnstore index to an in-memory OLTP table. Instant applause just at the explanation of what’s about to happen. Shawn: “Alright, we’re off to a good start!”

3:30 – “Close to zero impact on your OLTP transactional systems.” I know some people will go, “wait, that means there’s an impact!” Well, you never get something for nothing, and with today’s hardware, it’s pretty easy to buy faster gear to pay for eliminating an ETL process altogether.
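To picture what the demo was doing: a hedged sketch of the rough shape, based on the CTP2-era feature description (table and column names are made up, and the exact syntax may well change before RTM) – a memory-optimized Hekaton table with a columnstore index declared right on it:

```sql
-- Hypothetical sketch: an in-memory OLTP table with an in-memory
-- columnstore index on top, so analytics queries scan columns
-- while transactions keep hitting the row store.
CREATE TABLE dbo.Orders
(
    OrderID     INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    CustomerID  INT NOT NULL,
    OrderDate   DATETIME2 NOT NULL,
    Amount      MONEY NOT NULL,
    INDEX ccsi_Orders CLUSTERED COLUMNSTORE  -- the "operational analytics" piece
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The point being: one table, no ETL – reporting queries use the columnstore, OLTP uses Hekaton’s row format.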

3:33 – For the record, we can’t see the build number onscreen, but his instance name included CTP2X.

3:34 – On to Always Encrypted. Interesting that there’s a space between these words, unlike AlwaysOn. “Always Encrypted is literally about always encrypting the data.” Requires an enhanced ADO.NET library that uses a column encryption key.

3:36 – Rohan back onstage to show Always Encrypted with…Profiler. I LOVE MY PROFILER. It will never die.

3:42 – To migrate your existing data into Always Encrypted, they’re suggesting using SSIS to pull the table down, encrypt it on the client side, and then push it back into a new table. Not exactly seamless, but it points out that the SQL Server engine is simply not going to be involved in the encryption/decryption process. You’re not going to be directly accessing this data via T-SQL queries in stored procedures – it’s gonna be encrypted there.

3:43 – Support for Windows Server 2016 with up to 12TB of memory. I’m gonna go out on a limb and guess that’s not in SQL Server Standard Edition.

3:44 – PolyBase – query relational and non-relational data with T-SQL by introducing the concept of external tables. “We did it a few years ago but it was only available in our APS/PDW, but it’ll be available to you in the box in SQL Server 2016.”

3:45 – About the acquisition of Revolution Analytics – we’re adding a 4th workload to SQL Server. When we added in-memory OLTP, we didn’t ask you to build separate servers. It’s a feature, you just turn it on. R analytics will be the same thing, hosted in the SQL Server process.

3:48 – Yvonne Haarloev coming onstage to demo DataZen.

3:52 – Looks like the DataZen demos are happening on a Surface. Was rather hoping the mobile strategy demos would be done on, uh, a mobile device.

3:55 – Yay, mobile dashboard demos! The downside is that you have to design a dashboard for each device’s form factor – they’re not responsive design. Still, way better than what we had. But like SSIS, SSAS, and SSRS, this is yet another designer and a totally different code base that BI pros will have to deal with. No cohesive story here – but you just can’t expect one with the recency of that acquisition.

3:56 – “We are very committed to Advanced Analytics and bringing it into the engine. If you found in-memory OLTP to be favorable…” If you did, I would actually love to talk to you. Seriously. Talk to me.

3:59 – Stretch tables – basically, table partitioning that splits the table between on-premises SQL Server and Azure storage. It will support Always Encrypted and Row Level Security. The entire table is online and remains queryable from on-premises apps. Transparent to applications. (I’m just typing from the slide, no opinion here.)

4:00 – Stretch tables are activated by sp_configure ‘remote data archive’, 1 – it’s not turned on by default. From there, you can enable a specific database to stretch to Azure using a wizard that starts with a login to Azure.

4:03 – As part of the Azure sign-in process, Rohan got a call from Microsoft because he has two-factor authentication turned on. He had to put in his PIN, and then the SSMS wizard kept going. Spontaneous applause from the audience – great to see enthusiasm for security features.

4:08 – The stretch tables demo failed due to firewall configs. Oops.

4:13 – Demoing a restore of a database that has stretch tables. The Azure connection metadata is apparently stored in the SQL Server database, so it just gets attached again after the local part of the data is restored.

4:16 – Now switching to Azure SQL Data Warehouse. It has separate storage and compute – you don’t need compute power all the time for data warehouses, like during big batch jobs. You only have to pay for that compute power when you need it for the jobs. It scales elastically, so you can crank up big recommendation horsepower during peak holiday shopping seasons, for example.

4:21 – Rohan back onstage for the first public demo ever of Azure Data Warehouse.

4:27 – Discussing what work retailers have to do when they have discount battles. What would it really cost us to match another company’s discount? Could we profitably pull that off? Power BI helps answer that question with large amounts of compute quickly.

4:28 – Like this morning, they’re talking up the differentiators between Azure SQL Data Warehouse and Amazon Redshift.

4:29 – Shawn: “This is all real, none of this is faked out.” “I’m more excited than ever.” Yeah, there’s a lot of really good stuff going on here this year. And that’s a wrap!


5 Comments

  • Jason Melrose
    May 5, 2015 2:25 pm

    Are there dependencies on Server 2016 technologies for SQL 2016 to take advantage of these new features? Especially Always Encrypted and Round Robin (HA).

    • Jason – at the moment, I haven’t heard anything about Windows dependencies, but I wouldn’t be surprised if both required new client libraries.

  • Sanford Olson
    May 5, 2015 5:29 pm

    Regarding Always Encrypted: How would reporting solutions work (Crystal Reports, SSRS) if only the application (client ADO.NET drivers) can decrypt the data? What about column sizes in the data engine – would a 9-digit SSN still be 9 characters after encryption?

    • Nope… the column sizes are larger, table scans are wider / larger – oh, and you can’t search on any of the encrypted data from ANYWHERE other than the client that can decrypt it. Oh, and how is that key protected – does only one person know it?

      I still have A LOT of unanswered questions here. It will be interesting to *really* see it. I really don’t expect it on something as vital as social security number but I guess that depends on the data. If it’s an EMPLOYEE SSN then we need to be able to search it, etc. But, if it’s SSN for some web-app that’s allowing a user to store their SSN to give it to other apps then maybe this makes sense.

      I think this would have made more sense for something like patient data – accessible only to the doctor, on the client. That’s highly sensitive data that doesn’t need searching capabilities.

      Anyway, it’s still interesting. But, yes, there are A LOT of technical aspects that they just glossed over.

      Great write-up Brent. THANKS!

      • Howdy ma’am! Yeah, they demoed the encryption/decryption pieces in the security roadmap session today.

        Even patient data is kinda tricky. For example, I’ve worked with a teaching hospital that lets patients opt in to sharing their data for clinical studies and patient satisfaction studies. For example, does patient satisfaction and healing rate correlate to their condition, their treatment, the number of times they’ve seen a doctor, the medicine they received, etc. That type of analysis is done in the data warehouse, which means if they started encrypting all of these attributes, they’d have a devil of a time encrypting all the past data – since the SQL 2016 plan is to dump all the to-be-encrypted tables out of SQL Server, then encrypt them on the way back in with SSIS. Doesn’t bode well for that kind of work.

        I do think it makes sense for some new-build apps, particularly ISV apps where there’s a limited amount of code that touches the tables. But v1 is going to be a challenging sell.

