Blog

Announcing SQLServerUpdates.com

After the SQL 2014 SP1 mess, I thought, “How are people with a real job supposed to keep up with updates?”

Go try to find the most recent SQL Server service packs and cumulative updates on Microsoft.com. It’s miserable – they’re scattered all over the place. Eventually, a lot of us started relying on SQLServerBuilds.blogspot.com, which is cool, but I have no idea who maintains it, and there’s no way to subscribe to updates there whenever a new patch comes out.

And then just try to figure out when support for a version ends – that’s even worse.

So I built SQLServerUpdates.com:

SQLServerUpdates.com

Does what it says on the tin, and you can subscribe via RSS or email as well. Hope that helps!

Upcoming Webcast: Faster Virtual SQL Servers

You want your SQL Server VM to go as fast as possible, but you don’t have a lot of time or budget money, and you’re not allowed to reinstall it or upgrade. Good news – we’ve got a webcast for you.

In part 1, my demo-focused session will explain the VM-friendly performance features of SQL Server 2012, 2014, and even the next version, 2016.

Webcast sponsored by Veeam

You’ll learn how to measure a running application’s performance, and then simply by turning a few knobs and switches, you’ll see how performance improves. Microsoft Certified Master Brent Ozar will explain which features are right for your workloads, plus give you resources you can use when you get back to the office to go faster and learn more.

In part 2, Veeam will explain how to get fast transaction-level recovery of SQL databases, including agentless transaction log backup and replay, so you can restore your SQL databases to a precise point in time and achieve low recovery time and point objectives (RTPO™). Learn more at Veeam Explorers™.

This session is for sysadmins and DBAs running production workloads in VMware, Hyper-V, and Xen, whose end users are unhappy with performance.

Register now.

Watch Brent Tune Servers [Video] #MSIgnite

Last week at Microsoft Ignite 2015 in Chicago, I demonstrated how to tune a few SQL Server workloads with my favorite process:

  • Measure how fast the SQL Server is going
  • Check its bottlenecks
  • Apply some easy-to-tweak configuration settings and measure the difference

There’s no camera in this hour-long video, just a screen capture, so you’ll have to imagine my jazz hands.

You can grab the demo scripts here, and grab our related performance tuning resources here.

Databases Need a Kosher Certification

As I picked up a nutritious breakfast from my favorite bakery (love ya, Magnolia), I noticed a sign in the window saying they were certified Kosher. As I usually do, I thought about how that kind of thing applied to databases.

Kosher-certification

This database is totally free of heaps.

The Star-K symbol on food products or restaurants means that the ingredients and processes have been inspected by a rabbi or a kosher certification company. The way I understand it, given my extensive scanning of the Wikipedia entry on kosher certification agencies, a rabbinic field rep closely monitors the restaurant’s ingredient acquisition and food preparation. He makes sure everything matches up with kosher standards, and then puts his reputation on the line to vouch for it.

I’d love to have that for databases.

What if we had a “DBA Approved” stamp that covered things like data model design, index tuning processes, good query writing, and backups that matched the company’s RPO and RTO goals?

There could be a centralized list of people who were approved to give out that certification, and they’d get regular ongoing training to make sure they were qualified to put their stamp of approval on a database.

That’d be the kind of certification I could actually get behind.

The closest I’ve seen so far is kCura’s Best in Service certification for Relativity. It’s not just about DBAs passing a test – it assesses a customer’s hosting environment and actually posts the scores live every week at Trust.kCura.com so that end users know which environments are the safest bets to host their e-discovery data. I like this approach a lot, and I wish Microsoft would take a similar approach with SQL Server.

#MSIgnite SQL Server Unplugged Q&A Summary

This afternoon, we had one of my favorite sessions: the off-the-cuff Q&A with some of Microsoft’s best and brightest. I probably missed somebody, but for sure we had Conor Cunningham, Joe Yong, Jos de Bruijn, Kevin Farlee, Luis Carlos Varga Herring, Mike Weiner, Sunil Agarwal, and Tiffany Wissner.

Hubba hubba.

SQL Server Unplugged

These sessions are fun to attend because sometimes you can get hints and details that weren’t originally scheduled to go public.

I didn’t transcribe all of the questions – often the answers are well-known amongst smart people like you, dear reader. You’re attractive too, but I digress.

Q: How do you control engine versioning in Azure SQL Database?

Conor answered: SQL Azure is platform-as-a-service (PaaS). It gets you automatic patching. We build SQL Azure and SQL Server off the same mainline branch of code, and we ship first in SQL Azure. We can turn them on for limited sets of accounts first, then turn it on more broadly in public preview mode, and then eventually make it generally available. We have a whole fabric layer used to manage our versions and deploy them as we need it. You’re getting the next version of SQL Server before it ships to the public.

Followup Q: Can you use a database compatibility level to control the version you’re on?

Conor answered: In January, we launched the new round of v12 servers. Before v12, we didn’t expose compatibility level settings. Now, we’re starting to expose the 130 compatibility level, which will also be the compatibility level in SQL Server 2016. As we get more features aligned to the boxed product, it’ll make more sense.

Q: When I combine AlwaysOn AGs with replication, how do I manage jobs?

Luis answered: you have to manage those manually by copying the jobs to all servers.

Q: Is Always Encrypted going to work with Windows logins?

Conor answered: in the first version, no, but at some point, yes.

Q: What’s the future around DACPACs and deploying a whole app?

Conor answered: Windows Azure has a deployment and servicing model that they’ve been working on, and we’ve been talking about how we can make it work with DACPACs. We don’t have anything to announce today. I’ll be honest with you – there’s a lot of problems with doing this correctly, especially with complex deployments like AlwaysOn AGs.

Q: Besides a single SSDT, are there any other investments in developer tooling or ALM?

Conor answered: We haven’t been doing a ton of major investments there. Anything you’re interested in? (Data generation and automated builds.) There’s some stuff we’ve been working on with the Visual Studio team, but they’re driving it. We don’t have anything to announce in the SQL Server 2016 release time frame though.

Q: I have SQL Server sprawl. How are you helping customers consolidate and move to Azure? 

Conor answered with a huge smile: We’ve got the MAPS toolkit, but there are a couple of things we’re working hard on that we can’t share yet.

Luis added: We’re working on making it easier to look at your current database and your AlwaysOn AG configuration, and recommend a set of VMs with the right power in Azure.

Q: In SQL Server 2014, there are a lot of limitations on in-memory OLTP. Is that still true in SQL 2016?

Kevin answered: In 2014, we targeted specific scenarios where we knew we could be successful, and we nailed those. In this release, we’re working on getting it as broad as we can. SQL 2016 will have foreign key constraints between in-memory tables, but not regular tables.

Q: We’re looking at rolling out MDS, but we’re not happy with the current UI. Is Microsoft improving that?

Mike: I’ll have to defer that. (There are no BI experts in this particular session.)

Q: What’s enhanced about the Backup to Azure, and can we get an easier cleanup in maintenance plans?

Luis answered: We deployed it in SQL 2014, and we heard your feedback. We’re now supporting cheaper blob storage and striping (beyond the 1TB limit, up to 32TB). Backup will be parallelized across those stripes, so it’ll be even faster. That will be out in the next CTP. For automated cleanup, consider the managed backup feature rather than maintenance plans.

Q: How will licensing work in SQL Server 2016?

Tiffany answered: We haven’t made those decisions at this point in time. We just went through some significant changes in licensing, and most of our customers are still absorbing that change.

Conor added: You’re asking, is it possible to put all features in all editions? We don’t have anything to announce today. If you look at SQL Azure, it has most of the same features in all editions, and we vary pricing by resources, and that’s something to think about. But we don’t have anything to announce today.

The end – at least of my summary. I had to leave a little early to meet Erika, Jeremiah, and Kendra for dinner. (Gotta love it when conferences happen in my home town.)

Steal This Slide Deck: What I Learned About SQL Server at #MSIgnite 2015

If you attended Microsoft Ignite in Chicago this week, you’re probably barely keeping your eyes open at this point. There’s been so many after-hours get-togethers, and it’s been tough to get any spare time between sessions. (Jeez, these things are spread out all over McCormick.)

One of the things I love about Ignite is that the videos and presentations are available during the conference itself. That makes it easy to catch up on the sessions I wasn’t able to attend, and then I can go talk to the presenters with my questions the next day.

I realized I should turn my session notes into a slide deck, and give it away. I’ll update it a couple more times this week as more sessions finish.

Attendees – you can go back to the office, and run through this deck as a lunch-and-learn with your coworkers who couldn’t go. This way, your boss will be more likely to send you back next year because you were such a good note-taker.

User group leaders – you can use this as a quick intro for your chapter, before your next meeting, to get ’em up to speed fast on SQL Server 2016.

This deck is licensed in the public domain, meaning you can do anything you want with it. Enjoy!

How to Fake Load Tests with SQLQueryStress

Load testing – real, serious load testing – is hard.

In a perfect world, you want to exactly simulate the kinds of queries that the end users will be throwing at your SQL Server. However, in the words of a timeless philosopher, ain’t nobody got time for that.

Instead, let’s use Adam Machanic’s neato free tool SQLQueryStress to fake it. This is an oldie but goldie app that will run any one query thousands of times (or more) from dozens of sessions (or more), all from the comfort of your desktop:

SQLQueryStress in action

After you install it on your desktop (or a VM in the data center, whatever, just not the SQL Server you’re trying to load test), click the Database button to set up your connection string. In this instance, I’m pointing it at one of my Availability Groups, using Windows authentication. As soon as I set the server and auth methods, the database list gets populated so I can set my default database:

Setting up your connection string

Then it’s time to pick the query to run.

But you want to test more than one query at a time, right? You want to test a variety of different queries running all at once.

Rather than calling a single query, call a “shell” stored procedure that runs other queries. Here’s how it works:

  1. Declare an integer, and set it to a random number
  2. Based on the mod of that number, run a stored procedure (for example, if it’s divisible by 3, run sp_C; else if it’s divisible by 2, run sp_B; else run sp_A)

Since SQLQueryStress will be calling this stored proc dozens of times at once, you’ll end up with a variety of different queries running simultaneously.

Let’s get a little more complex. Here’s what mine looks like for one of my query tuning demos:

CREATE PROCEDURE [dbo].[GetShell] WITH RECOMPILE
AS
SET NOCOUNT ON
DECLARE @Id INT
SELECT @Id = CAST(RAND() * 10000000 AS INT)
IF @Id % 7 = 0
    EXEC dbo.Refresh_ReportByVoteType
ELSE IF @Id % 6 = 0
    EXEC dbo.Refresh_ReportByBadge
ELSE IF @Id % 5 = 0
    EXEC dbo.GetBadgesDetails @Id
ELSE IF @Id % 4 = 0
    EXEC dbo.GetCommentsDetails @Id
ELSE IF @Id % 3 = 0
    EXEC dbo.GetPostsDetails @Id
ELSE IF @Id % 2 = 0 AND @@SPID % 2 = 0
    EXEC dbo.GetUsersDetails @Id
ELSE
    EXEC dbo.GetVotesDetails @Id
GO

WITH RECOMPILE – I use this because I don’t want the GetShell stored procedure to show up in my execution plan stats. The work involved with building this execution plan isn’t significant, and it won’t be the largest part of my workload. (Oh, I wish it were.) All of the stored procs it calls will still show up in the plan cache, though.

@Id parameter – note that some of the stored procs take an @Id. For example, the stored proc GetBadgesDetails takes @Id and uses it to look up a particular badge number’s details. This is handy because my stored procedures don’t each need their own random number generator – they’re designed to mimic more real-world stored procs that take input values. If you want to get really fancy and test procs with lots of parameters, you’ll need to generate those in GetShell. You don’t want to hard-code the same values, because then that relevant table data will end up getting cached in memory.
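
If you do go the fancy route, the parameter generation inside GetShell might look something like this – a sketch only, since dbo.GetUserActivity and its parameters are made up for illustration:

DECLARE @UserId INT = CAST(RAND() * 10000000 AS INT);
DECLARE @StartDate DATE = DATEADD(DAY, -CAST(RAND() * 365 AS INT), GETDATE());
DECLARE @TopN INT = 1 + CAST(RAND() * 99 AS INT);

-- Hypothetical proc: pass the freshly generated values along
EXEC dbo.GetUserActivity @UserId = @UserId, @StartDate = @StartDate, @TopN = @TopN;

Each call to GetShell then exercises a different slice of the data, just like real users would.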

@@SPID – some of my workload queries simulate blocking. Due to the wonders of random number generation and very fast queries, if a blocking chain starts on any two sessions, then eventually the rest of the sessions will also call the stored proc that’s susceptible to blocking. After a few seconds, the only symptom my server will have is blocking – and that’s no fun. Instead, by using @@SPID % 2 before calling GetVotesDetails (which gets blocked in my scenario), I make sure that no more than half of my sessions will get blocked at once.

The end result is beautiful – well, at least if you want something that looks like a production server getting hammered with all kinds of different queries:

AAAAAHHHH, THE SERVER'S ON FIRE

I love using this quick-load-generation technique in our performance tuning classes. It’s a great way to show a server that looks like home, and gets students to figure out which queries are causing problems – and which ones are just harmless background noise.

SQL Server 2016 Security Roadmap Session Notes #MSIgnite

Packed room for a security session at 9AM. Think about that for a second.

Packed room for a security session at 9AM. Think about that for a second.

Here’s my notes from this morning’s SQL Server 2016 security session by Jakub Szymaszek and Joachim Hammer at Ignite 2015 in Chicago:

The first previews of SQL Server 2016 this summer will include the Always Encrypted, row-level security, and dynamic data masking features.

Dynamic Data Masking

Say you need to give developers access to your production database, but you’ve got personally identifiable data in there like social security number or credit card number. For example, say you want to refresh your development server by restoring the production backups onto it.

The old way was to immediately scramble (or “mask”) the data after you restore the backup. That’s time-intensive and hard work, ripping through all of that data on disk. Generates a lot of writes and transaction log activity.

The new way, Dynamic Data Masking, will mask the data on the fly as your queries run. It’ll be available in both SQL Server 2016 and Azure SQL Database, but the demo was done in Azure SQL Database. (Very nice-looking UI for it, by the way.)

You define privileged logins, and everybody else gets the masked data. This is set at the login level, unlike row-level security (which is done via functions and application settings like context_info.) This does mean that web apps with connection pools using a single login for all database access won’t work as-is with dynamic masking.

For Azure SQL Database, only SQL logins are supported, not Azure Active Directory. For SQL Server 2016, both SQL logins and AD logins (and groups) will be supported.

Then you define masking rules – for a table, for a column, you can use a masking field format. Masking formats let you preserve data formats similar to the source, such as the current date/time for date/time fields, or xxx@xxx.com for email addresses, or you can build your own masking formats.

Dynamic masking does not affect your ability to query data – for example, if you select all users with a birthdate of 1975/06/06, the rows returned will match that date, but the birthdates shown will be masked.
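
As a rough sketch, defining a mask looked something like this in the preview (the table, column, and role names here are mine, and the syntax could change before release):

-- Illustrative only: mask the Email column for everyone who can't UNMASK
ALTER TABLE dbo.Users
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Privileged principals see the real data
GRANT UNMASK TO SupportManagers;

Everybody else who queries dbo.Users gets back xxx@xxx.com-style values without any application changes.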

Always Encrypted: Handling PII in the Database

Transparent Data Encryption (TDE) encrypts data at rest, before you write it to disk. That’s good, but the evil, nasty, corrupt database administrator can still query it and sell it to fund that beautiful classic Singer he’s been eyeing.

SQL Server 2016’s new Always Encrypted feature lets the developers encrypt and decrypt personally identifiable data (email, SSN, credit card number, etc) in the application, automatically, in ADO.NET. This way, only the developers get to sell the personally identifiable data. This also marks my last day as a DBA, because I’ll be learning C# to finance my car collection. But I digress.

The keys are never given to SQL Server. All encryption and decryption are done in the application. The “trust boundary” stops with ADO.NET – SQL Server is considered untrusted.

To implement it, the security admin (not necessarily the DBA) will:

  1. Generate a column encryption key (CEK) and a column master key (CMK)
  2. Encrypt the CEK and store it application-side (a certificate store, HSM, Azure Key Vault, etc.)
  3. Store the master key securely (under your bed)
  4. Hand the encrypted CEK to the DBA, who will upload it to the database (it’s inside the user database, so it will automatically propagate to AlwaysOn AG replicas)

The encrypted keys will be exposed in new system tables, plus in SSMS under the database’s Security tree, under Always Encrypted Keys.

There will be two kinds of encryption available in SQL Server 2016:

  • Randomized encryption: if you encrypt the same value (“12345”) twice, you’ll get two different encrypted values. Great for super-duper-security, but useless if you need to join between tables or do GROUP BYs.
  • Deterministic encryption: if you encrypt the same value multiple times, you’ll always get the same encrypted value. Less secure, but lets you do equality operations, WHERE clauses, SELECT DISTINCT, GROUP BY, etc.

If you try unsupported operations like range searches, arithmetic, pattern operations, etc., the query will fail. When they say unsupported, they mean the query simply won’t work, not that it will be slow.

When creating a table, there’s a new field-level parameter like this:

ENCRYPTED WITH (ENCRYPTION_TYPE = DETERMINISTIC, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256', COLUMN_ENCRYPTION_KEY = 'MYSUPERSEEKRET')

You get to pick the encryption type (deterministic or random) at the field level because in a single table, you might need deterministic for birthdate, and randomized for SSN. You have to know your workload and how your app queries the fields, since you can’t join/group/distinct the randomized ones.

Migrating to Always Encrypted, and Dealing with the Results

To encrypt your existing data, you have to build a new table, export the data out of SQL Server, and push the data back in. The encryption is simply not done in SQL Server, full stop – it’s only done in the client libraries. This means you’ll be looking at an outage to implement this in an existing database, and most likely using a tool like SSIS 2016, which will have the new ADO.NET driver and thus the encryption/decryption capabilities.

Once the data is encrypted, and you query it from an application that doesn’t have the encryption set up, the returned datatype will be a varbinary.

This is a really important concept, so I’ll put it another way:

When you run SELECT BirthDate FROM dbo.Users:

  • Run from an encryption-aware app with the keys: BirthDate will be a DATETIME
  • Run any other application: BirthDate will be a VARBINARY

Make sure you’re ready for that when you move to Always Encrypted.

Row-Level Security in SQL Server 2016

I blogged about RLS a few months back when it was announced for Azure SQL Database, and the same concepts apply here. Go read that first if you’re new to the concept, and I’ll only cover the updates and changes here.

RLS’s security policies have to use schema binding to make sure malicious users don’t drop the columns used by the filter predicate.

The security identifier in the demos uses CONTEXT_INFO(), which brings some security risks and limitations. From the stage, they acknowledged CONTEXT_INFO has been limiting in the past, but they’ll have announcements about that in the next couple of weeks. It’ll be more like an XML property bag.

Transparent Data Encryption in Azure SQL Database

Available on v12 servers, all SQL DB editions. “Security shouldn’t be a differentiator between editions – every edition of SQL Server should be secure.” Woohoo! Wonder if that will carry through to the boxed product.

Uses Intel’s AES-NI hardware acceleration, so it’s fast.

Azure manages your keys, so it’s a service-managed TDE. The development team got nicknamed the click-click-done team because implementation is so easy. (They showed a demo, and it’s just a GUI switch – turn it on, encryption happens, you’re done. Nice.) You can also do it with T-SQL in two lines – create the encryption key, and alter database set encryption on. You don’t get access to the keys – and you don’t need them, since you can’t restore an Azure SQL Database onto your own on-premise instances.
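
For comparison, here’s what that same pair of statements looks like in the boxed product today, where you do manage the certificate yourself (MyDatabase and MyServerCert are placeholders):

-- Boxed-product TDE: the encryption key is protected by a server certificate
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE MyServerCert;

ALTER DATABASE MyDatabase SET ENCRYPTION ON;

In Azure, the certificate and key management side of that disappears behind the service – hence the click-click-done nickname.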

The gotcha there is that you can’t rotate the keys yourself. They’ve heard that complaint, and they have a new version coming later this summer to help.

There’s a performance hit, but it’s hard to quantify because it depends on your workload. As with the boxed product, when you turn on TDE, TempDB is also encrypted.

Summary: Lots of Neat Security Stuff Coming.

It’s hard to get me excited about security, but there’s a lot of neat stuff happening here, and they’re features that real world users will appreciate. There’s implementation challenges and performance monitoring challenges across the board, but it looks like a heck of a v1.

SQL Server Version Detection

Every now and then, you need to figure out which version of SQL Server you’re using without knowing in advance. This might happen in a script (like sp_BlitzCache) or you might be using it in a migration.

Getting the SQL Server Version with @@VERSION

The first thing that comes to mind is @@VERSION. On my SQL Server 2014 RTM installation, this returns an ugly string. Unless you like parsing multiline strings in T-SQL, you’re going to be left using LIKE on a hideous string like this one:

Microsoft SQL Server 2014 - 12.0.2000.8 (X64) 
    Feb 20 2014 20:04:26 
    Copyright (c) Microsoft Corporation
    Developer Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)

Imagine if you needed to get the build number of SQL Server out of there to validate against a list of builds to see if you had the current version of SQL Server installed. Pulling the build out of the major.minor.build.revision string isn’t easy when you first have to pull that string out of a bigger string.

There’s a better way than mucking around in @@VERSION.

Getting the SQL Server Version with SERVERPROPERTY

The SERVERPROPERTY built-in function has a lot of great functionality. We’re only going to concern ourselves with the ProductVersion portion of SERVERPROPERTY. How do we use it?

SELECT CAST(SERVERPROPERTY('ProductVersion') as nvarchar(128))

That should return nothing but the major.minor.build.revision for the current version of SQL Server. In my case, this returns: 12.0.2000.8.

Rather than parse the values myself, I created a temporary table with computed columns:

CREATE TABLE #checkversion (
    version nvarchar(128),
    common_version AS SUBSTRING(version, 1, CHARINDEX('.', version) + 1 ),
    major AS PARSENAME(CONVERT(VARCHAR(32), version), 4),
    minor AS PARSENAME(CONVERT(VARCHAR(32), version), 3),
    build AS PARSENAME(CONVERT(varchar(32), version), 2),
    revision AS PARSENAME(CONVERT(VARCHAR(32), version), 1)
);

INSERT INTO #checkversion (version)
SELECT CAST(SERVERPROPERTY('ProductVersion') AS NVARCHAR(128)) ;

The common_version column will display the version of SQL Server as a floating point number – 12.0 for SQL Server 2014 or, in the case of SQL Server 2008R2, 10.5. When you’re targeting scripts for specific families of SQL Server, knowing the high level version can make scripting a lot easier.
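
For example, here’s one way you might branch on common_version after populating the temp table above (the version threshold is just an example):

-- Example: only run Availability Group checks on SQL Server 2012 (11.0) and up
DECLARE @common_version DECIMAL(10, 2);
SELECT @common_version = CAST(common_version AS DECIMAL(10, 2))
FROM #checkversion;

IF @common_version >= 11.0
    PRINT 'AlwaysOn Availability Group DMVs are available here.';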

What’s the final output look like for me?

version      common_version  major  minor  build  revision
12.0.2000.8  12.0            12     0      2000   8

There you have it

That’s really all there is to finding and parsing the SQL Server version. Just a simple insert into a temporary table and a few computed columns and you’re ready to go.

Microsoft Ignite SQL Server Keynote Liveblog #MSIgnite

Yes, there are actually multiple keynotes here at Ignite, Microsoft’s new infrastructure conference. With so many products (Windows, O365, SharePoint, Dynamics, mobile, Azure, etc.) and so many attendees (over 20K), they had to break things up a little. This morning, CEO Satya Nadella and friends delivered the overall keynote. This afternoon, Microsofties are delivering a handful of different keynotes by topic, and I’m sitting in the SQL Server one.

Post-Event Summary: Microsoft demoed a few of the features discussed in the SQL Server 2016 data sheet:

  • Always Encrypted – data is encrypted by the client drivers, and unencrypted data is never seen by SQL Server (or SSMS, or Profiler, or XE)
  • Operational Analytics – means you can put an in-memory columnstore index atop your in-memory OLTP tables, and do reporting against Hekaton transactional data
  • R analytical workloads will live inside the SQL Server engine just like Hekaton does
  • Support for Windows Server 2016 and up to 12TB of RAM (which is great, given all the in-memory features)
  • The newly acquired DataZen mobile-friendly reports will ship in the box with SQL Server
  • Stretch tables are like table partitioning, but some of the partitions will live in Azure, while the hot/current data lives on-premise in your SQL Server

Watch the Keynote Recording

Hot diggety, I love Microsoft for the fast turnaround on their session recordings. Here’s the full session:

My Liveblog Archive

Here’s my notes from the session, oldest to newest:

3:15PM – Shawn Bice, Engineering Partner Director, taking the stage to talk about SQL Server 2016 and Azure SQL Data Warehouse.

3:18 – “In this new social environment, people are tweeting.” YOLO.

3:20 – Paraphrasing Shawn: “In the cloud, we want an engineer to come to work, have an idea, and fail as fast as possible. You need to figure out what works and what doesn’t. In the boxed product, you can’t do that. We can try all kinds of things in the cloud, figure out what works, and then put that stuff into the boxed product.”

3:22 – Talking about how the cloud has made things more reliable. I’d be a lot more impressed with this if they hadn’t just totally borked SQL 2014 SP1.

3:23 – “By the time you get SQL Server 2016, we’ll have been running it at cloud scale for many, many months.”

3:24 – The big 3 pillars for this release: mission critical performance, deeper insights across data, and hyperscale cloud.

3:26 – You have a transactional system, then you get the data out via ETL and put it in a data warehouse. It’s delayed by 2-24 hours. If you’re doing your fraud detection in the data warehouse, you’re detecting things a day too late. How can we bring both rows (Hekaton) and columns (ColumnStore) together faster?

3:27 – Rohan Kumar onstage to demo applying an in-memory columnstore index to an in-memory OLTP table. Instant applause just at the explanation of what’s about to happen. Shawn: “Alright, we’re off to a good start!”

3:30 – “Close to zero impact on your OLTP transactional systems.” I know some people will go, “wait, that means there’s an impact!” Well, you never get something for nothing, and with today’s hardware, it’s pretty easy to buy faster gear to pay for eliminating an ETL process altogether.

3:33 – For the record, we can’t see the build number onscreen, but his instance name included CTP2X.

3:34 – On to Always Encrypted. Interesting that there’s a space between these words, unlike AlwaysOn. “Always Encrypted is literally about always encrypting the data.” Requires an enhanced ADO.NET library that uses a column encryption key.

3:36 – Rohan back onstage to show Always Encrypted with…Profiler. I LOVE MY PROFILER. It will never die.

3:42 – To migrate your existing data into Always Encrypted, they’re suggesting using SSIS to pull the table down, encrypt it on the client side, and then push it back into a new table. Not exactly seamless, but it points out that the SQL Server engine is simply not going to be involved in the encryption/decryption process. You’re not going to be directly accessing this data via T-SQL queries in stored procedures – it’s gonna be encrypted there.

3:43 – Support for Windows Server 2016 with up to 12TB of memory. I’m gonna go out on a limb and guess that’s not in SQL Server Standard Edition.

3:44 – PolyBase – query relational and non-relational data with T-SQL by introducing the concept of external tables. “We did it a few years ago but it was only available in our APS/PDW, but it’ll be available to you in the box in SQL Server 2016.”

3:45 – About the acquisition of Revolution Analytics – we’re adding a 4th workload to SQL Server. When we added in-memory OLTP, we didn’t ask you to build separate servers. It’s a feature, you just turn it on. R analytics will be the same thing, hosted in the SQL Server process.

3:48 – Yvonne Haarloev coming onstage to demo DataZen.

3:52 – Looks like the DataZen demos are happening on a Surface. Was rather hoping the mobile strategy demos would be done on, uh, a mobile device.

3:55 – Yay, mobile dashboard demos! The downside is that you have to design a dashboard for each device’s form factor – they’re not responsive design. Still, way better than what we had. But like SSIS, SSAS, and SSRS, this is yet another designer and a totally different code base that BI pros will have to deal with. No cohesive story here – but you can’t expect one given how recent that acquisition is.

3:56 – “We are very committed to Advanced Analytics and bringing it into the engine. If you found in-memory OLTP to be favorable…” If you did, I would actually love to talk to you. Seriously. Talk to me.

3:59 – Stretch tables – basically, table partitioning that splits the table between on-premise SQL Server and Azure storage. It will support Always Encrypted and Row Level Security. The entire table is online and remains queryable from on-premises apps. Transparent to applications. (I’m just typing from the slide, no opinion here.)

4:00 – Stretch tables are activated by sp_configure ‘remote data archive’, 1 – it’s not turned on by default. From there, you can enable a specific database to stretch to Azure using a wizard that starts with a login to Azure.
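
Spelled out as T-SQL, the setting he described looks like this (the feature is still pre-release, so the knob name could change):

-- As described onstage: enable stretch at the instance level,
-- then run the database-level wizard from SSMS
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;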

4:03 – As part of the Azure sign-in process, Rohan got a call from Microsoft because he has two-factor authentication turned on. He had to put in his PIN, and then the SSMS wizard kept going. Spontaneous applause from the audience – great to see enthusiasm for security features.

4:08 – The stretch tables demo failed due to firewall configs. Oops.

4:13 – Demoing a restore of a database that has stretch tables. The Azure connection metadata is apparently stored in the SQL Server database, so it just gets attached again after the local part of the data is restored.

4:16 – Now switching to Azure SQL Data Warehouse. It has separate storage and compute – you don’t need compute power all the time for data warehouses, like during big batch jobs. You only have to pay for that compute power when you need it for the jobs. It scales elastically, so you can crank up big recommendation horsepower during peak holiday shopping seasons, for example.

4:21 – Rohan back onstage for the first public demo ever of Azure Data Warehouse.

4:27 – Discussing what work retailers have to do when they have discount battles. What would it really cost us to match another company’s discount? Could we profitably pull that off? Power BI helps answer that question with large amounts of compute quickly.

4:28 – Like this morning, they’re talking up the differentiators between Azure SQL Data Warehouse and Amazon Redshift.

4:29 – Shawn: “This is all real, none of this is faked out.” “I’m more excited than ever.” Yeah, there’s a lot of really good stuff going on here this year. And that’s a wrap!
