You heard it here first, folks – I’ve got the scoop on what might be the most exciting new feature in Microsoft SQL Server Denali. I hope you’re sitting down.
SQL Server 2008 R2 introduced data-tier applications (DACs) – packaged databases that could be deployed on Azure or full-blown SQL Server. The initial idea was that databases could be moved around from place to place and upgraded from afar. At the time, I wrote that while the first version wasn’t worth exploring, future versions could bring us virtualization for databases.
SQL Server Denali’s new contained databases seemed interesting at first, but like the DAC packs, were more of a down payment than an actual deliverable. For example, they don’t really separate out TempDB per contained database – sure, they create objects in the right collation to avoid join problems, but if you’ve got one poorly-behaving app that abuses the buffer pool or TempDB, you’re still screwed. These databases are contained in much the same way that velociraptors were contained in Jurassic Park.
But hold on to your butts.
Enter the Data Director
What if you had a console that let you create or deploy contained databases that were really contained – not by deploying them on an existing server, but by creating a new virtual machine for each individual database?
That day is here.
With Windows Core, we’ve finally got lightweight virtual machines that can be completely locked down and managed. With Hyper-V, we’ve got the ability to light up VMs quickly and easily via an API, which means we can do it inside SQL Server Management Studio. Now, when you deploy a database, you get to pick how many CPUs it gets, how much memory it gets, and what tier of storage it gets.
It’s hard to guess the number of CPUs and amount of memory up front, though. Project managers lie about schedules and user counts. Developers lie about their code being optimized. New hardware comes in and we have to move things around. Fortunately, we can change these numbers on the fly: SQL Server’s hot-add CPU and memory capabilities haven’t seen much use in the broader market yet, but virtualization makes them a no-brainer. Change the dropdowns for CPU count and memory, and the virtual hardware is instantly added through the hypervisor, recognized by the OS, and picked up by SQL Server as well.
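To make the idea concrete, here’s a minimal sketch of what that hot-add workflow might look like behind the console. Everything here is hypothetical – `VirtualDatabaseServer` and its methods are invented for illustration, not a real Hyper-V or SQL Server API – but it captures the key constraint: hot-add is add-only, so you can grow a running VM but not shrink it without downtime.

```python
# Hypothetical sketch of a one-database-per-VM deployment console.
# All names here are invented for illustration; this is not a real API.

class VirtualDatabaseServer:
    """One database, one VM: resources can be changed on the fly."""

    def __init__(self, name, cpus=2, memory_gb=4, storage_tier="silver"):
        self.name = name
        self.cpus = cpus
        self.memory_gb = memory_gb
        self.storage_tier = storage_tier

    def resize(self, cpus=None, memory_gb=None):
        # With hot-add, the hypervisor presents the new hardware to the
        # guest OS immediately - no reboot, no downtime for the database.
        # Hot-REMOVE is another story, so we only allow growth here.
        if cpus is not None:
            if cpus < self.cpus:
                raise ValueError("hot-remove of CPUs requires downtime")
            self.cpus = cpus
        if memory_gb is not None:
            if memory_gb < self.memory_gb:
                raise ValueError("hot-remove of memory requires downtime")
            self.memory_gb = memory_gb
        return self


vm = VirtualDatabaseServer("sales_db", cpus=2, memory_gb=8)
vm.resize(cpus=4, memory_gb=16)  # the project manager's estimate was wrong, again
print(vm.cpus, vm.memory_gb)     # 4 16
```

The add-only restriction mirrors how hot-add actually behaves in the guest: operating systems and SQL Server can accept new CPUs and memory while running, but taking them away safely is much harder.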
Denali’s new AlwaysOn Availability Groups add the ability to scale out to multiple replicas for more read performance and easier disaster recovery. It’s scriptable, so you know what that means – yep, just pick the number of additional replicas you want, and the console takes care of the rest, spinning up additional VMs for you and configuring the scale-out.
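Because the whole thing is scriptable, the “pick a number of replicas” experience boils down to a loop. Here’s a hedged sketch of that idea – the `AvailabilityGroup` class and its naming scheme are made up for illustration, not the real AlwaysOn or vFabric API:

```python
# Hypothetical sketch of scripted scale-out: pick a replica count and the
# console spins up (or tears down) one VM per replica. Invented names only.

class AvailabilityGroup:
    def __init__(self, primary):
        self.primary = primary
        self.replicas = []

    def scale_to(self, replica_count):
        # Add secondary VMs until we hit the target count...
        while len(self.replicas) < replica_count:
            n = len(self.replicas) + 1
            self.replicas.append(f"{self.primary}-replica-{n}")
        # ...and drop any extras if the target went down.
        del self.replicas[replica_count:]
        return self.replicas


ag = AvailabilityGroup("sales_db")
print(ag.scale_to(3))
# ['sales_db-replica-1', 'sales_db-replica-2', 'sales_db-replica-3']
```

The point isn’t the loop itself – it’s that once provisioning is an API call, read scale-out and disaster recovery become a dropdown instead of a weekend project.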
Backups? Not only can we take full backups of the database, we can take snapshot backups of the VM host too. We can use storage replication (built into the hypervisor, no matter what storage we’re using) to seamlessly replicate the entire server from our production datacenter over to a disaster recovery datacenter without the hassles of mirroring, log shipping, or replication. Just check a box, and it’s taken care of. All of this integrates with Policy-Based Management – set a policy for production, and all of the new production-class databases you create will inherit this policy.
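Policy inheritance is the glue that makes this manageable at scale: define the production policy once, and every new production-class database picks it up at deploy time. A minimal sketch, assuming a simple tier-to-policy mapping (the policy names and `deploy_database` helper are hypothetical, not Policy-Based Management’s actual object model):

```python
# Hypothetical sketch of policy inheritance at deploy time.
# Policy contents and function names are invented for illustration.

POLICIES = {
    "production":  {"full_backup": "daily",  "vm_snapshot": "hourly",
                    "dr_replication": True},
    "development": {"full_backup": "weekly", "vm_snapshot": None,
                    "dr_replication": False},
}

def deploy_database(name, tier):
    # Every new database inherits its tier's policy at creation time,
    # so nobody has to remember to check the DR box by hand.
    return {"name": name, "tier": tier, **POLICIES[tier]}


db = deploy_database("orders_db", "production")
print(db["full_backup"], db["dr_replication"])  # daily True
```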
One of the reasons we need those backups is to restore – whether it’s to development, or to test a new version of SQL Server. With this new feature set, you can simply restore to a new database server name in a matter of seconds thanks to virtualization snapshots. This means when you need to test a new version of Linux, you…
Oh, wait, you caught me.
VMware Killed the DBA Star
This is going to be a hard paragraph for you to read, but here goes. Data Director isn’t a feature of SQL Server Denali. It’s VMware vFabric Data Director. And, uh, it’s for Postgres, not SQL Server. And it might be cheaper than SQL Server Standard Edition for some companies. Here’s a demo video:
I KNOW, right? I shook my head when Microsoft introduced the DAC Pack two years ago, I shook my head at Denali’s contained databases, but my floor shook when I saw what a virtualization vendor managed to pull off in Version 1 of their database appliance. This looks fantastic for run-of-the-mill infrastructure databases.
I know what you’re thinking: who wants one OS per database? Infrastructure managers, that’s who. They want to avoid the hassles of databases stepping on each other just like you do, and they don’t mind throwing hardware at the problem. Hardware is cheap – especially compared to salaries. Why not throw another blade in whenever we add another dozen databases? Let VMware manage the load by moving things around automatically.
If you’re a DBA, and you’re not learning about the cloud – whether it’s public clouds like SQL Azure or private clouds like VMware vSphere – you’re never going to see your career shift coming. And believe me, it’s coming – not this year, maybe not next year, but if you wait until it’s a no-brainer for the CIO to deploy it, then it’s going to be a no-brainer for him to let you go and hire someone who understands these new technologies.
And the dinosaur’s gonna be you.