How are my servers doing?
How are my databases doing?
If my boss came in and asked which servers needed more horsepower or which ones could be consolidated together, how would I come up with an answer?
I’ve always found it funny that DBAs and sysadmins don’t have good answers to these questions. We roll our own monitoring solutions, cobble things together from a bunch of parts, or fork over money to third-party vendors to get a picture of how our environment’s doing.
Virtualization admins, on the other hand, have fantastic answers to these questions. Take this screenshot from VMware vSphere’s management console:
The sysadmin can see a lot of relevant information at a glance: how heavily the host CPUs and memory are utilized, each server’s status, and more. Wouldn’t it be nice to get a simple report like that for SQL Server? Check out this screenshot from the new SQL Server 2008 R2 CTP:
The first thing to notice is the two pie charts: Managed Instance Health and Data-tier Application Health.
Not servers and databases – instances and applications. That’s your first hint that things are going to be different long-term.
Virtualization Changed Everything, But Took Time
Looking ahead, Microsoft wants us to start thinking of databases as being less connected to physical servers, and to think of our physical servers as a resource pool. Imagine if databases were self-contained packages that could be moved from server to server – just like virtual servers can be moved from host to host today.
The virtualization push took years to accomplish because there were so many things that had to be handled: drivers, resource sharing, network segregation, storage throughput – even systems management had to be rethought from the ground up. It took a long time to get it right, and today virtualization is pushing into enterprises everywhere.
Today, SQL Server faces struggles not unlike the early days of virtualization. There are several factors that cloud the Utopian vision of moving databases around seamlessly:
- Connection strings – our apps call for their data by a specific server name. If we’re going to abstract servers away, then we need a way to find our data.
- Logins – logins are set at the server level, yet are tied into databases. If we move a database from one server to another, we have to make sure that the login exists on the new server, has the same password, and has the same level of access rights. If the application frequently uses a specific login to call the TRUNCATE TABLE command, for example, we need to know it’ll have that same level of permissions on the new server.
- SSIS/DTS packages – these packages pipe data in and out of our database servers, and they’re often tied to specific local servers.
- Scheduled jobs – as much as I fight developers who want to put jobs on their servers, the reality is that I don’t always win. (Schedulers belong in applications, not databases.)
- Anal retentive DBAs – we know best, right? We finely tune some of our applications so that the data lives on one set of disks, logs live on another set of disks, and maybe indexes or partitioned data live on yet another set of disks. If we start shuffling databases around, we’re going to need to abandon that level of control.
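To see why the login problem is thorny, consider how it’s typically worked around today: scripting each login from the source server to the destination with its original SID and password hash, the approach behind Microsoft’s well-known sp_help_revlogin script. A minimal sketch of the idea – the login name `AppLogin` and the `0x...` values are placeholders, not real output:

```sql
-- On the SOURCE server: grab the login's SID and password hash.
SELECT name, sid, password_hash
FROM sys.sql_logins
WHERE name = 'AppLogin';

-- On the DESTINATION server: recreate the login with the SAME SID and hash,
-- so the database users mapped to it keep working (and keep their
-- permissions, like TRUNCATE TABLE) after the database moves.
CREATE LOGIN AppLogin
    WITH PASSWORD = 0x0200... HASHED,  -- password_hash from the source
         SID = 0x1A2B...;              -- sid from the source
```

It works, but it’s exactly the kind of per-server plumbing that has to happen automatically before databases can float freely between instances.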
Abstracting all of this stuff out of the database architecture isn’t going to be easy, but SQL Server 2008 R2 is starting to take the first step.
The SQL Server Utility: Virtualization for Databases
SQL Server 2008 R2 introduces the concept of the SQL Server Utility: a pool of resources (instances) that host applications (databases).
The Utility is managed by a Utility Control Point: a single SQL Server instance that gathers configuration and performance data.
All of this is visualized through SQL Server Management Studio’s Utility Explorer:
This dashboard shows some basic metrics about CPU use and storage use at both the instance level and the application level.
Down the road – years down the road – this might give DBAs the same fast reaction times that virtualization admins currently enjoy. Got a database running out of space? Move it to a server with more space. Got an application burning up too much CPU power? Slide it over to one that’s got the latest and greatest processors.
In order for this concept to work, though, we need to do more than just think differently about our databases; we’re going to need to deploy them differently. SQL Server 2008 R2 introduces the concept of the Data-tier Application (DAC), a self-contained package with everything our applications need in order to store and process their data. It’s not going to work for everyone, and in my next post, I’ll talk about what SQL Server 2008 R2’s DACPack means for DBAs.