DBAs and developers: right now, your Windows admin is talking to management about all the time, money and hardware he can save by virtualizing servers.

He might not start with SQL Servers, because frankly, he’s scared of database administrators.  DBAs only have one answer: “No.”

Your sysadmin knows you will say no.

Your sysadmin knows how you roll.

Thing is, though, he’s going to start by eating his own dogfood: he’s going to virtualize the servers he manages himself, like file & print servers, secondary domain controllers and DNS servers.  He’s going to prove that it makes sense, and he’s going to get excited about how much easier it makes his job.  He’s going to sell it to management, and because it does make sense for servers like that, they’re going to get excited too.  The next thing that happens is they’ll mandate virtualization across the board – including your precious SQL Servers.

I know, because I used to manage VMware servers, and yes, we had some virtual SQL Servers.

To understand why it happens, it helps to be armed with knowledge about why virtualization really does make sense for a lot of servers.  Today, I’m going to cover some of the more popular benefits.

Virtual Hardware Drivers and Firmware Aren’t Tied Into the OS

Virtualization abstracts the hardware away from the operating system.  When you run VMware ESX on an HP BL460c blade, for example, the virtual servers running inside ESX don’t need any HP drivers.  Instead, they use a set of VMware video, network, and storage drivers.  These VMware drivers are the same no matter what kind of video card, network card or storage adapter ESX is hooked up to.

This enables sysadmins to shut a virtual server down, copy it to another host server, and start it up without going through a messy driver installation.  It just boots right up as if nothing’s changed.  Some restrictions apply: older virtualization platforms and older servers may still need tweaked drivers, such as when you move from an AMD host to an Intel host or vice versa.  Newer processor families and newer versions of virtualization software do a better job of hiding these differences from virtual guests.

When sysadmins don’t have to hassle with hardware drivers and compatibility, it makes troubleshooting and maintenance easier.  The more servers in the shop (especially servers of different brands), the easier this gets.

In addition, this makes hardware changes easier.  When servers get old, their annual maintenance fees from the manufacturer increase.  Eventually, it becomes more cost-effective to buy a new server than it does to continue paying maintenance on an old, underpowered server.  Normally this would mean time-intensive reinstalls (or risky backup/restore tricks), but with virtualization, the sysadmin can simply shut the old server down, move the virtual guests onto faster/newer/cheaper hardware, and start them back up again.

Even better, if you’re running virtualization with SAN-based servers, you can move virtual guests around without even shutting them down.

Move Virtual Servers Between Hosts On The Fly

This concept is key to a lot of the benefits I’m going to describe later.  I know you’re not going to believe this, but it really does work.  For VMware ESX users, it’s been working for years.  It requires that the VMware hosts use a SAN, and all servers need to be able to see the same storage so that any virtual server can be started up on any host server.

Here’s how vMotion works in a nutshell:

  1. The VMware admin right-clicks on a guest server and starts the migration process by picking which new host server it should run on.
  2. VMware copies the contents of the server’s memory over the network to the new host, and keeps them both in sync.
  3. When they’re identical, VMware transfers control over to the new host’s hardware.
  4. The guest server is now running on a completely different server.
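For the curious, the iterative "pre-copy" idea behind steps 2 and 3 can be sketched in a few lines of Python.  This is a simplified illustration of the technique, not VMware's actual implementation; the page counts and the dirty-page threshold are made up:

```python
# Simplified sketch of pre-copy live migration (illustration only --
# not VMware's actual implementation).

def live_migrate(memory_pages, dirty_pages_per_round, threshold=8):
    """Copy a guest's memory to the new host while the guest keeps running.

    memory_pages: the pages the guest currently uses.
    dirty_pages_per_round: the pages the guest dirties while each copy
    round is in flight (a stand-in for real workload churn).
    Returns how many pages each round had to copy.
    """
    to_copy = set(memory_pages)          # round 1: copy everything
    copied_per_round = []
    for dirtied in dirty_pages_per_round:
        copied_per_round.append(len(to_copy))
        to_copy = set(dirtied)           # next round: re-copy only dirtied pages
        if len(to_copy) <= threshold:    # dirty set small enough to pause for?
            break
    # Final "stop-and-copy": pause the guest very briefly, copy the last
    # few dirty pages, then resume it on the destination host (step 3).
    copied_per_round.append(len(to_copy))
    return copied_per_round

rounds = live_migrate(range(1000), [range(120), range(30), range(5)])
print(rounds)  # each round copies fewer pages: [1000, 120, 30, 5]
```

The point of the shrinking rounds is that the final pause only has to cover a handful of pages, which is why the handoff can be imperceptible to users.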

When it’s done right with high-speed networking and fast servers, the handoff can be completely transparent to end users.

If you haven’t seen vMotion in action, again, I know you’re not going to believe me, but trust me that it works.  I vMotioned servers all the time from one host to another and nobody had a clue.  Well, granted, some of my users were clueless anyway, but that’s another story.

This ability to slide virtual servers around to different hardware at any time, in real time, without taking the virtual server down, opens up a world of benefits.

Do Routine Maintenance Tasks During The Weekday

Even though virtualization removes problematic hardware drivers from the guest OS, there’s still hardware maintenance to be done.  The sysadmin still has to update firmware, update the BIOS, fix broken hardware, and move hardware around in the datacenter.  The difference is that with virtualization, the sysadmin can do these tasks on Tuesday at 10am instead of Saturday at 10pm.  He can simply evacuate all of the virtual guests off a server in real time, then take his time doing the necessary maintenance work during the day while he’s chock full of coffee.

I gotta confess: as a DBA, I’m jealous of this capability.  Sure, in theory, SQL Server 2005’s database mirroring meant that I could move databases from the production server to a secondary server with a minimum of downtime, but it’s still very noticeable to connected applications.  The connection drops, transactions fail, and my phone rings.  I long for the day when I can move databases around the datacenter undetected.  Until then, I’ll be coming in to work on Saturday nights.

Better Utilization Rates on Cheaper Hardware

Just in case you’ve been busy reading SQL Server Magazine and ignoring Fortune, we’re having a little problem with the economy right now.  There isn’t one.  (By the way, you might want to check the rest of your mail too, because Bernie Madoff has some bad news about your account.)

Utilization isn’t a metric DBAs normally deal with, but for sysadmins, it means the percentage of horsepower we’re actually using on our servers.  To see an oversimplified version, go into Task Manager on your desktop and look at your CPU utilization rates.  It’s probably low, and it’s probably low on your SQL Server as well.  If you’re averaging 10%, stop and think for a minute about what it might be like to use 1/10th the number of servers and still have enough power to get the job done.

Granted, there are issues with peaks and valleys, and we have to make sure all servers don’t need full power at the same time.  But if we pooled enough resources together, we could easily cut the amount of hardware in half and still be way overpowered.
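The back-of-the-envelope math looks something like this.  The numbers are purely illustrative; a real sizing exercise would use measured peaks, not guesses:

```python
import math

# Back-of-the-envelope consolidation math (all numbers illustrative).

servers = 20             # physical servers today
avg_util_pct = 10        # average CPU utilization, in percent
peak_headroom = 3        # assume peaks run at 3x the average
target_util_pct = 60     # don't plan hosts past 60% busy

# Total work, measured in "server-percent" units:
work_needed = servers * avg_util_pct * peak_headroom     # 600
hosts_needed = math.ceil(work_needed / target_util_pct)  # ceil of 10.0

print(hosts_needed)  # 10 hosts instead of 20: hardware cut in half
```

Even with a generous 3x allowance for peaks and a conservative 60% load ceiling, the 10%-average shop above runs on half the hardware.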

Virtualization enables sysadmins to do this because they can run more virtual guests per physical host.  It’s not uncommon to see 8-15 virtual servers running on a single $10,000 blade server.  More servers in less space, using less power, requiring less cooling and less networking gear – it’s a pretty compelling story to companies looking to cut costs.  Back when I worked for Southern Wine, I wrote an HP c-Class Blade Chassis review, which you might find interesting.

Would you rather have ‘em cut hardware costs, or cut people costs?

Easier Capacity Growth & Planning for Virtual Servers

When servers share resources in groups, it’s easier to reallocate hardware and plan for growth.  When I managed our VMware farm, I had enough capacity that I had wiggle room.  If someone needed a testbed server for a few weeks, I could easily spin up a new virtual server for them without stress.  I didn’t have to hunt around for leftover hardware, find space in the datacenter, get it wired up, and so on.  I simply deployed a template server (a simple right-click-and-deploy task with VMware, happened in a matter of minutes) and added it to the domain.  When they were done, I deleted the virtual server.  It’s a great way to win friends and favors.

At budget time, I didn’t have to take a magnifying glass to every single server to figure out exactly how it would grow and how much money I’d need to spend.  I just budgeted a ballpark number for growth, and incrementally added servers to the resource pool over the year.  When my reseller could cut me a great deal, I picked up new blades.  I wasn’t forced to rush purchases at the last minute without negotiating for good prices.

So Why Should You Care About Virtualization?

If you’re a DBA or developer, most of these benefits don’t really matter to you.  You don’t care whether your sysadmins have to work weekends, or whether they have an easier job managing capacity.  I just wanted to explain why they’re going to push virtualization, because there are indeed some real benefits for sysadmins.  There are benefits for you too, but your sysadmin isn’t going to realize those, and isn’t going to sell you on ‘em.  In my next post, I’ll explain why you might want to virtualize some of your SQL Servers.

My Best Practices for Virtualizing SQL Server on VMware

I’ve got a 5-hour video training session on how to manage SQL Server in VMware and shared storage. Check it out now!

  1. Issues:

    Secondary domain controller? Actually, DCs probably shouldn’t be virtualized. If they are, backups using native processes (or backup software that uses native processes) should be used for restore/recovery. Otherwise you run into a potential USN rollback problem.

    With respect to vMotion, check Jonathan Kehayias’ blog about an issue with ESX and SQL Servers using Lock Pages in Memory. It was still a problem before he headed off to DI school. We’ve not tried it, so I can’t reproduce it. However, Jonathan has reproduced the issue consistently.

    Benefits:

    Recovery becomes easier on the SQL Server side with respect to a hardware failure. No doubt.

    Upping capacity becomes more doable, too. Put in another host with more capacity, move the guest over, and you’re running with more memory and faster processors. We’re not just talking about dealing with end of life. We’re also talking about being able to spec systems really close, realize we’ve under-spec’ed, and do something about it immediately. That change in attitude can mean a huge difference in overall hardware cost for an organization.

  2. Yeah, another drawback with virtualizing DCs is that if your VMware farm goes down, those DCs are down too. We had that happen a few times at Southern – we needed to take the entire VMware farm down for various reasons, but we needed to have the DCs still up. That was kinda problematic.

  3. We run an ESX environment with both virtual and physical SQL Servers. We use the physical hardware for the mission-critical stuff, but we have a SQL Server running as a virtual machine and it does really well. It has worked out great for Reporting Services, which is far underutilized here.

  4. We have a mixed environment, using VMware for dev servers and Reporting Services, as well as most other (non-SQL) servers.

    One advantage of using physical servers is multipathing. The current release of ESX does not support multipathing and this can make a big difference to performance. I understand that the next release will include a basic round-robin algorithm, so it will be interesting to see what impact that has.

  5. Richard – absolutely, that’s one of the drawbacks I’ll talk about in my disadvantages post on Thursday.

  6. I have a home DNS server and other IT infrastructure running. Since I have a local webserver here, I have been running more and more into the “large scale” issues of constant uptime, etc. This has been “brought home” by having to move around my main server (a repurposed laptop) in order to clean carpets (of all things). Do you know if I’ll get similar functionality benefits from the free offerings from VMware, the (poorly performing) Virtual PC, or others? I’d love to be able to migrate on the fly to a desktop or wireless laptop in an emergency without incurring downtime to possible Google crawls, etc.

  7. We have a large VM production implementation (more than 300 servers) with over 70 SQL Servers, including production. Included in this is data warehousing, with servers hosting databases over a terabyte. As long as you are sensible about what you put on each ESX cluster, giving performance and resources to the machines that need them rather than just trying to stack as many VMs as possible onto as few physical clusters, you can make this work. I think one of the major benefits not really mentioned in the article is the flexibility when it comes to DR. We use SAN replication to our offsite DC for our critical applications and servers. We are able to get our environment up and running in a relatively short space of time, especially when comparing to maintaining multiple physical environments and relying on backup/restore techniques, which can be fraught with problems related to patch levels, etc.
    Also worth mentioning is the ability to clone machines into a test environment like Lab Manager and isolate them from your production environment. I was very skeptical about running SQL Server in production in VMware, but I have not been able to find substantial evidence that performance is necessarily degraded if you plan your infrastructure carefully.

  8. We host about 200 SQL Server databases on physical instances with big RAM and CPU.
    Why would we VMware?

    • Wade – what is “big RAM and CPU” exactly, and what’s your utilization on those? Often we find that the servers are overprovisioned, and companies can save a ton of money by centralizing/virtualizing their SQL Servers.

      • Initially we ARE overprovisioned. We stand up one physical server: 96GB RAM, 4 hex-core CPUs. Then we start loading new databases on the default instance. We start seeing IO issues at 150–200 databases, depending. Then we build another. Our problem is the extra licensing costs for that many virtual instances, especially when MS wants you to go on Software Assurance if you want to vMotion them more than once every 90 days.

  9. Time for an update to this; so many things have changed.
