Say we have a physical database server sitting over there in the rack. For the sake of discussion, let’s pretend it’s a slightly older model, an HP DL380 G2 with a couple of P3 CPUs, 4GB of RAM, and six local 36GB drives. The storage is broken up into a mirrored pair of drives for C: (the OS), and a four-disk RAID 5 for the data and the logs. (No, that’s not best practices, but it was built before we came to work here, and you know how that last DBA was.)
It serves the databases for our help desk software. It’s a dedicated machine with no high availability solution: not only is it not clustered, but we don’t even have another DL380 G2 in the shop that we could replace it with if we had a hardware failure. If this box goes, we’re in trouble: we have to drop everything and rebuild it with whatever hardware is lying around the shop at the time.
To make matters worse, it’s SQL 2000, and it has some applications installed on it. It’s got some kind of help desk service on it that does something to the database, and nobody knows exactly how it works. We try not to look directly at it, because we’re afraid it will crash, and we’ll be out of luck.
It’s a dinosaur, and we need it to go away.
In A Perfect World, The Dinosaurs Don’t Fight Back
In a perfect world, we’d buy a new server, transition everything to SQL 2008, get the help desk vendor to do the installations, get the help desk group to test the application, and have a controlled, orderly migration over to the new platform.
How often does that happen?
All that costs money, and all that takes time – two things we can never seem to keep around the shop.
The help desk department won’t put any money in the budget because they’re happy with the performance. They don’t need it to go faster, they don’t need high availability (because SQL just works, right?) and they don’t want you to touch it because it might break.
We DBAs want to make sure that if the hardware fails (because sooner or later it will) we’ll be okay, and we won’t have to drop everything to rebuild something we don’t really understand. Unfortunately, we can’t get the budget money either, and even if we get the money, we don’t always get the time. We need the process to be seamless and fast, and maybe even nearly free.
Enter the Physical to Virtual (P2V) Process
If you haven’t done a P2V migration before, you’re going to think I’m crazy. You’re going to call this stuff a big fat lie, a bunch of vaporware that can’t possibly work. I know: I thought the same exact thing before I started doing it a few years ago. I still get a big grin on my face every time it works, because it’s such cool technology. As you read through the next couple of paragraphs, just bear with me and trust me – it really does work.
The process goes like this:
- Take a backup of the dinosaur
- Shut the dinosaur down
- Create a new virtual machine in a VMware ESX or Windows 2008 Hyper-V farm
- Restore the dinosaur’s OS and data to it
- Power on the new virtual dinosaur
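If you want a database-level safety net on top of the image backup in step 1, a full backup with checksums is cheap insurance before you touch anything. The database name and path here are hypothetical, just to sketch the idea:

```sql
-- Hypothetical database name and backup path; adjust for your environment.
-- WITH CHECKSUM verifies page checksums as the backup is written;
-- INIT overwrites any existing backup set in the file.
BACKUP DATABASE HelpDesk
TO DISK = N'D:\Backups\HelpDesk_PreP2V.bak'
WITH CHECKSUM, INIT;
```

RESTORE VERIFYONLY against that file is better than nothing, but an actual test restore on another box is the only real proof the backup is good.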
By now, probably half of you have called BS. The other half of you are about to call BS when you read this next sentence: the new virtual dinosaur is going to need a completely different set of drivers for the virtual server’s storage, networking and video. Wow. How often have you been able to pull hard drives out of one model of server, slide them into a completely different model of server, and have everything work correctly?
Back in 2005 when I started doing this, it was rocket science, and it was scary. It went wrong more often than it went right, and I earned a lot of my gray hairs on lonely weekend nights in the datacenter. However, even when it goes wrong, it’s still not that bad, because we have our original physical server standing by. If the P2V process doesn’t work, we just power the physical server back on, and the data is untouched! Things keep chugging along the way they were, and we can try the P2V process again in a couple of weeks after we’ve recovered.
You, dear reader, are lucky. You have the good fortune to not be an early adopter the way I was, and you can buy a package off the shelf to do this stuff – or just download one, because there are good free ones out there too. Search for P2V, and you’ll find a ton of reviews. (Disclaimer: the company I work for makes one of the products, but I’m not doing marketing today – just talking about your options.)
These solutions take P2V to the next level. Here’s how easy they make the process:
- Create a new virtual machine in a VMware ESX or Windows 2008 Hyper-V farm
- Install the agent on the current physical dinosaur
- Shut down SQL Server (but leave the OS running)
- Take a VSS snapshot of the dinosaur’s drives
- Copy the data over the network to the new virtual server
- Shut down the old physical dinosaur
- Power on the new virtual dinosaur
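For the “shut down SQL Server (but leave the OS running)” step, most shops just stop the Windows service, but if you’re already sitting at a query window, T-SQL can do it too. This is one hedged way to quiesce the instance before the VSS snapshot, not the only way:

```sql
-- CHECKPOINT flushes dirty pages to disk; SHUTDOWN then stops the
-- instance cleanly (it waits for checkpoints to finish).
-- Requires the SHUTDOWN permission or sysadmin membership.
CHECKPOINT;
SHUTDOWN;
```

Either way, the goal is the same: the databases are closed and consistent on disk, so the snapshot that gets copied to the new virtual server is clean.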
The cool part is that because the agents are installed and do their work while Windows is running, you avoid a lot of the hardware-specific driver issues.
SQL Server in VMware or Hyper-V? Are You Crazy?
I know: I talk to you folks out there, and you hate the idea of virtual SQL Servers. I feel your pain. But before you throw this idea out the window, stop and think about the alternative. Our options are:
- Keep the old physical dinosaur up, wait for it to die, and then run around in a panic
- Wait for budget money we’re never going to get
- Virtualize the server and have it perform better than the original hardware
Remember, even though it’s a virtual server and it may have some performance penalties, we’re still talking about virtualizing hardware that has already outlived its usefulness. When you move a SQL Server from an old P3-based system with a few gigs of RAM onto a new Xeon-based system, and maybe crank up the RAM a little, performance will go up – not down – as long as you’re not heavily oversubscribing your virtual server farm. (That’s a separate discussion.)
Running virtual SQL Servers isn’t the best solution. The best solution is for me to be driving a Porsche 911 Targa while my army of junior DBAs manages my crisp, newly installed SQL Server 2008 boxes. But stop getting distracted by ideals that will never happen, stop trying to kill the dinosaurs with a cardboard sword, and do what you can with the tools you have.
We use PlateSpin (http://platespin.com). It is worth every penny, and it is pretty sweet. It supports P2V, V2P and P2P.
Last year, we moved a customer with a couple hundred servers from one datacenter to another over a (very long) weekend. Although there is an option to support databases, we sneakernet’ed the full backups on a couple of USB drives and then used log shipping so we didn’t have to send a couple of TBs over the wire. Not to mention we didn’t virtualize all of the SQL boxes.
I’ve heard similar good vibes about Platespin for a couple of years from VMware gurus.
The point about moving datacenters is really relevant – I love that way of lighting up a new datacenter. Odds are the old DC had slower machines anyway, and moving them to VMs will present performance improvements (as long as they’re the right kinds of servers to virtualize.)
I’m sitting here thinking of the servers I could virtualize and hardware we could recover by doing exactly this. Our case isn’t dinosaurs as much as it is having too much hardware for the task at hand. We’re using your Porsche 911 when a Honda Civic would be perfectly adequate.
Yes…P2V does work well, as I have experienced it firsthand by pulling several older physical DMZ web servers over to a virtualized VMware environment while the other box is up and running. Very cool technology…and thanks for sharing, Brent 😉 Your blog is very useful and I am glad I have stumbled across a great SQL Server resource for the future 😉
Thank you for this article. We are migrating a number of V2V and P2V SQL servers to our datacentre and will keep this article close by. Unfortunately, the business was too nervous to virtualize our consolidated SQL 2005 production server, so we are going to go ahead and migrate to new hardware. The plan is to PlateSpin the OS drives and then reattach the SAN drive (where the data files reside). We will stop the SQL services and set them to manual, PlateSpin the OS drives to the new hardware, attach the SAN storage, then bring up the SQL services. All drives including the SAN will have the same labels and configuration. Please advise if this approach is too simple and/or if you can think of any issues we should be aware of. Of course a full backup will be taken as a safety net prior to the shutdown 🙂
FYI – We’ve had problems with low memory in the past and are increasing from 8GB to 32GB. AWE is enabled. Currently the memory is configured like this: MIN 128MB, MAX 6144MB. At times SQL has consumed all available memory and still ran slow. Should we allocate 1–2GB for the OS and the rest for SQL? Also, how would you configure the 32GB on the new server if it is just a SQL server and nothing else?
Theresa – if your team hasn’t used Platespin for this type of implementation before, I’d recommend testing it out on other (non-SQL) servers first. SQL Server does work pretty much the same as other servers, but you just want to be extremely careful. You also want to pay particular attention to the SAN HBA drivers and multipathing configurations.
Are these 32-bit or 64-bit operating systems? If you’re having memory problems and they’re 32-bit, then rather than using Platespin, I’d recommend building new x64 servers and attaching the SAN drives to those instead. Hope that helps!
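On the max memory question, a common rule of thumb on a dedicated 32GB SQL Server box is to leave roughly 4GB for the OS and cap SQL Server at the rest, then watch the server’s available memory and adjust. The number below is a starting point under that assumption, not gospel:

```sql
-- Cap SQL Server at 28GB (28672 MB) on a dedicated 32GB box,
-- leaving roughly 4GB for the operating system.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```

If the box runs anything besides the database engine (SSIS, backup agents, antivirus), leave more headroom for the OS.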
Thanks for your article/blog about P2V. I was asked to assist a vendor with a P2V and have to admit it’s the first time I came across this technology, though I have worked with VM servers in the past. It’s pretty amazing technology: P2V, V2P and P2P. Wow, the things you miss when you only start doing support! You explained it all very clearly.
While the power savings from reducing hardware is much discussed, I’d like to see a white paper on the increased power and cooling required when you drive an existing server to its full potential from the industry standard level of utilization previously benchmarked, e.g. if the server with a power supply capable of 3 amps maximum previously was averaging 1.2 amps, what is the typical increase when a VM is added?
James – check with your hardware vendor. For example, HP has a power sizing tool that will give you numbers based on the CPU and memory loads of the host.
The numbers vary dramatically with your processor type, though, so any whitepaper would be obsolete by the time it got printed. Hope that helps!
Hot migrations are pretty cool, but you should still really stop transactional services while the P2V operation is carried out. I always prefer a cold migration where possible, and it will usually complete a lot quicker than a hot migration!
Perry – yes, you can check the best practices for your P2V software and that’s usually mentioned.
Brent, very good article. Have you ever P2V’ed a SQL clustered environment that’s set up in an Active/Passive configuration? Thanks
Shearill – I don’t recommend P2Ving a cluster.
I was just wondering why you wouldn’t recommend P2V’ing a SQL Cluster? We are attempting this in a test environment at the moment.
Glenn – it’s just more work than I’d bother with. Why not build a new virtual node instead and just join it to the existing cluster?
Well, we did actually manage to do it (P2V the existing cluster), but Sir Ozar, I think you are right about this. It was a long drawn-out process, and I can’t shake the feeling that our contribution to the world’s most advanced theme park (VMware 4.1) was a see-saw (Microsoft Cluster Services).
Glenn: I would give all my bits and bytes if you explained how you managed to P2V your MS SQL cluster. I am facing a similar challenge and I am not sure how to proceed. If you can just get me started it would be a great help.
Going forward we won’t be using MSCS anymore (within the context of VMware anyway). I have managed to convince the guys here where I work that VMware’s native HA capabilities (for hardware redundancy) coupled with Symantec’s ApplicationHA product (for application-level monitoring – which integrates nicely with VMware as well) are more than sufficient for what this business requires. It also removes the complexity overhead of MSCS for those who will be left to manage it. You gain some, you lose some (with regards to functionality). In this case the gains sufficiently outweigh the losses for this business’ HA requirements. P2V’ing a Microsoft cluster is a time-consuming process which I personally wouldn’t do again.
In other words, Brent was right (“about the more work than I would bother with” bit) 😉
I am considering P2V’ing a two-node SQL cluster (2008 Standard Edition). I was wondering if you would now recommend P2V’ing a SQL cluster? As I am on SQL Standard Edition, I believe I am limited to just a two-node SQL FCI. Therefore I cannot add a third virtual node. I suppose I could evict the passive node and add in a virtual node?
M – no, I don’t recommend P2V’ing a cluster. Go ahead and build a new cluster in virtualization, test it, feel confident in the way it fails over back and forth, and then log ship from the old cluster over to the new one to cut over.
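The log shipping cutover itself boils down to a tail-log backup on the old cluster and a final restore on the new one (assuming the new side has been restoring the log chain WITH NORECOVERY all along). The database name and share path here are hypothetical:

```sql
-- On the OLD cluster: the tail-log backup leaves the database in a
-- RESTORING state, so nothing can write to it after the cutover point.
BACKUP LOG HelpDesk
TO DISK = N'\\fileshare\migration\HelpDesk_tail.trn'
WITH NORECOVERY;

-- On the NEW virtual cluster: restore the tail, then bring it online.
RESTORE LOG HelpDesk
FROM DISK = N'\\fileshare\migration\HelpDesk_tail.trn'
WITH NORECOVERY;
RESTORE DATABASE HelpDesk WITH RECOVERY;
```

The nice part is that the old cluster is still intact in its restoring state, so if the cutover goes sideways you can recover the old side and point the applications back.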
Many Thanks for confirmation Brent