HP Virtual Connect Review

We originally rolled out our HP C-Class blade infrastructure with Cisco network & Brocade SAN switches, and I’ve blogged about some of the challenges of lots of blade switches. Today, I’m going to talk about how the HP Virtual Connect equipment solved a lot of those problems for us and gave us a more flexible, easier to configure blade system.

When blades first started rolling out, we had two ways of connecting them to networks and storage:

  • Switches – just like regular switches, except they slide into the back of the blade chassis. They have internal wiring to each blade, plus a few external network ports to connect to the rest of the datacenter’s switches. Those ports can be used as uplinks, or other devices like iSCSI storage can be plugged into them. This solution is hard to maintain because it takes specialized Cisco knowledge (or knowledge of whichever vendor’s switches are used), and the switches aren’t typically integrated with the blade chassis management.
  • Pass-throughs – a device in the back of the blade chassis that simply exposes one external network port for every blade NIC. In the case of a fully populated C-Class chassis, that’s 32 network ports just for the 2 onboard NICs in each server, let alone any additional mezzanine NICs. This solution is much cheaper in terms of blade equipment, but it’s a cabling nightmare, and every time the admin needs to change a network port, they have to walk into the datacenter and manage the cabling.

HP’s Virtual Connect offers a new hybrid solution that takes the best of both, and offers some new abilities that aren’t available with either of the previous architectures.

Virtual Connect is a Smarter Passthrough

In a nutshell, the Virtual Connect module dynamically passes any network port’s traffic through to another network port. It’s a smarter passthrough that can aggregate traffic from multiple servers into a single uplink or multiple uplinks.

VMware administrators will see Virtual Connect management as something extremely similar to the network management built into VMware. Think of the blade chassis as being the VMware host: a couple of network cards can be configured with trunking to support multiple guests, each with their own individual VLANs. The Virtual Connect gear acts just like a VMware host would, adding the appropriate VLAN tags to traffic as it exits the Virtual Connect module and goes up to the core network switches.
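If it helps to see that analogy spelled out, here is a minimal Python sketch of the idea. The class names, network names, and VLAN IDs are all hypothetical illustrations, not HP’s actual API or configuration schema: server NICs get assigned to named Virtual Connect networks, and the shared uplink adds the right 802.1Q tag as frames leave the module.

```python
# Hypothetical model of Virtual Connect VLAN tagging -- illustration only,
# not HP's actual API or data structures.
from dataclasses import dataclass

@dataclass
class ServerPort:
    blade_bay: int        # which blade bay the NIC belongs to
    network_name: str     # the Virtual Connect network the NIC is assigned to

@dataclass
class UplinkSet:
    name: str
    vlan_map: dict        # VC network name -> VLAN ID on the core switch

    def tag_frame(self, port: ServerPort, payload: bytes) -> tuple:
        """VC adds the 802.1Q tag as traffic exits the module toward the
        core switches, so the blade itself never has to be VLAN-aware."""
        return (self.vlan_map[port.network_name], payload)

# Two blades on different VC networks share one trunked uplink.
uplink = UplinkSet("Shared-Uplink-1", {"Prod-Servers": 10, "iSCSI": 20})
web_nic = ServerPort(blade_bay=1, network_name="Prod-Servers")
iscsi_nic = ServerPort(blade_bay=2, network_name="iSCSI")

print(uplink.tag_frame(web_nic, b"http traffic"))     # (10, b'http traffic')
print(uplink.tag_frame(iscsi_nic, b"iscsi traffic"))  # (20, b'iscsi traffic')
```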

Virtual Connect can also handle VMware servers inside the chassis, passing even VLAN-trunked traffic through to the servers. Be aware that this does require some deeper network knowledge, but shops running VMware hosts can easily handle it. I’ve done VMware administration part-time at our shop for the last couple of years, and I was easily able to configure Virtual Connect for VMware hosts.

Here are a few links I found really helpful with Virtual Connect and VMware:

When two Virtual Connect modules are plugged side-by-side into the same blade chassis, they have a built-in, hard-wired 10Gb cross-connect link between them. This allows for some amazing failover configurations. We wired both modules up to the same datacenter core switch, and set up a single virtual uplink port across both Virtual Connect modules. Virtual Connect will automatically push all of the traffic through one side by default, but if that uplink fails, the traffic will automatically switch over to the other module’s uplink – completely seamlessly to the blade server. That’s something we couldn’t even do with our Cisco gear.
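Here is a rough sketch of that failover behavior, with hypothetical names and simplified logic rather than HP’s actual firmware: one logical uplink set spans both modules, traffic prefers the active path, and it falls back to the other module’s uplink (reached over the cross-connect) when the first link drops.

```python
# Simplified active/standby uplink failover across two side-by-side
# Virtual Connect modules. Hypothetical model, not HP's firmware logic.

class Uplink:
    def __init__(self, module: str, port: str):
        self.module, self.port, self.link_up = module, port, True

class SharedUplinkSet:
    """One logical uplink spanning both VC modules; only one path carries
    traffic at a time, and the standby is reached via the built-in cross-connect."""
    def __init__(self, primary: Uplink, standby: Uplink):
        self.primary, self.standby = primary, standby

    def active_path(self) -> Uplink:
        if self.primary.link_up:
            return self.primary
        if self.standby.link_up:
            return self.standby     # failover is invisible to the blade server
        raise RuntimeError("both uplinks down")

bay1 = Uplink("VC module, bay 1", "X1")
bay2 = Uplink("VC module, bay 2", "X1")
uplink_set = SharedUplinkSet(bay1, bay2)

print(uplink_set.active_path().module)   # VC module, bay 1
bay1.link_up = False                     # simulate the primary uplink failing
print(uplink_set.active_path().module)   # VC module, bay 2
```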

Virtual Connect beats regular passthroughs in another way: VC dramatically reduces the amount of cabling required.  Set up the Virtual Connect uplinks once with just one or two uplink cables per switch, and only add additional uplinks when performance requires it.  Instead of two uplink cables per server with traditional passthrough solutions, Virtual Connect requires as little as two cables per sixteen servers!  Of course, most shops will opt for at least a couple of additional cables for redundancy and performance, but it’s an option instead of a 32-cable requirement.
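The cabling math is easy to check with a quick back-of-the-envelope calculation, assuming a fully loaded 16-blade chassis with just the two onboard NICs per blade:

```python
# Back-of-the-envelope cable count for a fully populated 16-blade chassis
# with two onboard NICs per blade and no extra mezzanine cards.
blades = 16
nics_per_blade = 2

passthrough_cables = blades * nics_per_blade   # one external cable per NIC
vc_minimum_cables = 2                          # one uplink per Virtual Connect module
vc_typical_cables = 4                          # a couple extra for redundancy/bandwidth

print(f"Pass-through: {passthrough_cables} cables")            # 32
print(f"Virtual Connect minimum: {vc_minimum_cables} cables")
print(f"Virtual Connect typical: {vc_typical_cables} cables")
```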

Virtual Connect is a Simpler Switch

Like the rest of our Wintel server management team, I don’t know anything about managing Cisco switches, and I’m not about to learn at this stage in my life. Therefore, I was exceedingly happy to open the Virtual Connect page and see this:

HP Virtual Connect user interface thumbnail

The Virtual Connect web user interface looks and feels exactly like the rest of HP’s management tools like System Insight Manager, the iLO2, and the C7000’s Onboard Administrator. Server managers will immediately feel comfortable with the wizard-based UI that can be used without any training. If you’ve managed HP servers, you can manage HP Virtual Connect.

That’s not to say that users shouldn’t read the documentation carefully when designing the initial infrastructure: like switches, the Virtual Connect modules can do some powerful stuff, but it takes planning and forethought.

Faster Rollouts

Most of our blade network connections consist of a few simple profiles:

  • Basic server – two network cards both on the server subnet, using failover between the two
  • Clustered server – the basic server, plus two network cards on a heartbeat subnet
  • iSCSI server – the basic server, plus two network cards on an iSCSI subnet
  • VMware server – a specialized configuration with traffic from multiple VLANs

We roll out these same types of servers over and over, but with conventional switchgear, every server rollout was like reinventing the wheel. We had to double-check every network port, and human error sometimes delayed us by hours or days.

Virtual Connect brings a “Profile” concept to switchgear: we can set up these basic profiles, and then duplicate them with a few mouse clicks. A junior sysadmin rolling out a new VMware blade doesn’t need to understand the complexities of trunked VLAN traffic, a dedicated VMotion NIC, and so on – they just use the custom VMware profile we set up ahead of time, and all of the network ports are configured according to our standards.
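A minimal sketch of the profile idea, with made-up field names rather than HP’s actual schema: a standard profile is a reusable template of NIC assignments, and rolling out a new blade is just cloning that template onto a bay.

```python
# Hypothetical sketch of the "profile" concept: a template of NIC assignments
# that gets cloned per server. Field names are made up, not HP's schema.
import copy

BASIC_SERVER = {
    "nics": [
        {"network": "Prod-Servers", "purpose": "primary"},
        {"network": "Prod-Servers", "purpose": "failover"},
    ]
}

def make_vmware_profile() -> dict:
    """Extend the basic profile with trunked VLANs and a dedicated VMotion NIC,
    mirroring the kind of standard VMware profile described above."""
    profile = copy.deepcopy(BASIC_SERVER)
    profile["nics"] += [
        {"network": "VM-Trunk", "purpose": "vm traffic", "vlans": [10, 20, 30]},
        {"network": "VMotion",  "purpose": "vmotion"},
    ]
    return profile

def assign_profile(profile: dict, enclosure: str, bay: int) -> dict:
    """Cloning a profile onto a bay is the 'few mouse clicks' a junior
    sysadmin would perform in the VC console."""
    instance = copy.deepcopy(profile)
    instance["location"] = {"enclosure": enclosure, "bay": bay}
    return instance

new_host = assign_profile(make_vmware_profile(), "Enclosure-1", bay=9)
print(new_host["location"], "-", len(new_host["nics"]), "NICs configured")
```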

Since the Virtual Connect infrastructure is managed by the same staff who do blade implementations and rollouts, there are no delays waiting for the network team, no communication breakdowns, and no finger-pointing when configurations go wrong. Blade rollouts are handled entirely by one team, start to finish.

Easier Recovery from Server Hardware Failures

We haven’t implemented boot-from-SAN yet, but with the VC infrastructure, I can finally see a reason to boot VMware and Windows servers from the SAN. Virtual Connect manages the MAC addresses of each network card and the WWNs of each HBA, assigning them from its own internally managed address pool (or a company-defined range).

In the event of a blade hardware failure, like a dead motherboard, the system administrator can simply remap that blade’s network profile to another blade and start it up. The new blade takes over the exact same MAC addresses and WWNs of the failed blade, and can therefore immediately boot from SAN using the failed blade’s storage and network connections! That gives administrators much more time to troubleshoot the hardware of the failed blade.
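Here is a toy illustration of why that works (hypothetical data, not HP’s internals): the MACs and WWNs live in the profile rather than on the blade hardware, so moving the profile moves the server’s network and SAN identity along with it.

```python
# Hypothetical illustration of profile-based recovery: addresses belong to
# the profile, not the blade, so a spare blade can assume the failed
# server's identity. Example values only.
failed_blade_profile = {
    "name": "SQL01",
    "macs": ["00-17-A4-77-00-10", "00-17-A4-77-00-12"],  # VC-managed addresses
    "wwns": ["50:06:0B:00:00:C2:62:00"],                 # used for zoning and boot LUN
    "bay": 3,
}

def remap_profile(profile: dict, spare_bay: int) -> dict:
    """Move the profile (and with it the MACs/WWNs) to a spare blade. Because
    SAN zoning and LUN presentation key off the WWN, the spare blade boots
    straight from the failed blade's storage."""
    return dict(profile, bay=spare_bay)

recovered = remap_profile(failed_blade_profile, spare_bay=16)
print(f"{recovered['name']} now runs in bay {recovered['bay']} "
      f"with WWN {recovered['wwns'][0]}")
```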

With this kind of flexibility, we can justify having a high-performance blade as a standby, ready to recover from any blade’s hardware failure. The cost of this is relatively low, since it acts as a standby for all of the blades in any chassis.

Easier Hardware Upgrades

Along the lines of hardware failure recovery, VC also allows for easier hardware upgrades & swaps. If the company goes with a standard blade hardware rollout (like all Intel 2-socket blades, or all AMD 2-socket blades), hardware upgrades can be done in a single reboot (see the sketch after the list):

  • Ahead of time, build a new blade with the desired new configuration (more or faster CPUs, more memory, etc). Burn it in and do load testing in a leisurely manner, making sure the hardware is good.
  • Shut down the old blade.
  • Using Virtual Connect, copy the old blade’s profile over to the new one. This takes a matter of seconds, and can be done remotely.
  • Boot up the new blade.
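Sketched as code, with hypothetical stand-ins for the Virtual Connect console and iLO actions (these are not HP’s APIs), the swap boils down to a short, ordered sequence:

```python
# The hardware-swap workflow as ordered steps. VCConsole and ILO are
# hypothetical stand-ins for the Virtual Connect web console and iLO2.
class VCConsole:
    def __init__(self, profiles: dict):
        self.profiles = profiles                      # bay number -> server profile
    def move_profile(self, old_bay: int, new_bay: int):
        self.profiles[new_bay] = self.profiles.pop(old_bay)

class ILO:
    def power_off(self, bay: int): print(f"bay {bay}: powered off")
    def power_on(self, bay: int):  print(f"bay {bay}: powered on")

def upgrade_blade(vc: VCConsole, ilo: ILO, old_bay: int, new_bay: int):
    # Step 1 already happened: the blade in new_bay was built, burned in,
    # and load tested ahead of time.
    ilo.power_off(old_bay)              # step 2: shut down the old blade
    vc.move_profile(old_bay, new_bay)   # step 3: copy the profile -- seconds, done remotely
    ilo.power_on(new_bay)               # step 4: boot the new blade with the old identity

upgrade_blade(VCConsole({3: {"name": "SQL01"}}), ILO(), old_bay=3, new_bay=16)
```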

Taking this concept to an extreme, one could even use this approach for firmware upgrades. Upgrade a standby blade to the latest firmware, burn it in to make sure it works, and then do the hardware swap. (I wish I had done this recently – I had a firmware upgrade go wrong on a SQL blade, but thankfully it was in a cluster.)

Simple Packet Sniffing Built In

When we run into difficult-to-solve network issues, sometimes we rely on our network team to capture packets going to & from a machine in question. I was pleasantly surprised to find that the Virtual Connect ethernet modules have the ability to mirror a network port’s traffic to any other network port. We can set up a packet sniffer on a blade, then use that blade as a diagnostic station when another blade is having network-related problems. For even more flexibility, we can take a laptop into the datacenter, plug it into one of the Virtual Connect’s external ports, and set that port up as the mirror.
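Conceptually, the mirror just copies every frame crossing the watched port over to a monitor port where the sniffer listens. Here is a tiny hypothetical model of that, not HP’s configuration schema:

```python
# Toy model of port mirroring: the monitor port receives a copy of every
# frame on the mirrored server port. Hypothetical, for illustration only.
class MirrorSession:
    def __init__(self, mirrored_port: str, monitor_port: str):
        self.mirrored_port = mirrored_port
        self.monitor_port = monitor_port
        self.captured = []              # frames seen by the sniffer

    def forward(self, frame: bytes) -> bytes:
        """Deliver the frame normally, and copy it to the monitor port where a
        sniffer (another blade, or a laptop on an external VC port) listens."""
        self.captured.append((self.monitor_port, frame))
        return frame

session = MirrorSession(mirrored_port="bay 5, NIC 1",
                        monitor_port="external port X3")
session.forward(b"suspect traffic")
print(session.captured)   # [('external port X3', b'suspect traffic')]
```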

Is this something the Cisco switchgear can do? Absolutely, but it’s not something that a Wintel server administrator can do with Cisco switches. I would never dream of trying to set that up on a Cisco, but with the HP, it only takes me a few mouse clicks in a web browser.

The Drawbacks of Virtual Connect

HP’s question-and-answer page about Virtual Connect points to one political challenge: the network team may not like bringing a new network technology into the datacenter. When given the choice between their standard switches and the new Virtual Connect switchgear, they’re probably going to prefer the former. For me, the important message was the ease of configuration, and getting everyone to see it as an extension of the blade system’s capabilities instead of an extension of the core network switch’s capabilities. Virtual Connect is a big piece of what makes blades faster and easier to roll out than conventional servers. It’s part of a larger picture, part of a new way to implement server infrastructures.

The second challenge is that to see the real benefit, organizations need to have Virtual Connect modules in every blade chassis. That way, administrators can transplant profiles and servers from one chassis to another, giving the best flexibility. To do that, chassis buyers need to take the first leap and buy Virtual Connect modules in their first blade chassis. Otherwise, they probably won’t go back and retrofit each pre-existing chassis with the Virtual Connect modules, especially since they’re more expensive than the traditional Cisco switches.

Finally, yes, the Virtual Connect modules are somewhat more expensive than the Cisco blade switches. It’s odd for me to think of something more expensive than Cisco, but having worked with both the HP Virtual Connect modules and Cisco switches, I can completely understand why they’re worth the additional price.

For me, the return on investment is clearly there: blades are all about faster rollouts, a more flexible infrastructure, and higher uptime. The HP Virtual Connect system delivers on all three of those goals, and I would recommend it for any shop building an HP blade infrastructure.

Want to Read More About My HP Blade Experiences?

Here are a couple more related posts:


18 Comments

  • Hi Brent,

    Great article on HP Virtual Connects – nice and clear, especially for a product that can take a while to get your head around. We have recently rolled out a couple of C7000 chassis with HP BL460c’s, which are configured for a mixture of VMware ESX v3.5, W2K3 local, and boot from SAN. So far so good.

    Cheers,

    Simon

    http://www.techhead.co.uk

  • Great article Brent, there’s a lot of information you can get on Virtual Connect that isn’t in the manuals.

    The HP Virtual Connect Ethernet cookbook is available at:

    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01471917/c01471917.pdf

    More info, including the Fibre Channel cookbooks, is posted at http://www.hp.com/go/bladeconnect.

    Create an ID and log in, then look in the VC Interest List for files to download.

  • Brent,
    Very nice write up! True that VC pushes the network out of the blade chassis as much as possible, which introduces some conveniences for a server team who has decided they will manage the network paradigm inside the blade enclosure, minimizing the involvement from the network team.

    The problem with this approach, however, is that with the proliferation of virtual machines, the data center networking operations and requirements are pushing farther and farther into the physical server — where the physical server is becoming more and more involved in data center networking — VMware ESX is a perfect example, where the hypervisor provides a software-based network for the virtual machines.

    So, like it or not, the network team over time is going to have to get more involved with the networking operations inside the physical server hypervisor (see the announcements Cisco is making at VMworld 2008 this week).

    Once that happens, VC becomes that thing sitting between the virtual network and the physical network that is inconsistent with the rest of the network in terms of features, operations, management and troubleshooting.

    Moreover, in a completely virtualized data center the identity of a physical server is sort of irrelevant – and the features that maintain a physical server’s identity become less important than end-to-end consistency in network operations throughout the virtual data center.

    Cheers,
    Brad

  • Brad – I hear ya, but it’s six of one, half a dozen of the other. Some shops will let the network team into the hypervisor to change network settings, but I suspect that a great deal won’t. And as you’d suggest, some shops will let the virtualization team manage their own networks with Virtual Connect, and a great deal won’t. It’s a big gray area right now. I’ll be excited to see how it plays out in the coming years!

  • Brent,
    Things are happening fast. Yesterday at VMWorld 2008, Cisco and VMware jointly announced that Cisco software will be powering the hypervisor networking in ESX 4.0 – what is now being called VDC-OS. The Cisco product name for this is Cisco Nexus 1000v.

    http://blogs.zdnet.com/BTL/?p=10073

    The Cisco Nexus 1000v is tightly integrated with VMware Virtual Center to simplify network provisioning for virtual machines and improve orchestration of things like VMotion.

    I would argue that a great deal of shops will embrace this enhanced networking capability for virtual machines, and only some will not.

    Cheers,
    Brad

  • Alright, Brad, time to put your money where your comments are – especially the part about “Things are happening fast.”

    Let’s gauge the number of ESX4/VCDOS licenses shipped versus the number of Nexus licenses shipped or whatever – you would have a better metric on that than I would, as long as we agree on it beforehand.

    I bet you a case of Francois Montand champagne (don’t worry, it’s only around $10/bottle, but it’s great stuff) that at the end of 2009, the Nexus implementations will be under 15% of VMware installations. I’m not talking about existing implementations, either – that’s not fair to you guys.

    Whaddya say? I would be really happy to get you that case, because I love the idea of virtualized switches, but I just don’t think it’s going to happen anywhere near “Fast”.

  • Brent,
    This sounds like fun. You’re on!

    Let me make sure I am clear on the bet…

    If by the end of 2009 Cisco Nexus 1000v or Cisco VN-Link is licensed in less than 15% of ESX4/VDCOS enterprise deployments, you win. If 15% or more, I win? Sound good?

    Let’s decide who wins on December 1, 2009 – this way the winner can enjoy a case of champagne at Christmas for re-gifting and/or personal enjoyment 🙂

    Cheers,
    Brad

  • Perfect! I agree. Woohoo! The chase is on…

    • Hi Brent,

      For the record, you won this bet. I concede.
      According to my sources, as of December 2009, 15% of vSphere licenses had Enterprise Plus attached, which is required to purchase Nexus 1000V.

      As much as I want to believe that 100% of those 15% purchased Nexus 1000V, we all know that can’t be possible. So I concede this one.

      Cheers,
      Brad

  • Gents, as a member of the HP Alliance Team at VMware…and frequent recipient of requests on how Virtual Connect integrates with the Nexus 1000v…I have a vested interest in your speculation. I’m looking forward to updates on the wager. Happy New Year and best of luck in ’09.

  • Brent,
    Just a friendly reminder that I have not forgotten about our little bet here.

    Hope things are going well.

    Cheers,
    Brad

  • Hi, Brad! That’s funny, I’ve thought about it too and wondered how the sales were going! This ought to be interesting.

  • Marcus Quimby
    August 26, 2009 8:13 am

    I have been testing several different architectures with VMware. I like the features Virtual Connect provides; however, I will fall back to your previous article: it is not possible to get enough throughput with an HP blade solution to maximize your investment in VMware. The best HP solution with the C-series chassis is one 10Gb Ethernet mezzanine and one 8Gb FC mezzanine, but the chassis will not support more than 2 BL-c type 10Gb switches. Networking is a big limitation. If you are doing a large VMware implementation I would recommend going with the DL785. The G6 model will start shipping soon. You can get adequate throughput (true dual 10Gb links) and it supports a larger memory footprint.

    • Marcus,

      Have you tested Cisco UCS with VMware? If a large memory footprint interests you, Cisco UCS will soon be shipping a full-height blade that supports 384GB in a 2-socket configuration.

      Cheers,
      Brad

  • Marcus-
    I don’t know about the claim that you can’t maximize your investment with a blade scenario. Truly, it depends on what your goals with virtualization are. If your goal is simply to put as many VMs on a single host as you possibly can, then yes, you are correct.

    But it depends on what your cost analysis is and your needs for virtualization. If your need is simply to virtualize as many machines on one piece of hardware as possible, then you could be correct. But to some companies, the hardware cost for doing so is prohibitive. And that’s not even getting into the reduced power and cooling costs.

    Matt

  • All,
    I’m an operations manager in the trenches. I have to meet today’s continuous business demand for application solutions while focusing on cost (capex, opex). So how do I responsibly meet today’s demands while waiting for vendors to provide the perfect solution?

    How can I use HP Virtual Connect to meet today’s needs and then transition to the next "perfect" future solution? We are currently doing it on two dimensions: rack servers to blades, and single OS per server to VMs. I need to start virtualizing the I/O for reasons you are aware of.

    Please remember us "operations folks" have to keep working on the "house" (data center) while it is in use 24/7/365. I need a balanced approach.

    Thanks

    • Peter – there is no perfect future solution. It sounds like you’re asking for someone to build a long-term strategy for your business IT needs. That’s outside of the scope of what you can get in a blog comment. You might consider engaging a consultant with experience building these types of solutions. If you need help finding one, email me at brento@brentozar.com and I can put you in touch with one.

    • Peter, you may want to take a look at HP’s new Flex-10 Virtual Connect. It is an upgrade from HP’s original VC, and adds huge advantages:

      1- As for "that thing" sitting between your server and network, the Flex-10 interconnects talk to the Nexus 1000v.
      2- Flex-10 interconnects, in combination with the integrated or mezzanine Flex-10 blade adapters, give you the equivalent of 8 NICs on a single adapter, shared between two physical ports. Each port has 10Gb of speed, shareable between its 4 FlexNICs, and adjustable for each NIC in 100Mb increments. So, while the default is 2.5Gb per NIC, you can adjust that up and down to a max of 10Gb per port.
      3- With the Flex-10 interconnects, you only need TWO to accommodate 16 blades in a c7000 enclosure, each with 8 FlexNICs per blade already integrated. (You can add an additional Flex-10 mezz card per blade, and two more Flex-10 interconnects for an ADDITIONAL 8 FlexNICs per blade.)

      This completely mitigates the I/O limitations of other/previous blades/interconnects in a VMware based environment.

      The G6 blades are out, and as memory goes, a c-Class Half-Height blade (of which you can get 16 in a single chassis), will give you up to 192GB of DDR3 RAM, and up to 256GB RAM with 4×6-core procs on a full height blade.

      HP was very forward thinking with the c7000 blade enclosure, and kept it as “open-architecture” as possible, which manages to embrace future technologies. It will accommodate up to 4 redundant fabrics, 16 blades, and the best management architecture I’ve seen (and I’ve worked with most vendor blade technologies).

      Good luck with your search.

