To accommodate the performance and reliability demands of today’s workloads, VMware vSphere provides advanced networking capabilities that form a robust foundation for private cloud computing.
Two types of vSwitch are provided in vSphere: Standard and Distributed. Both offer NIC teaming for load balancing and fault tolerance, intuitive VLAN support, and Cisco Discovery Protocol (CDP) for easier lights-out datacenter management. Be sure to check out Jason Boche’s great overview of CDP.
vSphere Distributed Switch — Simple Network Management
The Distributed Switch adds advanced networking features to your virtual infrastructure, such as load-based teaming and private VLANs, and offers centralized port group management — eliminating the need to configure vSwitches and port groups individually on each host. vSphere administrators can also choose to go with a hybrid model, maintaining a Standard vSwitch on each host — typically for management — and leveraging a Distributed Switch for virtual machine traffic.
Here you can see a Distributed Switch that spans four ESX hosts, utilizing two physical NICs per host:
By connecting these physical NICs to multiple trunk ports, virtual machines benefit from network redundancy and load balancing while making it trivial to create port groups for any VLAN required. Configurations are instantly propagated across the cluster, boosting efficiency and minimizing human error.
Also shown above is the CDP information for one of the physical NICs, handy for identifying connected physical switch ports — no need to convince a coworker to visit the datacenter and help trace cables!
Hyper-V NIC Teaming Update
It’s been a while since I last wrote about the state of Hyper-V NIC teaming, and some readers may be curious to know whether anything has changed. Let’s start our investigation by reviewing the latest official Microsoft support policy:
Since Network Adapter Teaming is only provided by Hardware Vendors, Microsoft does not provide any support for this technology…
Since Microsoft Hyper-V Virtualization is a new technology, we recommend that you thoroughly test your teaming solution in a test environment prior to deploying into Production.
Looks like it’s up to the end user to put this puzzle together. Fortunately, enthusiastic bloggers have stepped up to provide some of the missing guidance to help decide which combination has the best chance of success. Hyper-V expert Hans Vredevoort begrudgingly admits:
Now that Windows Server 2008 R2 SP1 was published almost a month ago, I have attempted to evaluate the consequences for this lovely combination.
There are just a few main elements to consider before attempting to bring robust private cloud networking capabilities to the Windows hypervisor:
- Windows Server service pack and hotfixes
- Third-party network drivers
- Third-party network teaming drivers
- VLAN requirements, and at what level in the stack to implement
[Math quiz: calculate the number of possible combinations administrators must consider.]
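For illustration only, here is a quick back-of-the-envelope sketch of that quiz in Python. The counts per category are hypothetical examples, not figures from any vendor support matrix:

```python
# Back-of-the-envelope estimate of the Hyper-V NIC teaming test matrix.
# The counts below are hypothetical examples, not actual vendor figures.
from math import prod

options = {
    "service pack / hotfix levels": 3,
    "network driver versions": 4,
    "teaming driver versions": 4,
    "VLAN tagging layers (switch, team, parent, guest)": 4,
}

# Each choice is independent, so the matrix is the product of the counts.
combinations = prod(options.values())
print(f"{combinations} combinations to test")  # 3 * 4 * 4 * 4 = 192
```

Even with these modest made-up numbers, the matrix runs into the hundreds, which is the point of the quiz.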
What’s all the fuss? Just download the latest versions of everything and go. Not so fast…
Experts Offered No Immunity
Stu Fox, Microsoft virtualization expert and VCritical reader, recently shared his experience with this very issue — encountering an outage after a simple driver update:
Hope for the Future?
It’s hard to say if Microsoft will ever integrate NIC teaming capabilities directly into the Windows platform; the next version is expected sometime next year. For now Hyper-V users will just have to rely on the upcoming Virtual Machine Manager 2012 release, which will no doubt introduce complexity abstraction to VM networking in an effort to bring it closer to the simplicity vSphere administrators have enjoyed for years.
Is it any wonder Gartner concludes that Hyper-V is under-performing?
I have been teaming NICs in Hyper-V for years. It has not been an issue. If you are talking about teaming them inside the guest, no, I don’t think that will work, but what would that accomplish anyway, if the underlying NICs are teamed?

Teaming works fine with Hyper-V. Don’t think of it as Hyper-V teaming; it’s really Windows OS teaming.

Teaming of NICs on Windows servers worked fine long before Hyper-V. I am talking Windows 2000 Server on Compaq hardware back in the day, with two or more 100 Mb NICs.

I use Broadcom 10 GbE NICs and use their BACS software to set up teaming before I even add the Hyper-V role. That software allows you to chop up an LACP team of 10 GbE ports (two or more) into various virtual ports.

Windows and Hyper-V think these are physical NICs, say four as an example. Then use them however you want: management, live migration, virtual switch for VMs, etc. Set up right with dual switches, say Nexus with LACP vPCs, you can get 20 Gb going at full rate if need be.

Any Windows admin worth anything can set up NIC teaming. Dell, HP, and IBM provide good documentation and support for this, regardless of Hyper-V. NIC teaming happens all the time outside of Hyper-V, say on a SQL server that wants NIC redundancy and throughput.
We have been seeing issues where a VM will randomly stop responding: either the router can’t find it or it just isn’t answering. Sometimes it comes back, and sometimes we have to move it to a different Hyper-V server. Our setup is Dell, Hyper-V, Windows 2008, and Broadcom going to Nexus 2248s. Any ideas why VMs would randomly stop responding? It isn’t all the VMs on the chassis, usually just one out of up to 40 VMs on a chassis.
Thanks
Gartner now concludes that Hyper-V is well on its way, quite a jump from the article posted:
http://www.gartner.com/technology/media-products/reprints/microsoft/vol2/article8a/article8a.html