
LACP in Vswitch and Virtual Connect

 
chuckk281
Trusted Contributor


Francisco was working with a customer who had questions regarding Virtual Connect, the VMware vSwitch, and the Cisco Nexus 1000V soft switch:

*****************************************************************************************

Hello experts, I hope you can help us with this issue. We have a customer running VMware with the Nexus 1000V. They want to run LACP between their vSwitch and the core switch. Thinking this through, we found that four vNets would be necessary to get 40 Gbps between both Virtual Connect Flex-10 modules and the core switch (four vNets with active-active links). We also considered two vNets (one in each VC module), but that would require LACP between Virtual Connect and the core switch, so LACP in the vSwitch wouldn't be possible.

 

My question is whether this is a supported scheme (LACP between the core switch and the VMware switch, with four vNets in tunneling mode), as it seems to be the only possible option.

 

We could also team the Flex-10 ports facing the vSwitch, but then traffic would only be received on one link, so we would have less bandwidth. Do we have documentation about how this teaming is done? Our networking colleagues have asked us for some documentation.

 

Thank you very much for your help and best regards,

**************************************************************

Vincent started a lively discussion:

***********************************************************

Vincent said: 

LACP is a point-to-point protocol between one layer 2 device and another directly connected device. So no, you couldn't do LACP between a vSwitch and a core switch “through” a Virtual Connect module.

What you can do is LACP between Virtual Connect and the core switch, with NIC teaming in the vSwitch, as mentioned in your last paragraph. You can get more details on such a configuration in the VC Ethernet Cookbook (http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01990371/c01990371.pdf), specifically scenarios 2:3 (mapped VLANs) and 2:4 (tunneled VLANs).
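For the vSwitch half of that design, the teaming policy is configured on the host rather than negotiated via LACP. As a minimal sketch (my addition, not part of the original thread), assuming pyVmomi, an already-connected vim.HostSystem object named "host", a vSwitch called "vSwitch1", and "route based on originating virtual port ID" as the desired policy:

# Sketch only: set an existing vSwitch to "route based on originating
# virtual port ID" (loadbalance_srcid), i.e. NIC teaming without LACP.
# Assumes "host" is a connected vim.HostSystem obtained via pyVmomi.
from pyVmomi import vim

def set_vswitch_teaming(host, vswitch_name="vSwitch1", policy="loadbalance_srcid"):
    netsys = host.configManager.networkSystem
    for vsw in netsys.networkInfo.vswitch:
        if vsw.name == vswitch_name:
            spec = vsw.spec                                   # reuse the current spec
            if spec.policy is None:
                spec.policy = vim.host.NetworkPolicy()
            if spec.policy.nicTeaming is None:
                spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
            spec.policy.nicTeaming.policy = policy            # no 802.3ad on the host side
            netsys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)
            return
    raise ValueError("vSwitch %s not found on this host" % vswitch_name)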

Then Obaid joined in:

For NIC teaming, you can also consider dividing your port groups across separate uplinks. For example, if you have 10 port groups on a vSwitch, configure NIC teaming on the port groups like this:

Port groups 1-5 have uplink1 as active and uplink2 as standby.

Port groups 6-10 have uplink2 as active and uplink1 as standby.

This way the bandwidth of both NICs is utilized while redundancy is maintained.
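As a hedged sketch of how Obaid's split could be scripted (the port group names "PG-01".."PG-10", the NIC names vmnic0/vmnic1, and the use of pyVmomi are my assumptions, not part of his post), an explicit failover order can be set per port group roughly like this:

# Sketch only: pin port groups 1-5 to vmnic0 (active) / vmnic1 (standby)
# and port groups 6-10 the other way around.
# Assumes "host" is a connected vim.HostSystem obtained via pyVmomi.
from pyVmomi import vim

def pin_portgroup(host, pg_name, active_nic, standby_nic):
    netsys = host.configManager.networkSystem
    for pg in netsys.networkInfo.portgroup:
        if pg.spec.name == pg_name:
            spec = pg.spec
            teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
            teaming.policy = "failover_explicit"              # use the explicit failover order
            order = vim.host.NetworkPolicy.NicOrderPolicy()
            order.activeNic = [active_nic]
            order.standbyNic = [standby_nic]
            teaming.nicOrder = order
            if spec.policy is None:
                spec.policy = vim.host.NetworkPolicy()
            spec.policy.nicTeaming = teaming
            netsys.UpdatePortGroup(pgName=pg_name, portgrp=spec)
            return
    raise ValueError("Port group %s not found on this host" % pg_name)

def split_portgroups(host, uplinks=("vmnic0", "vmnic1")):
    # Port groups 1-5: uplinks[0] active / uplinks[1] standby; 6-10: the reverse.
    for i in range(1, 11):
        active, standby = uplinks if i <= 5 else (uplinks[1], uplinks[0])
        pin_portgroup(host, "PG-%02d" % i, active, standby)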

 

Also, I found an informative article on using different NIC teaming policies with and without link aggregation:

 

http://blog.scottlowe.org/2008/07/16/understanding-nic-utilization-in-vmware-esx/

Now it was Guido's turn:

Further note that you cannot aggregate uplinks from different VC modules into a single LACP trunk; this only works per VC module. So if you have 4 x 10 Gb uplinks in total, and assuming you're using two VC modules each with two uplinks, you'll only be able to trunk 2 x 10 Gb on each VC module.

Carlos expressed his ideas and questions:

What Fran means is whether we can do LACP between the core switch and the VMware switch (the Cisco Nexus 1000V, not the VMware vSwitch) by connecting four server NICs to four vNets in order to get 40 Gbps. Is there any possibility of getting 40 Gbps (active) using only four uplinks?

And once again Vincent provided his thoughts:

You can make use of the 40 Gb, but not the way you describe in the picture. In particular, you cannot have LACP between the ESX server and Virtual Connect (whether you're using the Nexus 1000V or the VMware vSwitch does not matter), and you cannot have a single LACP aggregate across the two VC modules to the core switch.

You could define multiple port groups, as mentioned previously in the thread, each with a different primary NIC, and manually balance the VMs across them. No single VM will be able to use more than 10 Gb, but if you balance the VMs intelligently (assuming you have a good number of VMs on that box), you will be able to use the full 40 Gb in both directions.
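As a back-of-the-envelope illustration of that manual balancing (the VM and port group names below are hypothetical, not from the thread), a simple round-robin placement across four port groups, each assumed to be pinned to a different 10 Gb NIC, spreads the load so the aggregate 40 Gb can be used even though each individual VM stays on one 10 Gb path:

# Sketch only: round-robin VMs across four port groups, each of which is
# assumed to be pinned to a different 10 Gb NIC (vmnic0..vmnic3).
from itertools import cycle

PORT_GROUPS = ["PG-nic0", "PG-nic1", "PG-nic2", "PG-nic3"]    # hypothetical names

def balance_vms(vm_names):
    """Return a mapping of VM name -> port group, assigned round-robin."""
    return {vm: pg for vm, pg in zip(vm_names, cycle(PORT_GROUPS))}

if __name__ == "__main__":
    vms = ["vm%02d" % i for i in range(1, 13)]
    for vm, pg in sorted(balance_vms(vms).items()):
        print(vm, "->", pg)
    # Each NIC ends up with roughly a quarter of the VMs, so all 4 x 10 Gb can be
    # used in aggregate, while any single VM is still limited to its one 10 Gb NIC.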

*****************************************************************************************************

 

Certainly a great discussion. Does this help you? Any other thoughts on the subject?