07-07-2010 08:08 AM
LACP in vSwitch and Virtual Connect
Francisco was working with a customer who had questions regarding Virtual Connect, the vSwitch, and Cisco's Nexus 1000V soft switch:
*****************************************************************************************
Hello experts, I hope you can help us with this issue. We have a customer running VMware with the Nexus 1000V. They want to run LACP between their vSwitch and the core switch. Thinking this through, we found that 4 vNets would be needed in order to get 40 Gbps between the two Virtual Connect Flex-10 modules and the core switch (4 vNets with active-active links). We also considered two vNets (one in each VC module), but that would require LACP between Virtual Connect and the core switch, so LACP in the vSwitch wouldn't be possible.
My question is whether this is a supported scheme (LACP between the core switch and the VMware switch, with 4 vNets in tunneling mode), as it seems to be the only possible option.
We could also team the Flex-10 ports to the vSwitch, but this would only receive on one link, so we would have less bandwidth. Do we have documentation about how this teaming is done? Our networking colleagues have asked us for some documentation.
Thank you very much for your help and best regards,
**************************************************************
Vincent started a lively discussion:
***********************************************************
Vincent said:
LACP is a point-to-point protocol between one Layer 2 device and another directly connected to it. So no, you couldn't do LACP between a vSwitch and a core switch “through” a Virtual Connect module.
What you can do is LACP between Virtual Connect and the core switch and NIC teaming in the vSwitch, as mentioned in your last paragraph. You can get more details on such a config in the VC Ethernet cookbook http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01990371/c01990371.pdf, specifically scenarios 2:3 (mapped VLANs) and 2:4 (tunneled VLANs).
Then Obaid joined in:
For NIC teaming, you can also consider dividing your port groups across separate uplinks. For example, if you have 10 port groups on a vSwitch, configure NIC teaming on the port groups so that:
Port groups 1-5 have uplink1 as active and uplink2 as standby.
Port groups 6-10 have uplink2 as active and uplink1 as standby.
This way the bandwidth of both NICs is utilized while maintaining redundancy.
Also, I found an informative article on using different NIC teaming policies with and without link aggregation:
http://blog.scottlowe.org/2008/07/16/understanding-nic-utilization-in-vmware-esx/
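As a side note for readers who would rather script Obaid's active/standby split than click through the vSphere Client, the rough sketch below uses pyVmomi against a standard vSwitch. The vCenter address, credentials, port group names (PG-01 through PG-10) and vmnic names are illustrative assumptions, not details from the thread.

```python
# Rough sketch: give port groups PG-01..PG-05 vmnic2 as active / vmnic3 as standby,
# and PG-06..PG-10 the reverse, so both uplinks carry traffic while failover remains.
# All names (vCenter host, credentials, port groups, vmnics) are assumptions.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def set_active_standby(host, pg_name, active_nic, standby_nic):
    """Set an explicit active/standby NIC order on one standard port group."""
    net_sys = host.configManager.networkSystem
    # Reuse the existing spec so the VLAN ID and vSwitch assignment are preserved.
    for pg in net_sys.networkInfo.portgroup:
        if pg.spec.name == pg_name:
            spec = pg.spec
            break
    else:
        raise ValueError("port group not found: %s" % pg_name)

    teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
    teaming.policy = "failover_explicit"   # honor the explicit failover order below
    teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=[active_nic], standbyNic=[standby_nic])
    spec.policy.nicTeaming = teaming
    net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)


si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
# Naive traversal to the first host in the first cluster; adjust for real inventories.
esx = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

for i in range(1, 11):
    pg_name = "PG-%02d" % i
    if i <= 5:
        set_active_standby(esx, pg_name, "vmnic2", "vmnic3")
    else:
        set_active_standby(esx, pg_name, "vmnic3", "vmnic2")

Disconnect(si)
```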
Now it was Guido's turn:
Further note that you cannot aggregate uplinks from different VC modules via LACP into a single trunk; this only works per VC module. So if you have 4 x 10 Gb uplinks in total, and assuming you're using two VC modules each with two uplinks, you'll only be able to trunk 2 x 10 Gb on each VC module.
Carlos expressed his ideas and questions:
What Fran means is whether we can do LACP between the core switch and the VMware switch (the Cisco Nexus 1000V, not the VMware vSwitch) by having 4 server NICs connected to 4 vNets in order to get 40 Gbps. Is there any possibility of getting 40 Gbps (active) using only 4 uplinks?
And once again Vincent provided his thoughts:
You can make use of the 40 Gb, but not the way you describe in the picture. In particular, you cannot have LACP between the ESX server and Virtual Connect (whether you're using the Nexus 1000V or the VMware vSwitch does not matter), and you cannot have a single LACP aggregate across the 2 VC modules to the core switch.
You could define multiple port groups, as mentioned previously in the thread, with a different primary NIC each and manually balance the VMs across them. No single VM will be able to use more than 10 Gb, but if you balance the VMs intelligently (assuming you have a good number of VMs on that box), you will be able to use the 40 Gb in both directions.
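For anyone who would rather script that manual balancing than move VMs around by hand, here is another rough sketch, again assuming pyVmomi and hypothetical port group names; each port group in the list would already be configured with a different primary NIC as discussed above.

```python
# Rough sketch: spread each VM's first vNIC round-robin across port groups that
# each prefer a different uplink. "vms" is a list of vim.VirtualMachine objects;
# the port group names are assumptions, not taken from the thread.
from itertools import cycle

from pyVmomi import vim

PORT_GROUPS = ["PG-01", "PG-04", "PG-06", "PG-09"]  # hypothetical, one per preferred uplink


def rebalance(vms, port_groups=PORT_GROUPS):
    """Move each VM's first vNIC onto the next port group in the rotation."""
    targets = cycle(port_groups)
    for vm in vms:
        pg_name = next(targets)
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                # Re-point this vNIC at the chosen standard port group.
                dev.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
                    deviceName=pg_name)
                change = vim.vm.device.VirtualDeviceSpec(
                    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                    device=dev)
                vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
                break  # only touch the first vNIC per VM in this sketch
```

No single VM gets more than one 10 Gb uplink this way; the gain comes from many VMs landing on different preferred uplinks.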
*****************************************************************************************************
Certainly a great discussion. Does this help you? Any other thoughts on the subject?