BladeSystem Virtual Connect

When to use shared uplinks for LAG vs ethernet LAG?

kheller2
Frequent Advisor


I have some confusion about when creating "shared uplink sets" is appropriate versus simply assigning multiple Flex-10 uplinks to Ethernet networks in the Flex-10 GUI. I've been through the cookbook a few times and I'm still not sure when you should or should not use them, and why.

Let me give you a scenario: a c7000 chassis with two Flex-10s in bays 1 and 2. On each Flex-10 we use uplinks X2-X6 at 1 Gb. Bay 1 goes to Cisco 6500 switch P and bay 2 goes to Cisco 6500 switch S. The switches are not running Cisco's virtual switching, so each switch has to have its own LAG to its Flex-10 (i.e., I can't create a single LACP LAG spanning the two switches back to the Flex-10). I also know there is an unresolved bug where a LAG over 40 Gb fails on the Flex-10 (according to the 2.10 and 2.12 release notes).

We are running Oracle VM (yes, we got it working on BL460c G6s by rolling our own NIC driver, a real pain). We want the hypervisors to do their own VLAN tag manipulation, but we also want the flexibility to pass the tags down from the Cisco switches untouched, or to map them when a blade needs to run bare metal. So blade 1 might get a full 10 Gb 802.1Q trunk, while blade 2 might only get VLAN 120, for example.

At the moment we use active/passive failover NIC bonding on the blades and want to continue doing so: 10 Gb to the Flex-10 in bay 1 to switch P, failover 10 Gb to the Flex-10 in bay 2 to switch S. I understand there are ways to let the Flex-10 handle the failover so we wouldn't need bonding on the host OS side at all, but there is some mention of that not being the best design, since traffic then flows over the internal 10 Gb link between the Flex modules, and no mention of what happens when a Flex module really goes south and the hosts start using the other module for their uplink.

My apologies for the long-winded example, but I figure it's better to give too much information than not enough.
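For context, the host-side bonding we use today is plain active-backup, which needs nothing from the switch side. Roughly like this on the blades (device names and file paths are the usual RHEL-style ones, shown for illustration, not our exact config):

```shell
# /etc/modprobe.conf -- active-backup bonding; miimon polls link state every 100 ms
alias bond0 bonding
options bond0 mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface the OS uses
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- FlexNIC facing the bay 1 module
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- FlexNIC facing the bay 2 module
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```

With this, failover is entirely the host's decision: if eth0 loses link, the bond fails over to eth1 (bay 2, switch S) with no dependency on the Flex-10's own failover behavior.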
This is similar to scenario #14 in the cookbook (host-based VLANs with 802.1Q and 802.3ad), except that we are using both mapped and unmapped VLANs, and I'm confused about where a "shared uplink set" fits in.
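To be concrete about the switch side: each 6500 has its own independent LACP channel down to the Flex-10 in its bay, something along these lines (interface and channel numbers are placeholders, not our actual config):

```
! Switch P: one LACP bundle to the bay 1 Flex-10 uplinks (X2-X6 at 1 Gb)
interface range GigabitEthernet1/1 - 5
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-protocol lacp
 channel-group 10 mode active
!
interface Port-channel10
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

Switch S has an equivalent, separate channel to the bay 2 module; the two LAGs are never bundled together.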