BladeSystem Virtual Connect

vPC uplink cabling along with LACP groups to Upstream Switches.


Joe was looking to help a customer:




I have a customer planning a vPC installation. They would like a 40Gb uplink, i.e. four 10Gb uplinks in an LACP trunk. Question: is it possible to take two uplinks from one FlexFabric bay to one upstream switch and another two uplinks from the second FlexFabric module to a second upstream switch, and then combine all four uplinks into one LACP trunk? Normally it is one uplink per bay to one upstream switch, and then the uplinks are combined into an LACP trunk.


Like this:

               Two uplinks from bay 1, X3 and X4 going to upstream switch 1

               Two uplinks from bay 1, X5 and X6 going to upstream switch 2

               All four uplinks are in one LACP trunk

               The uplinks will all be 10Gb each

               Will this be a vPC configuration?

               Repeat for bay 2

               Create an A/A VC configuration

               Will this be an A/A 40Gb or 80Gb?

               This is also a stacked enclosure domain.
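For reference, the uplink set described above could be sketched in the Virtual Connect Manager CLI roughly as follows. This is a hypothetical sketch: the uplink-set name `SUS_A` is made up, and the exact command syntax varies by VC firmware release, so check the VCM CLI reference for your version.

```
-> add uplinkset SUS_A
-> add uplinkport enc0:1:X3 UplinkSet=SUS_A Speed=Auto
-> add uplinkport enc0:1:X4 UplinkSet=SUS_A Speed=Auto
-> add uplinkport enc0:1:X5 UplinkSet=SUS_A Speed=Auto
-> add uplinkport enc0:1:X6 UplinkSet=SUS_A Speed=Auto
```

The equivalent uplink set for bay 2 (ports enc0:2:X3 through enc0:2:X6) would be created the same way for the A/A half.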




The discussion:




From Mike:

I do not understand why a customer would do that. If a switch were to fail, they lose half their bandwidth on both VC modules.


From Dan:

As opposed to losing half their overall bandwidth AND causing NIC teaming to kick in at the host level?

If you go straight from VC FF 1 to Switch 1 and from VC FF 2 to Switch 2, and then Switch 1 dies, all server bays will lose connectivity on NIC 1 (assuming Smartlink is on, since it's A/A).

So now you lose half your bandwidth AND your NIC teaming kicks in.

What happens if the NIC team wasn't set up properly?

The entire server goes offline.
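Dan's comparison can be sketched numerically. In the Python below, the helper function and both cabling layouts are hypothetical illustrations using the thread's figures (four 10Gb uplinks per VC module, two upstream switches): with cross-connected cabling a switch failure costs bandwidth but leaves both modules up, while straight cabling costs the same bandwidth and additionally darkens a whole module, which is what triggers Smartlink and host-side teaming failover.

```python
# Compare what remains after Switch 1 fails under the two cabling schemes
# discussed in the thread. Each uplink is a (vc_module, upstream_switch)
# pair and is assumed to run at 10Gb.

def remaining_bandwidth(uplinks, failed_switch):
    """Return (remaining Gb, set of VC modules left with no live uplinks)."""
    alive = [(m, s) for (m, s) in uplinks if s != failed_switch]
    total_gb = 10 * len(alive)
    dead_modules = {m for m, _ in uplinks} - {m for m, _ in alive}
    return total_gb, dead_modules

# Cross-connected: each module splits its four uplinks across both switches.
cross = [(1, 1), (1, 1), (1, 2), (1, 2), (2, 1), (2, 1), (2, 2), (2, 2)]
# Straight: module 1 -> switch 1 only, module 2 -> switch 2 only.
straight = [(1, 1)] * 4 + [(2, 2)] * 4

print(remaining_bandwidth(cross, failed_switch=1))     # (40, set()) - no module dark
print(remaining_bandwidth(straight, failed_switch=1))  # (40, {1}) - module 1 dark, Smartlink fires
```

Both schemes drop to 40Gb, but only the straight cabling leaves a module with zero uplinks, which is Dan's point.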


From Dave:

Good question, but connecting to two switches configured with vPC (Nexus) or IRF (HP/3Com) is supported.

Smartlink should only shut down the downlinks when all the uplinks on one bay are disabled.


And from John:

This configuration will work just fine. The key is that the customer has vPC configured between their two switches. You will have two 40Gb SUSs in A/A, so a total of 80Gb of bandwidth. If you lose switch 1, you lose 20Gb of bandwidth on each VC module, so total bandwidth drops to 40Gb. But the main point is that Smartlink will not kick in, because each VC module still has two active uplinks going to switch 2. So, as Dan pointed out, you would not lose connectivity on NIC 1, and the teaming software doesn't have to do anything: it still has two NICs with active uplinks, so it's fat, dumb, and happy even though you just lost an entire switch.
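For completeness, the upstream side John describes would look roughly like the NX-OS sketch below. The domain ID, port-channel numbers, and interface ranges are made-up examples, not values from the thread; the essentials are that vPC is up between the two Nexus switches and that each VC-facing port-channel carries a matching `vpc` ID on both peers so all four links join one logical LACP bundle.

```
! On both Nexus peers (mirror the config, same vpc IDs on each side)
feature lacp
feature vpc

vpc domain 10
  peer-keepalive destination <peer-mgmt-ip>

! vPC peer-link between the two switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link

! Port-channel toward one VC module: two 10Gb members on THIS switch,
! two more on the peer, all under the same vpc ID
interface port-channel 20
  switchport mode trunk
  vpc 20

interface Ethernet1/1-2
  switchport mode trunk
  channel-group 20 mode active
```

A second port-channel (e.g. `vpc 21`) toward the other VC module gives the second 40Gb SUS of the A/A pair.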




Any other thoughts or input on this subject?