BladeSystem Virtual Connect

Link Aggregation Control Protocol (LACP) not supported in downstream server nics?

Trusted Contributor


Ramon had a customer question on LACP:

Can anyone help with the customer query below?



Can you advise on the following excerpts I found in “HP Virtual Connect for the Cisco Network Administrator”:


- Virtual Connect supports NIC Teaming (or NIC bonding) on server NIC ports. For Windows on x86, VC supports Network Fault Tolerance (NFT) and Transmit Load Balancing (TLB) but does not support Switch-assisted Load Balancing (SLB). For Windows on Integrity, VC supports Network Fault Tolerance (NFT), Transmit Load Balancing (TLB), and static Dual Channel with only two NIC ports in the team, but does not support Switch-assisted Load Balancing (SLB). For Linux, VC supports any NIC bonding type that does not require 802.3ad (static or dynamic using LACP) on the server NIC ports.


- Virtual Connect does not support EtherChannel/802.3ad on the downlinks to server NIC ports.
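To illustrate the Linux side of that restriction, here is a minimal sketch of creating a bond in a mode that does not require 802.3ad on the switch side, such as balance-tlb, using iproute2. The interface names (bond0, eth0, eth1) are placeholders for this example and will differ on a real server:

```shell
# Create a bond in balance-tlb mode; unlike mode 802.3ad, this needs
# no LACP/EtherChannel support on the VC downlinks
ip link add bond0 type bond mode balance-tlb

# Slaves must be down before they can be enslaved to the bond
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0

# Bring the bonded interface up
ip link set bond0 up
```

NFT-style behavior is available the same way with `mode active-backup`, which also requires nothing from the upstream side.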


So what’s the point of having LACP on the Shared Uplink Sets and on the Cisco Switch?



Lee replied:





LACP on the upstream ports gives you the ability to put multiple ports into a single logical pipe so you can aggregate the bandwidth. Once you create a Shared Uplink Set (SUS) on both modules and create A-side and B-side networks on them, you can assign those networks to the blades and have an active/active configuration. The point of having LACP in the VC module is the ability to create LACP bonds to the upstream switch, which gives the benefit of additional redundancy if one of the ports in that trunk goes down.
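On the upstream switch side, that pairing is typically configured as an LACP port-channel toward the VC uplinks. A minimal Cisco IOS sketch might look like the following; the interface range and channel-group number are assumptions for illustration:

```
interface range GigabitEthernet1/0/1 - 2
 description Uplinks to VC module (SUS)
 switchport mode trunk
 channel-group 1 mode active   ! "active" = initiate LACP negotiation
!
interface Port-channel1
 switchport mode trunk
```

With `mode active` on both members, the switch negotiates a single logical pipe with the VC Shared Uplink Set, and traffic keeps flowing if one member link fails.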


Your customer may be used to creating a LAG from the upstream switch to the individual NICs on a rack server. That is all done with switch protocols and NIC drivers; since VC is in the mix, the LAG will not work the same way, for various technical reasons. That's why we almost always recommend a TLB/NFT NIC team with an active/active VC configuration.




Any other comments for Ramon?