
nic bonding & lacp - Linux

Valued Contributor

nic bonding & lacp - Linux

I have some BL460c's with GbE2c switches in IC bays 1 & 2, and I'm using two of the blades as test systems.  The bond is set up (RHEL 6) and working.  The cross-links between the two GbE2c's are disabled, and a single Cisco 3750 serves as the uplink from both GbE2c's.  My question is about LACP.  Do I need to:


1.  Enable the cross-links?

2.  Link-bond the internal ports on the GbE2c's?  That is, for a server in bay 5, create a link bond between port 5 of the GbE2c in IC bay 1 and port 5 of the GbE2c in IC bay 2?

3.  Aggregate the external uplink ports?  I know I need to do this, and I'm fairly sure it also requires the cross-links to be enabled.



Trusted Contributor

Re: nic bonding & lacp - Linux

LACP requires that every link in an aggregation group terminates on the same device at both ends.
In your setup that would be the NICs on the blade and the GbE2c in the enclosure,
or the GbE2c in the enclosure and the Cisco 3750.

The problem is that you have two GbE2c switches, and as far as I am aware there is no way to merge them into one logical switch the way you can with vPC (Nexus), VSS (Catalyst), IRF (HP/3Com), Virtual Chassis (Juniper), and so on.

The cross-links enable east/west traffic, but again, I don't think they can be used to merge the two switches into one. I could be wrong here, though, as I don't have a ton of hands-on experience with those switches.

Now if you wanted to connect multiple 1Gb links from a single GbE2c to the single 3750, that should work just fine, and you could then do the same from the other GbE2c.

At the server level, you would want to use either failover (active-backup) or TLB (transmit load balancing) bonding modes in Linux, since neither requires switch-side aggregation.
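
The failover/TLB suggestion above can be sketched as RHEL 6 ifcfg files (the interface names, slave choice of eth0/eth1, and IP address are assumptions for illustration, not taken from the thread):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0
# mode=active-backup is failover; swap in mode=balance-tlb for TLB.
DEVICE=bond0
IPADDR=192.168.1.10      # example address, adjust for your network
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Because active-backup and balance-tlb are negotiated entirely on the server side, they work even though the two GbE2c's cannot be merged into one logical switch.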
Valued Contributor

Re: nic bonding & lacp - Linux

I was talking with an HP sales engineer and came to the same conclusion.  I have four 1/10Gb-F switches on hand, so the plan is to replace the GbE2c's with those.  Each blade has two on-board NICs and a mezzanine NIC.  He told me I can set that up with two bonds on the blades and an LACP group from each switch to the Cisco, and that should work in an active-active state.  So that is the plan.  I need to upgrade the firmware on the 1/10Gb-F switches and then schedule some downtime to replace the GbE2c's and run through the Flex setup.
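
A rough sketch of the two-bond layout described above, again as RHEL 6 config (device names are assumptions; the key constraint is that both slaves of an 802.3ad bond must land on the same interconnect module for LACP to negotiate):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0
# Slaves: NICs mapped to the switch in IC bay 1
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-bond1
# Slaves: NICs mapped to the switch in IC bay 2
DEVICE=bond1
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100"
```

Each switch then runs its own LACP group up to the Cisco, so the aggregation never has to span the two blade-side switches.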


Thanks for the reply!  :)

Trusted Contributor

Re: nic bonding & lacp - Linux

Brad, you will have the same limitation with VC modules as with the GbE2c's when it comes to LACP.

If you are in the USA, what was the Sales Engineer's name? I work on one of the PreSales teams in Los Angeles.
Valued Contributor

Re: nic bonding & lacp - Linux

I'm reluctant to identify him on a public board, but his first name is Sam.  We talked a little more and figured out that even though we could get the outbound bonds working in an active-active configuration, inbound traffic would still be limited to a single link.  That won't solve the problem, which is getting the indexes, or whatever (I'm not all that Hadoop savvy), from Hadoop on the DL380s to the BL460c's.  It looks like we'll have to wait until we can upgrade the blades and the switches to get more bandwidth.


Thanks for the help!