
Intel Proset LACP load balancing

 
james08
New Member


Hello All

I have set up a NIC team in Intel PROSet in 802.3ad link aggregation mode, connected to a ProCurve 4108gl, to try to improve the speed of our weekly network backup.

All seemed fine, however the backup seems to take just as long as before. Looking in Windows at how many packets each NIC has sent and received, one link is carrying much more traffic than the other.

Both links are Gigabit links connected to a J4863A module.

NIC 1 has sent 2,097,650 packets and received 1,088,400,644 packets.

NIC 2 has sent 293,440,320 packets and only received 246,826,800 packets.

HP ProCurve Switch 4108GL# show trunk

 Load Balancing

  Port | Name                             Type      | Group Type
  ---- + -------------------------------- --------- + ----- -----
  H5   | not assigned                     100/1000T | Dyn1  LACP
  H6   | not assigned                     100/1000T | Dyn1  LACP
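
For reference, dynamic LACP of the kind shown above ("Dyn1") is normally enabled per port from the ProCurve CLI, along these lines:

HP ProCurve Switch 4108GL# configure
HP ProCurve Switch 4108GL(config)# interface h5-h6 lacp active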

I have read in the PROSet help:

NOTE: For this type of team, you must configure your switch for LACP channeling before you create your team. Intel PROSet communicates the LinkAggrJoinMethod advanced setting to the switch only when the team is created or reloaded. If you do not configure the switch first, the switch defaults to Maximum Adapters instead of Maximum Bandwidth.

However, I am unable to find any ProCurve documentation on setting "Maximum Bandwidth" rather than "Maximum Adapters".

Thanks in advance for any help.
James
Richard Brodie_1
Honored Contributor

Re: Intel Proset LACP load balancing

"I have setup a NIC team in intel proset with 802.3ad LA mode to try to improve the speed in which our weekly network backup takes place."

Link aggregation is fairly coarse-grained: the more sessions that are running, the better it smooths out the load. If you are backing up a lot of clients concurrently it would help; for a single backup stream, it's not likely to help at all.
Paul Boven
Occasional Advisor

Re: Intel Proset LACP load balancing

LACP links are designed to carry all traffic from a particular source to a particular destination over just one of the link members. This is done to prevent re-ordering, which could happen if packets belonging to a single TCP session were spread over all the members of your trunk: if one of the links has less load, and therefore a shorter queue, packets on that link might overtake packets that happened to take the busier link. TCP tends to get rather confused by out-of-order packets, so LACP was specifically designed to keep packets in order.

LACP accomplishes this by using a hash, generally of the source and destination MAC addresses, to decide which trunk member to transmit a particular packet on: hence packets between a particular pair of machines will always take the same link inside the trunk.
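
To make that concrete: the exact hash is vendor-specific, but here is a minimal Python sketch of one common scheme, XORing the two addresses and taking the result modulo the number of links (the function and the specific rule are illustrations, not the 4108GL's documented algorithm):

def trunk_member(src_mac, dst_mac, num_links):
    # Illustrative only: real switches use vendor-specific hashes,
    # but XORing the addresses (often just their low-order bits)
    # is a common choice.
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links

# Packets between this pair of machines always take the same
# member of a two-link trunk, whichever direction they flow:
print(trunk_member("00:1b:21:aa:00:02", "00:0e:7f:10:00:08", 2))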

Therefore, LACP only really increases your capacity if you have many different senders and/or many different receivers. If there is just a small number of servers backing up to a single backup-server, you can expect very unequal loading of the links.

As your backup server is the receiving side, it is the switch that decides which packets will be sent over which trunk member, not the Intel NIC.

In the past I've even encountered a case where trunking didn't seem to work at all, but further investigation showed that all machines in question happened to have 'even' numbered MAC addresses, by coincidence.
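
To see why, using the same illustrative XOR hash as the sketch above (the MAC addresses here are made up): if every address ends in an even octet, the low bit of src XOR dst is always zero, so a two-link trunk maps every conversation to the same member.

def trunk_member(src_mac, dst_mac, num_links):
    # Same illustrative XOR hash as in the earlier sketch.
    return (int(src_mac.replace(":", ""), 16) ^
            int(dst_mac.replace(":", ""), 16)) % num_links

# All last octets even -> low bit of (src ^ dst) is always 0,
# so every pair lands on member 0 of a 2-link trunk.
even_macs = ["00:1b:21:aa:00:02", "00:0e:7f:10:00:04",
             "00:1c:23:bb:00:06", "00:0d:60:cc:00:08"]
for src in even_macs:
    for dst in even_macs:
        if src != dst:
            assert trunk_member(src, dst, 2) == 0
print("every even-MAC pair hashes to trunk member 0")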

Unfortunately, LACP is the only game in town nowadays, so if this is indeed why your performance is not as expected, your only hope might be to migrate that link to 10G.
VLBI - it's a fringe science