Aruba & ProVision-based
1751972 Members
4603 Online
108783 Solutions
New Discussion

Load balancing interfaces - trunk vs lacp

 
hpbon
Advisor


When bundling interfaces like this:

trunk <interfaces> Trkx trunk
vs
trunk <interfaces> Trkx lacp

What is the difference?

If I am trunking two 1 Gbps links, will I then get 2 Gbps?

Regards, Lars.

parnassus
Honored Contributor

Re: Load balancing interfaces - trunk vs lacp

The difference is that with the first command you aggregate the interfaces using a non-protocol trunk (i.e. not the IEEE 802.3ad trunking protocol): it is static only and operates independently of any specific trunking protocol, so there is no protocol exchange with the device at the other end. With the second command you aggregate them using the IEEE 802.3ad (LACP) standard, which manages the links dynamically (or statically) and lets the involved Switches auto-negotiate the bundle. LACP can therefore perform advanced (L2-, L3- or L4-based) load balancing, which a non-protocol trunk can't do: it simply uses the SA/DA method of distributing outbound traffic across its member ports, without negotiating with the other end how to handle the traffic.
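For example, a minimal sketch on the ProVision/ArubaOS-Switch CLI (the port range 1-2 and the group name trk1 are just placeholders, adjust them to your hardware):

; static, non-protocol trunk (no negotiation with the partner device):
trunk 1-2 trk1 trunk
; or, alternatively, IEEE 802.3ad dynamic trunk (LACPDUs negotiated with the partner):
trunk 1-2 trk1 lacp
; verification:
show trunks   ; the group, its members and its type (Trunk vs LACP)
show lacp     ; LACP-specific status of the member ports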

Generally, if your Switches are both from the same manufacturer (say HP/HPE Aruba, as an example), you should go with LACP Port Trunking to benefit from its options, like using an additional failover link defined as a standby link beyond the maximum number of member links already involved in the Port Trunking group.
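A hedged sketch of that standby behaviour (port numbers are placeholders, and the active-link maximum, typically 8 on these platforms, should be verified for your model): with dynamic LACP, ports sharing the same key beyond the active maximum become standby links and take over automatically if an active member fails.

interface 1-9 lacp active   ; if only 8 links can be active, the 9th sits in standby
show lacp                   ; standby members appear with their LACP status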

More detailed overviews, descriptions and pro/con comparisons of LACP (IEEE 802.3ad) versus Trunk (non-protocol) can easily be found everywhere but, first (exactly because you probably have HP ProCurve or HPE Aruba units), have a look at any recently published HPE ArubaOS-Switch Management and Configuration Guide: there you will generally find an entire chapter about "Port Trunking" which explains and shows both the LACP and Trunk port trunking modes.

Regarding the question about the aggregated total bandwidth, there isn't a single answer. Generally, a single-host-to-single-host traffic session can't be spread over the individual links to saturate the whole bandwidth a Port Trunking group provides: it saturates the bandwidth of the single link it starts to use (and it uses one at a time). In a multiple-hosts-to-single-host (or multiple-hosts-to-multiple-hosts) scenario, however, you have more concurrent sessions, and those sessions will be distributed across all the Port Trunk member links, de facto using all the available bandwidth (provided the destination host is capable of sustaining that traffic).
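If your model supports it (a hedged sketch; check the command and its options in your switch's guide), the outbound distribution hash can be broadened beyond the default SA/DA method so that concurrent sessions spread more evenly across the member links:

trunk-load-balance L4-based   ; also hash on L4 (TCP/UDP port) information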

If you consider the Switch-to-Switch interlink scenario, you can see that it looks very similar (if not better, in terms of traffic re-distribution) to the multiple-hosts-to-multiple-hosts case, since all hosts on the first Switch can speak with all hosts on the second Switch, making extensive use of the trunk between those Switches. The traffic will thus be distributed quite evenly across the Port Trunking member links, enhancing throughput and gaining resiliency in case of a link failure.
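A hedged sketch of such an interlink (ports 47-48, group name trk1 and VLAN 10 are placeholders; the same trunk type must be configured on both ends, and VLAN membership is then set on the trunk group rather than on the individual ports):

; -- on Switch A --
trunk 47-48 trk1 lacp
vlan 10 tagged trk1
; -- on Switch B --
trunk 47-48 trk1 lacp
vlan 10 tagged trk1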


I'm not an HPE Employee