03-29-2017 12:25 AM
Load balancing interfaces - trunk vs lacp
When bundling interfaces like this:
trunk <interfaces> Trkx trunk
vs.
trunk <interfaces> Trkx lacp
what is the difference?
If I am trunking two 1 Gbps links, will I then get 2 Gbps?
Regards, Lars.
03-29-2017 12:55 AM - edited 03-29-2017 01:10 AM
Re: Load balancing interfaces - trunk vs lacp
The difference is that the first command aggregates the interfaces using the non-protocol method (i.e., not the IEEE 802.3ad trunking protocol): the trunk is static only and operates independently of any trunking protocol, so there is no protocol exchange with the device at the other end. The second command aggregates them using the IEEE 802.3ad (LACP) standard, which manages the member links dynamically (or statically) and lets the involved switches auto-negotiate the aggregation. LACP can therefore perform more advanced (L2-, L3- or L4-based) load balancing, whereas a non-protocol trunk simply uses the SA/DA method of distributing outbound traffic through its member ports, without negotiating with the other end how to handle the traffic.
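For example, a minimal sketch (ports 1-2 and group trk1 are placeholders I chose for illustration; adapt them to your port numbering and model):

   trunk 1-2 trk1 trunk
   trunk 1-2 trk1 lacp
   show trunks
   show lacp

The first command builds the static non-protocol trunk, the second the IEEE 802.3ad one (you would use one or the other for a given group); show trunks and show lacp let you verify group membership and LACP status. Both forms create the same logical port (Trk1); only the LACP one exchanges LACPDUs with the partner device.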
Generally, if both of your switches are from the same manufacturer (say HP/HPE Aruba, as an example), you should go with LACP port trunking to benefit from its options, like defining an additional failover link as a standby link beyond the maximum number of member links already active in the port trunking group.
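Just as a hedged sketch of the dynamic variant (the port list is a placeholder, and the supported maximum of active links and the exact standby behavior depend on your model and software release, so double-check the guide):

   interface 1-3 lacp active
   show lacp

With dynamic LACP the ports negotiate the trunk by themselves, and links beyond the supported maximum for the group should show up as standby, ready to take over if an active member fails.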
More detailed overviews, descriptions and pro/con comparisons of LACP (IEEE 802.3ad) versus trunk (non-protocol) can easily be found everywhere but, first (exactly because you probably have HP ProCurve or HPE Aruba units), have a look at any reasonably recent HPE ArubaOS-Switch Management and Configuration Guide: there you will generally find an entire chapter on "Port Trunking" that explains and illustrates both the LACP and trunk modes.
Regarding your question about the aggregated total bandwidth, there isn't a single answer. Generally, a single-host-to-single-host traffic session can't be spread across the individual links to saturate the whole bandwidth a port trunk provides: it can only saturate the bandwidth of the one link it is hashed onto (it uses one link at a time). In a multiple-hosts-to-single-host (or multiple-hosts-to-multiple-hosts) scenario, however, you have many concurrent sessions, and those sessions will be distributed across all the port trunk member links, de facto using all the available bandwidth (provided the destination host is capable of sustaining that traffic).
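Concretely, with your two 1 Gbps links: a single client pulling data from a single server will still top out at about 1 Gbps, while two clients whose conversations hash onto different member links can together approach 2 Gbps. On many ProVision/ArubaOS-Switch releases you can also influence the hashing with the global trunk-load-balance command, for example:

   trunk-load-balance L4-based

but the exact keywords vary by software version (this one is an assumption on my side), so verify it in your Management and Configuration Guide before relying on it.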
If you consider the Switch-to-Switch interlink scenario you can recognize that it looks very similar (if not better, in terms of traffic re-distribution) to the case multiple-hosts-to-multiple-hosts scenario since all hosts of the first Switch can speak with all hosts of the second Switch so using extensively the trunk between those Switches (and so the traffic will be distributed quite optimally against the Port Trunking member links)...obtaining an enhancement on the throughput and gaining resiliency in case of a link failure.
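A minimal interlink sketch (port list, group name and VLAN ID are placeholders of mine; the same form goes on both switches, each with its own port numbers):

   trunk 23-24 trk1 lacp
   vlan 10 tagged trk1

If one member link then fails, traffic simply re-hashes onto the surviving link(s) without taking the interlink down.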
I'm not an HPE Employee
