09-30-2010 04:08 AM
LACP ProCurve - Linux
Hello
I have some devices running Linux kernel 2.6.32, and I need to bond four NICs together. The bond would link to a ProCurve 8212zl. My question is: do you know whether the current LACP implementation on the ProCurve is compatible with 802.3ad bonding (mode 4)? I have read about some incompatibilities, but those reports are from 2007, and perhaps it works fine now.
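(For reference, this is how I would check which bonding driver these devices ship — just a quick sanity-check sketch:)
# load the bonding driver and report its version
modprobe bonding
modinfo bonding | grep -i '^version'
# kernels with sysfs bonding support list existing bonds here
cat /sys/class/net/bonding_masters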
Thanks in advance
Best Regards
10-04-2010 07:57 AM
Re: LACP ProCurve - Linux
Yes it should work :)
HTH
Gerhard
10-19-2010 02:46 AM
Re: LACP ProCurve - Linux
Hi,
As already said, yes, it should work.
Bonding mode 4 = 802.3ad dynamic trunking.
http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding
For HP ProCurve switches, the distribution algorithm is SA/DA: the switch uses the SrcMAC/DstMAC pair to load-balance each conversation.
Here is the command to configure LACP in the active state:
- interface 'port' lacp active
All ports in the trunk need to be identical: all copper or all fiber, and all the same speed (100 or 1000).
Ports should be left to autonegotiate, or they all need to run the same speed/duplex/flow-control setup.
Use "show lacp" to verify the operating status of the trunk and its ports.
My suggestion is to use static LACP instead of dynamic on the switch side. It is compatible with dynamic trunking on the Linux end; the only difference is that standby links will be ignored by your Linux device.
- trunk 'port' 'porttrunk' lacp
- interface 'port' lacp active
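On the Linux side, the matching mode 4 setup could look like this (a minimal sketch, assuming Red Hat-style ifcfg files; device names and the IP address are only illustrative):
# /etc/modprobe.d/bonding.conf -- let ifup load the bonding driver
alias bond0 bonding

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10        # illustrative address
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=slow"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1, eth2, eth3)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes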
Take a look at the "Management and Configuration Guide" for a better understanding of the configuration and its restrictions.
http://www.hp.com/rnd/support/manuals/8200zl.htm
http://cdn.procurve.com/training/Manuals/3500-5400-6200-6600-8200-MCG-Mar10-K_14_52.pdf
The relevant section starts at page 325.
Regards,
06-03-2012 08:45 PM
Re: LACP ProCurve - Linux
I've configured 802.3ad (i.e., mode=4) on both interfaces on both hosts...
[eric@sn2 ~]$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 500
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 6
        Number of ports: 2
        Actor Key: 17
        Partner Key: 52
        Partner Mac Address: 00:13:21:b7:1a:40

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:17:08:7e:d2:b6
Aggregator ID: 6
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:17:08:7e:d2:b7
Aggregator ID: 6
Slave queue ID: 0
...and reloaded all interfaces.
I've statically configured LACP trunks on each pair of interfaces...
ProCurve Switch 2824# show lacp

                           LACP

   PORT   LACP      TRUNK     PORT      LACP      LACP
   NUMB   ENABLED   GROUP     STATUS    PARTNER   STATUS
   ----   -------   -------   -------   -------   -------
   1      Passive   1         Up        No        Success
   ...
   17     Active    Trk1      Up        Yes       Success
   18     Active    Trk1      Up        Yes       Success
   19     Active    Trk2      Up        Yes       Success
   20     Active    Trk2      Up        Yes       Success
   ...
   24     Passive   24        Down      No        Success
ProCurve Switch 2824# show run
Running configuration:
; J4903A Configuration Editor; Created on release #I.10.77
hostname "ProCurve Switch 2824"
interface 17
no lacp
exit
interface 18
no lacp
exit
interface 19
no lacp
exit
interface 20
no lacp
exit
trunk 17-18 Trk1 LACP
trunk 19-20 Trk2 LACP
vlan 1
name "DEFAULT_VLAN"
untagged 1-16,21-24,Trk1-Trk2
ip address dhcp-bootp
exit
spanning-tree Trk1 priority 4
spanning-tree Trk2 priority 4
But the maximum throughput is still only ~1 Gbps. What else do I need to do in order to achieve 2 Gbps between each host?
06-03-2012 09:28 PM
Re: LACP ProCurve - Linux
The Guide had this to say about traffic distribution:
Table 12-3. General Operating Rules for Port Trunks
Traffic Distribution: All of the switch trunk protocols use the SA/DA (Source Address/Destination Address) method of distributing traffic across the trunked links. See “Outbound Traffic Distribution Across Trunked Links” on page 12-26.
Outbound Traffic Distribution Across Trunked Links
All three trunk group options (LACP, Trunk, and FEC) use source-destination address pairs (SA/DA) for distributing outbound traffic over trunked links. SA/DA (source address/destination address) causes the switch to distribute outbound traffic to the links within the trunk group on the basis of source/destination address pairs. That is, the switch sends traffic from the same source address to the same destination address through the same trunked link, and sends traffic from the same source address to a different destination address through a different link, depending on the rotation of path assignments among the links in the trunk. Likewise, the switch distributes traffic for the same destination address but from different source addresses through different links. Because the amount of traffic coming from or going to various nodes in a network can vary widely, it is possible for one link in a trunk group to be fully utilized while others in the same trunk have unused bandwidth capacity even though the address assignments are evenly distributed across the links in a trunk. In actual networking environments, this is rarely a problem.
However, regardless of which mode I use to configure the bonding interface...
- balance-rr (mode=0)
- active-backup (mode=1)
- balance-xor (mode=2)
- broadcast (mode=3)
- 802.3ad (mode=4)
- balance-tlb (mode=5)
- balance-alb (mode=6)
...the combined throughput never exceeds 1 Gbps. In fact, in balance-rr mode (mode=0) throughput averages ~600 Mbps, and in broadcast mode (mode=3) it averages ~160 Mbps! Is there some host configuration that needs tweaking here, e.g., xmit_hash_policy?
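(For reference, setting a layer3+4 transmit hash would look something like this — a sketch only, and it would affect just the traffic this host transmits; the switch hashes the reverse direction on its own:)
# in /etc/modprobe.d/bonding.conf, applied at module load
options bonding mode=802.3ad miimon=500 xmit_hash_policy=layer3+4

# or via sysfs, taking the bond down first on older drivers
ifconfig bond0 down
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
ifconfig bond0 up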
REFERENCE
http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding
06-04-2012 07:17 PM - edited 06-04-2012 08:45 PM
Re: LACP ProCurve - Linux
When using 802.3ad (mode=4) in an environment where...
- The distribution switch is using Dynamic LACP (by default),
- One system, acting as a server, has three links to the distribution switch, and
- Three systems, acting as clients, each have two links to the distribution switch,
...the results are:
- Throughput to each client is unequal but consistent, with (it would appear) one client consuming one entire link, the other two clients sharing another link (4:5), and the third link unused.
- Aggregate throughput is approximately two-thirds (i.e., 66%) of the sum of the three 1 Gbps links.
[ 6] local 192.168.1.20 port 5001 connected with 192.168.1.21 port 52104
[ 4] local 192.168.1.20 port 5001 connected with 192.168.1.23 port 44910
[ 5] local 192.168.1.20 port 5001 connected with 192.168.1.22 port 45614
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 605 MBytes 507 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 476 MBytes 399 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec
And when one link is removed from the server, so that the test environment consists of...
- The distribution switch is using Dynamic LACP (by default),
- One system, acting as a server, has two links to the distribution switch, and
- Three systems, acting as clients, each have two links to the distribution switch,
...the results are:
- Throughput is evenly distributed and approximately equal among the clients (310 Mbps +/- 20 Mbps).
- Aggregate throughput is approximately one-half (i.e., 50%) of the sum of the two 1 Gbps links.
[  7] local 192.168.1.20 port 5001 connected with 192.168.1.23 port 44905
[  4] local 192.168.1.20 port 5001 connected with 192.168.1.22 port 45609
[  5] local 192.168.1.20 port 5001 connected with 192.168.1.21 port 52099
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   362 MBytes   304 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  7]  0.0-10.0 sec   351 MBytes   294 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   378 MBytes   317 Mbits/sec
Why is this? What happens to the capacity of the server's second link?
Setup #1
[eric@sn1 ~]$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 500
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:78:56:2a:e8
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:78:56:2a:e9
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1f:29:c4:9f:ae
Slave queue ID: 0

[eric@sn1 ~]$ for x in bond0 eth0 eth1 eth3 ; do /sbin/ifconfig $x | grep HWaddr ; done
bond0     Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth0      Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth1      Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth3      Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
Setup #2
[eric@sn1 ~]$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 500
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 23
        Number of ports: 2
        Actor Key: 17
        Partner Key: 65535
        Partner Mac Address: 00:13:21:b7:1a:40

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:78:56:2a:e8
Aggregator ID: 23
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:78:56:2a:e9
Aggregator ID: 23
Slave queue ID: 0

[eric@sn1 ~]$ for x in bond0 eth0 eth1 eth3 ; do /sbin/ifconfig $x | grep HWaddr ; done
bond0     Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth0      Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth1      Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth3      Link encap:Ethernet  HWaddr 00:1F:29:C4:9F:AE
06-04-2012 08:44 PM
Re: LACP ProCurve - Linux
Even though adaptive load balancing (mode=6) makes much better use of the server's links...
- Throughput to each client is equal and consistent.
- Aggregate throughput is approximately equal (i.e., 94%) to the sum of the three 1 Gbps links.
[  6] local 192.168.1.20 port 5001 connected with 192.168.1.22 port 45616
[  4] local 192.168.1.20 port 5001 connected with 192.168.1.21 port 52106
[  5] local 192.168.1.20 port 5001 connected with 192.168.1.23 port 44912
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
...it still isn't able to distribute traffic from the three clients evenly when the third link is removed from the server:
- Throughput to each client is unequal but consistent. It appears that one client is consuming one entire link and that the other two clients are sharing the remaining link (1:1).
- Aggregate throughput is approximately equal (i.e., 92-94%) to the sum of the two 1 Gbps links.
[  7] local 192.168.1.20 port 5001 connected with 192.168.1.21 port 52109
[  4] local 192.168.1.20 port 5001 connected with 192.168.1.23 port 44915
[  5] local 192.168.1.20 port 5001 connected with 192.168.1.22 port 45619
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   540 MBytes   453 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   566 MBytes   475 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  7]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
Unfortunately, I think this is as good as it's going to get with the HP ProCurve using the SA/DA method of distributing traffic across the trunked links. My original goal was to establish multi-gigabit links between two individual GlusterFS storage nodes in order to synchronize data between them, but I'd welcome any suggestions for improving this configuration.
Setup #1
[eric@sn1 ~]$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 500
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:78:56:2a:e8
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:78:56:2a:e9
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1f:29:c4:9f:ae
Slave queue ID: 0

[eric@sn1 ~]$ !for
for x in bond0 eth0 eth1 eth3 ; do /sbin/ifconfig $x | grep HWaddr ; done
bond0     Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth0      Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth1      Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E9
eth3      Link encap:Ethernet  HWaddr 00:1F:29:C4:9F:AE
Setup #2
[eric@sn1 ~]$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 500
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:78:56:2a:e8
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:78:56:2a:e9
Slave queue ID: 0

[eric@sn1 ~]$ for x in bond0 eth0 eth1 eth3 ; do /sbin/ifconfig $x | grep HWaddr ; done
bond0     Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth0      Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E8
eth1      Link encap:Ethernet  HWaddr 00:1B:78:56:2A:E9
eth3      Link encap:Ethernet  HWaddr 00:1F:29:C4:9F:AE
06-06-2012 09:42 AM
Re: LACP ProCurve - Linux
Hi Eric
You are on the right track, but there is a bit of good news if you use multiple TCP connections or UDP sessions. You might want to look at the 3500/6200/5400/8200/6600: on these models, newer software gives you the ability to load-balance on the L4 port numbers. This might let you squeeze out a bit more.
You will still run into the limit of a maximum of 1 Gbps per L4 flow (source MAC, source IP, source port, destination MAC, destination IP, destination port), but this does add more salt to the hashing.
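To actually see the benefit, the traffic has to span several distinct L4 flows; for example, with iperf (addresses illustrative):
# on the server
iperf -s
# on the client: four parallel TCP streams, each with its own source port,
# so an L4-aware hash can spread them across the trunk members
iperf -c 192.168.1.20 -P 4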
HTH
Gerhard