
Strange Trunk behaviour

KGDI
Occasional Collector

Strange Trunk behaviour

Hi,

 

I am doing a lab scenario with two 2610-24 switches before implementing in production. The test case is to create a trunk between SW-01 and SW-02 using three ports.

 

I have upgraded both switches to the latest firmware I could find (R.11.63), and the configuration has been cleared on both switches.

 

The configuration is simple and identical on both switches:

 

SW-01:

 

; J9085A Configuration Editor; Created on release #R.11.63

hostname "SW-01"
trunk 19,21,23 Trk1 LACP
snmp-server community "public" Unrestricted
vlan 1
   name "DEFAULT_VLAN"
   untagged 1-18,20,22,24-28,Trk1
   ip address 10.11.133.40 255.255.255.0
   ip address 10.11.181.40 255.255.255.0
   exit
spanning-tree Trk1 priority 4
password manager
password operator

SW-02:

 

; J9085A Configuration Editor; Created on release #R.11.63

hostname "SW-02"
trunk 19,21,23 Trk1 LACP
snmp-server community "public" Unrestricted
vlan 1
   name "DEFAULT_VLAN"
   untagged 1-18,20,22,24-28,Trk1
   ip address 10.11.133.41 255.255.255.0
   ip address 10.11.181.41 255.255.255.0
   exit
spanning-tree Trk1 priority 4
password manager
password operator

I have two clients with 1 Gb NICs connected to gigabit ports on SW-01, and two clients with 1 Gb NICs connected to gigabit ports on SW-02.

 

Ports 19, 21, and 23 are 10/100 ports.

 

I start an iperf server on each of the clients on SW-01 and run a connection from each of the clients on SW-02.

 

The result is about 40-50 Mbit/s each, with a total of ~94 Mbit/s.

 

I thought this was strange and struggled with different configurations trying to get the speed up (active/passive, dynamic trunks, etc.).

 

I like the console, but after a while I used the web interface, and I could clearly see in the graphs that all traffic was passing through port 19. The other ports were without traffic. I then decided to unplug port 19 to see if the traffic would be redirected to port 21 or 23 in the trunk. To my surprise, the speed boosted as I would have expected in the first place, and I now got 94-95 Mbit/s each, with a total of ~190 Mbit/s.

 

I thought maybe there was something wrong with port 19, so I plugged it back in and unplugged 21 instead. Same result: 94-95 Mbit/s each, with a total of ~190 Mbit/s.

 

I tried unplugging 23: same result, 94-95 Mbit/s each, with a total of ~190 Mbit/s.

 

So,

with two cables in the trunk (no matter which ones) I get 94-95 Mbit/s each, with a total of ~190 Mbit/s;

with all three cables in the trunk I get 40-50 Mbit/s each, with a total of ~94 Mbit/s.

 

Any ideas on what could be causing this behaviour?

 

/K

2 REPLIES
FranoM
Advisor

Re: Strange Trunk behaviour

Hi,
The traffic distribution across the links of a trunk (link aggregation) between HP switches is statistical and fixed.

 

I'll try to explain:

 

1) If you aggregate 4 × 100 Mbit/s links, you will not get 400 Mbit/s of throughput for a single flow. You get four separate 100 Mbit/s paths.

2) As far as I remember, the switch computes a hash over the source MAC / destination MAC of each conversation and statically assigns it to one of the links. A given source/destination pair will always use the same physical link.

3) When a physical link is added or removed, the distribution is instantly recalculated.
With a large number of devices on both sides of the trunk, you will probably get a good spread across the physical links.

 

In your case, the hash happened to put both pairs of devices on the same link, so in this specific case it is worse with 3 cables than with 2.
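
The pinning described above can be sketched in a few lines. This is an illustrative model only: the actual hash used by the 2610 is not documented in this thread, so the XOR-of-low-address-bits scheme below is an assumption (a common choice for src/dst-MAC balancing), and the MAC addresses are made up:

```python
# Toy model of per-flow link selection in a trunk. Assumption: the
# switch XORs the low-order byte of the source and destination MAC
# and takes the result modulo the number of active links. The real
# 2610 hash may differ; the point is only that the mapping is static.

def pick_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Index of the physical link a src/dst MAC pair is pinned to."""
    src_low = int(src_mac.replace(":", ""), 16) & 0xFF
    dst_low = int(dst_mac.replace(":", ""), 16) & 0xFF
    return (src_low ^ dst_low) % n_links

# Two hypothetical client pairs whose hashes collide with 3 links
# but diverge with 2 -- matching the behaviour seen in the lab:
pairs = [("00:11:22:33:44:01", "00:aa:bb:cc:dd:02"),   # XOR of low bytes = 3
         ("00:11:22:33:44:02", "00:aa:bb:cc:dd:04")]   # XOR of low bytes = 6

print([pick_link(s, d, 3) for s, d in pairs])  # [0, 0] -> both flows on one link
print([pick_link(s, d, 2) for s, d in pairs])  # [1, 0] -> flows split across links
```

Note that changing the number of active links changes every flow's modulo result, which is consistent with pulling any one of the three cables reshuffling the two flows onto separate links.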

Try with 3 cables and other devices (different MAC addresses); you will probably get different results.

 

Hope it helps.

 

Regards,

 

François

KGDI
Occasional Collector

Re: Strange Trunk behaviour

Hi François,

 

Thank you for your feedback.

1) Yes, I know; I was not expecting 400 Mbit/s. I expected the two hosts on SW-02 each to be able to perform at ~100 Mbit/s.

 

2) That makes sense.

 

3) That makes sense.

Regarding the things you point out at the end, it seems to me that it would make more sense if there were some logic to detect that only one of the ports in the trunk was in use. The reason for configuring the trunk in the first place is performance, and it makes no sense to me that the trunking would put two sessions on one port and zero sessions on the other two. That is just bad logic in my book.

 

I understand that the load would probably be more even across the ports with more hosts, but it sounds strange that more hosts are needed for the performance to be optimal; it should be optimal regardless of the number of hosts involved. Again, this sounds like a design flaw in my book.
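
For what it is worth, the "more hosts" argument can be checked with a quick simulation. Assuming each flow lands on a uniformly random link (an idealisation of the hash, not the switch's documented behaviour), idle links are guaranteed with two flows and become vanishingly rare as the flow count grows:

```python
import random

random.seed(1)  # reproducible runs

def idle_link_fraction(n_flows: int, n_links: int, trials: int = 10_000) -> float:
    """Fraction of trials in which at least one link carries no flow,
    with each flow pinned to a uniformly random link."""
    idle = 0
    for _ in range(trials):
        used = {random.randrange(n_links) for _ in range(n_flows)}
        if len(used) < n_links:
            idle += 1
    return idle / trials

print(idle_link_fraction(2, 3))   # 1.0 -- two flows can never cover three links
print(idle_link_fraction(50, 3))  # ~0.0 -- with 50 flows an idle link is very rare
```

So the statistical scheme only pays off with many concurrent conversations; with two flows on three links, at least one link always sits idle.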

 

/K