
Trunk Performance

Omer Asik
Occasional Advisor

Trunk Performance

Hi,

We have a 5308 and a 3448 switch. I have connected these switches with two 1 Gbit/s cables and configured a trunk. I expected the throughput to be 2 Gbit/s in normal operation. We have generated 2 Gbit/s of network traffic between these switches, but the total throughput is still 1 Gbit/s.
Why can't I get 2 Gbit/s throughput?
11 REPLIES
Les Ligetfalvy
Esteemed Contributor

Re: Trunk Performance

You need to elaborate on how you generate 2 Gbit/s of traffic.
Kell van Daal
Respected Contributor

Re: Trunk Performance

Trunks divide traffic based on SA/DA (source address/destination address, both MAC addresses) pairs.
If you only generate traffic with one SA/DA pair, you will only utilise one link in the trunk.

So if you want 2 Gbit/s, you will need at least two SAs or two DAs.
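To sketch that idea in code: the switch picks one physical link per SA/DA pair, so a single pair can never exceed one link's bandwidth. The XOR-of-low-bytes hash below is purely illustrative; real switch ASICs use vendor-specific hash functions.

```python
# Illustrative sketch only: actual switch hash functions are vendor-specific.
def link_for(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Pick a trunk link by XORing the low bytes of the two MAC addresses."""
    src_low = int(src_mac.replace(":", "")[-2:], 16)
    dst_low = int(dst_mac.replace(":", "")[-2:], 16)
    return (src_low ^ dst_low) % num_links

# A single SA/DA pair always maps to the same link, regardless of volume:
print(link_for("00:11:22:33:44:01", "00:11:22:33:44:02", 2))  # -> 1
# A second pair may (or may not) land on the other link:
print(link_for("00:11:22:33:44:01", "00:11:22:33:44:03", 2))  # -> 0
```

With only the first pair active, every frame rides the same link and the trunk tops out at 1 Gbit/s, which matches the behaviour described above.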
rick jones
Honored Contributor

Re: Trunk Performance

Generalizing a bit: bonding/trunking/aggregation (a group of links by any other name...) will spread the traffic of different "flows" across multiple links, but will not spread the traffic of a single flow across multiple links.

It is "plusungood" for an aggregate to spread the traffic of a single flow across multiple links. To do so raises the possibility of the traffic being reordered.

A "flow" can be defined in one of several ways, depending on the specifics of the bonding/trunking/aggregation implementation. It could be defined as the tuple of source and destination MAC (eg Ethernet) addresses. It could also be defined as the tuple of source and destination IP addresses. It can also be defined as for a TCP connection or UDP "connection": the four-tuple of source and destination IP addresses and source and destination port numbers.

While most well-engineered protocols (eg TCP) will still _work_ when their traffic is reordered, they may not work _well_. In the specific case of TCP, if sufficient traffic is reordered it can fool TCP into thinking segments have been lost, and it will start to generate spurious retransmissions and clamp down on its congestion window. The receiving TCP will also send many more ACK segments than it would have otherwise.

So as a general rule, bonding/trunking/aggregation will not spread the traffic of a single flow across multiple links. To maximize throughput across an aggregate you will need at least as many "flows" (as defined by the specific implementation) as there are links.

If you want a single "flow" to go faster you need to use a faster link instead of a link aggregate.
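As a rough illustration of the four-tuple case (the MD5-based hash here is a stand-in, not any vendor's actual algorithm): distinct TCP connections between the same two hosts can spread over the links, while any single connection always stays on one link.

```python
import collections
import hashlib

def link_for_flow(src_ip, dst_ip, src_port, dst_port, num_links):
    """Hash a TCP/UDP four-tuple to a link index (illustrative only)."""
    key = f"{src_ip},{dst_ip},{src_port},{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

# 16 connections between one host pair, varying only the source port:
counts = collections.Counter(
    link_for_flow("10.0.0.1", "10.0.0.2", sport, 5001, 4)
    for sport in range(40000, 40016)
)
print(dict(counts))  # the 16 flows land on up to 4 different links
```

A four-tuple hash can balance a single host pair across links, which a MAC-only hash never can; that is why the choice of hash field matters so much for aggregate throughput.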
there is no rest for the wicked yet the virtuous have no pillows
Omer Asik
Occasional Advisor

Re: Trunk Performance

I am using two source and two destination addresses, as in the diagram in the attachment. But I still get 1 Gbit/s throughput.
rick jones
Honored Contributor

Re: Trunk Performance

There is always the chance that the addresses on the systems involved happen to hash the same way and so hit the same link in the aggregate.
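A quick simulation of that chance (again with a made-up XOR hash, since the real algorithm isn't published): with two links, roughly half of all other SA/DA pairs hash to the same link as any given pair.

```python
import random

def link_for(sa_low: int, da_low: int, num_links: int) -> int:
    """Illustrative hash: XOR the low MAC bytes, mod the link count."""
    return (sa_low ^ da_low) % num_links

random.seed(0)                             # reproducible trial
first = link_for(0x01, 0x02, 2)            # link used by the first pair
trials = [link_for(random.randrange(256), random.randrange(256), 2)
          for _ in range(10000)]           # 10,000 random other pairs
collision_rate = sum(t == first for t in trials) / len(trials)
print(f"fraction of random pairs sharing link {first}: {collision_rate:.2f}")
```

So with only two SA/DA pairs in play, ending up on the same link is close to a coin flip, not bad luck.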

Also, being triply certain the aggregate is active/active rather than active/standby would be goodness.
Donald L Wong
Occasional Visitor

Re: Trunk Performance

I'm also experiencing the same problem. In this case I have two 3400cl-48Gs and one 2824.

From the "first" 3400cl I have 4 ports trunked to the "second" 3400cl. From the "first" 3400cl I have 2 ports trunked to the 2824.

I have used 16 computers, 6 attached to the 2824, 6 attached to the "second" 3400cl, and 4 computers attached to the "first" 3400cl.

Anyhow, I ran iperf in several combinations (and watched the port counters). Over the 2-port trunk, if I use ports 21 and 22 on the 2824 and ports 9 and 10 on the 3400, it always uses the first port of the trunk. It never uses the second unless I fail the first. Now if I move the connection from port 22 to port 15 and trunk it, it always uses port 21 (on the 2824) in one direction and port 15 in the other direction. I have used all 16 computers in these tests (both the 21/22 port scenario and the 21/15 port scenario on the 2824).

If I do the same tests with the two 3400's, it only uses the 1st and 3rd ports of the trunk. If I mix the ports around, the behaviour is the same.

So either I'm really, really, really unlucky with my SA/DA pairs, or the aggregation algorithm is really poor. I have tried LACP and static trunks in both setups. If someone knows the algorithm, I'm happy to forward MAC and IP addresses.

If anyone knows how to "hack" around this, that'd be great. I have also tried two versions of firmware.

Donald
Matt Hobbs
Honored Contributor

Re: Trunk Performance

Donald,

I tried this with a very similar setup to yours.

3400-48G with a 4 port trunk going to a 3400-24G and a 2 port trunk going to a 2824.

To start with I had six clients, three on the 3400-48G, and three on the 2824. I then started sending traffic from the servers on the 3400 to those on the 2824 with TfGen - http://www.st.rim.or.jp/~yumo/pub/tfgen.html

It balanced this on the 2 port trunk quite well for me.

Next I moved the servers from the 2824 to the 3400-24G. Interestingly enough, it would only utilise 2 links. If I failed one of the active links, all the traffic would fail over to the other 2 that previously had very little traffic on them. I'm assuming this is because I only had 3 SA/DA pairs.

Next I moved all of the servers except one on to the 3400-24G. I then started five copies of TfGen on the one remaining server on the 3400-48G. It then balanced the traffic between the 4 links.

If you need better performance between the 3400's I think you should consider the 10GbE interface options.
rick jones
Honored Contributor

Re: Trunk Performance

My understanding is that the packet scheduling done in the switches is based entirely on MAC addresses. That being the case, traffic between a pair of systems, no matter how many connections, would likely just use the one link in the trunk in each direction, and if the MACs were "right" (wrong) just one link in the trunk overall.

To get balancing across the links in the trunk you need to have several MAC addresses involved.
Matt Hobbs
Honored Contributor

Re: Trunk Performance

I just tried one last test: I put 3 servers on each of the 3400's, then on the servers connected to the 48G I ran 3 copies each of TfGen, going to each of the clients on the 24G. This created nine SA/DA pairs, and subsequently the load was balanced across the 4-port trunk. It wasn't even, though, but this is to be expected.

Each conversation was worth about 13%. Two of the ports were carrying 40% traffic (3 conversations), one was carrying 27% (2), and one just 13% (1) = 9 conversations.
Donald L Wong
Occasional Visitor

Re: Trunk Performance

Thanks for the replies Rick and Matt. Rick, I have at least 21 SA/DA pairs, so I don't think that's the case.

Matt, thank you so much for the work you did, it was greatly appreciated. Once I figure out how to assign points, I'll run points your way.

A minor request, Matt, if you have the time (I feel bad for asking): can you tell me what firmware you are running on the 3400cl's? I want to match your environment as closely as possible, as it appears you've had some success. Also, if it's not much work, tell me what ports you are using and how they are paired up (between the 3400cl's). If it is work, don't worry about it.

More info on the traffic pattern: I rotated the ports around some more between the two 3400's (as it did change things with the 2824), and it did change the traffic pattern this time, using 3 ports, but...

What I did notice, watching traffic between the two 3400's, is that it always used 2 specific links for traffic in one direction and 2 specific links for traffic in the other direction (and in this scenario it didn't use the 4th trunk link at all). And I did use the majority of the nodes in the previous tests, so many SA/DA pairs.

Donald
Matt Hobbs
Honored Contributor

Re: Trunk Performance

Hi Donald,

For my setup the 48G was using ports 43-44 trunked to the 2824 on ports 23-24.

The 48G trunked to the 24G on ports 45-48 and 21-24.

Using the latest I.08.87 and M.08.86 firmware.

I've attached a screenshot too of what the traffic distribution looked like in my previous post.

For me, when starting the traffic generator on the first server on the 48G, which is sending to 3 servers on the 24G, it builds 3 of the links up equally, so ports 46,47,48 each have 13% traffic. I then start the second server doing exactly the same thing, once again it builds up the same 3 links and now they're each at 27%. I start up the final server and that's when port 45 is used but only with 13%, the rest of the traffic adds on to ports 46 and 48.

By the way since you did not start this thread you can't allocate points, but thanks anyway.