
Jacques Nepveu
Occasional Advisor

Surprising UDP packet loss during iperf3 speed tests

I am currently testing bandwidth throughput with iperf3 on an E3800 switch. I want to see if the switch can push 80 Mbps out an interface limited to 100 Mb (the future uplink to another site). Testing with TCP gives me the expected speed with no packet loss; however, testing with UDP produces over 33% packet loss.

My client is on a 1 Gb interface (auto-negotiated) while my server is on a 100 Mb interface (forced on both the switch and the server). If I increase the per-port buffer size by halving the number of QoS queues (to 4), the packet loss drops to about 12%.
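For readers wanting to reproduce the queue/buffer trade-off described above: on ProVision software this is the `qos queue-config` setting. A hedged sketch only; check the CLI reference for your exact software version, and note that on most releases this change forces a switch reboot:

```
switch(config)# qos queue-config 4-queues
This command will take effect after saving the configuration and rebooting.
switch(config)# write memory
switch(config)# reload
```

Fewer queues means each remaining queue gets a larger share of the port's packet buffer, which is why the loss rate dropped.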

My question is: why does the buffer overflow when sending at 80 Mbps from a 1 Gb interface to a 100 Mb interface? Shouldn't the switch be able to handle that with little or no loss?

I would like to point out that there is no packet loss if the client interface is set to 100 Mb to match the server interface. I am also the only one using this switch.

I am using the following commands:
iperf3 -s
iperf3 -c <ServerIP> -u -l 1400 -b 80m -t 20

Thank you in advance,

Jacques Nepveu

6 REPLIES
Mike_ES
Valued Contributor

Re: Surprising UDP packet loss during iperf3 speed tests

Hello,

"Flow Control works best when the network can signal the source server to pause and stop overloading the network when congestion occurs.

This makes the most sense with non-TCP and non-UDP based applications that cannot tolerate ANY packet loss. This is typically in environments where all traffic is constrained to a local LAN.

For most applications, it is best to let TCP handle the window sizing and retransmissions. And most UDP applications can handle typical packet loss from well engineered networks just fine.

If you really know what you are doing...you could try to use flow control on the switch port with the lowest bandwidth, as long as the upstream switch has more than sufficient buffer capacity. But this can be tricky to engineer properly and more often than not creates more problems than helps.

For most situations, you will be better off re-engineering your network and deploying switches with sufficient buffer capacity where you detect bottlenecks in your network."
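If you do want to experiment with flow control despite the caveats above, on ProVision switches it is enabled per port. A hedged sketch (the port number is a placeholder; verify the syntax against your software's CLI reference):

```
switch(config)# interface 21 flow-control
switch(config)# show interfaces brief
```

Both ends of the link must negotiate flow control for PAUSE frames to have any effect, so check the link partner's settings as well.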

Jacques Nepveu
Occasional Advisor

Re: Surprising UDP packet loss during iperf3 speed tests

Hi,

I do not think using flow control together with QoS is a recommended configuration.

Jacques Nepveu

16again
Respected Contributor

Re: Surprising UDP packet loss during iperf3 speed tests

I suspect your 80 Mb/s UDP stream isn't smoothly shaped but bursty:
for instance, a few milliseconds at 1 Gb/s line rate, then 10–20 milliseconds at 0 b/s.
That overflows the switch buffer, resulting in your packet loss.
You could try putting a better traffic shaper in between.

Also, the port where you're doing the bandwidth limiting should be shaping, not policing.
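As a rough sanity check on the burstiness theory, here is a back-of-the-envelope model. It is a sketch only: the ~1 ms sender pacing interval is an assumption about iperf3's default sender behaviour (not something measured in this thread), and real queue occupancy depends on how the switch allocates its shared buffer.

```shell
#!/bin/sh
# Burst-drain model: an 80 Mb/s UDP stream sent in line-rate bursts
# every pacing interval, draining through a 100 Mb/s egress port.

RATE_MBPS=80          # offered UDP rate
INGRESS_MBPS=1000     # client link speed
EGRESS_MBPS=100       # server link speed
INTERVAL_US=1000      # assumed sender pacing interval (~1 ms)

# Bytes emitted per pacing interval: rate (Mb/s) * interval (us) / 8
BURST_BYTES=$(( RATE_MBPS * INTERVAL_US / 8 ))

# Each burst arrives at 1 Gb/s but drains at 100 Mb/s, so roughly
# (1 - egress/ingress) of it must sit in the egress queue at the peak:
QUEUE_BYTES=$(( BURST_BYTES * (INGRESS_MBPS - EGRESS_MBPS) / INGRESS_MBPS ))

echo "burst per interval: ${BURST_BYTES} bytes"
echo "peak queue demand : ${QUEUE_BYTES} bytes"
```

So every millisecond the egress queue briefly needs on the order of 9 kB; sustained bursts, or coarser pacing, push that well past a shallow per-queue buffer. If your iperf3 build supports it, a shorter pacing interval (e.g. `--pacing-timer 100`) spreads the bursts out and should reduce the peak queue demand.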

 

Mike_ES
Valued Contributor

Re: Surprising UDP packet loss during iperf3 speed tests


@16again wrote:

I suspect your 80 Mb/s UDP stream isn't smoothly shaped but bursty:
for instance, a few milliseconds at 1 Gb/s line rate, then 10–20 milliseconds at 0 b/s.
That overflows the switch buffer, resulting in your packet loss.
You could try putting a better traffic shaper in between.

Also, the port where you're doing the bandwidth limiting should be shaping, not policing.

 


And my suggestion for testing full UDP throughput: leave the default FIFO queuing mechanism on the switch interfaces (100 Mb / 1 Gb) and set auto/auto speed/duplex on those interfaces as well.

Michal

Jacques Nepveu
Occasional Advisor

Re: Surprising UDP packet loss during iperf3 speed tests

Hi,

I suppose it could be the way iperf3 generates the UDP stream that is causing the problem. I will try another program capable of generating UDP streams.

Thanks,

Jacques Nepveu

Jacques Nepveu
Occasional Advisor

Re: Surprising UDP packet loss during iperf3 speed tests

Hi Mike_ES,

Well, there is no problem when the ports are set to the same speed, whether forced or negotiated, 100FDx or 1000FDx. My problem is that the future uplink port has to be forced to 100FDx, because that is what the ISP equipment will force me to use.

Jacques Nepveu