Richa03
New Member

rsync latency testing

I would like to see the effect of different network latencies on rsync and scp.

I know about the --bwlimit parameter, which limits bandwidth - it does not add latency between packets.
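For example, a transfer can be capped at roughly 1000 KB/s like this (the paths and host here are just placeholders):

rsync -av --bwlimit=1000 /data/ remotehost:/data/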

I tried to use tc thusly:
/sbin/tc qdisc add dev eth2 root netem delay 200ms

I know the command worked:
/sbin/tc qdisc
qdisc netem 8007: dev eth2 limit 1000 delay 200.0ms
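For reference, the delay can later be changed in place or removed entirely:

/sbin/tc qdisc change dev eth2 root netem delay 2000ms
/sbin/tc qdisc del dev eth2 root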

I know I am using eth2 because this is the only external interface.

However, rsync still runs at the same rate with the same timings.

I read (http://www.docum.org/docum.org/faq/cache/49.html) that when using rsync over ssh, the packets are marked with TOS minimize-delay.

So, I set up a netcat proxy (putting:
ProxyCommand nc %h %p
into ~/.ssh/config), because I read (http://savannah.gnu.org/maintenance/SshAccess) that OpenSSH sets the TOS (type of service) field after the user has submitted the password, and that nc doesn't set the TOS field.
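In full, the ~/.ssh/config entry looks something like this (the Host name is a placeholder for the real target):

Host remotehost
    # send the connection through netcat, which leaves the TOS field alone
    ProxyCommand nc %h %p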

I still get the same timings from rsync.

Would someone please tell me how to induce latency into an rsync stream to simulate a WAN situation?

Thanks,
Rich
5 REPLIES
Ivan Ferreira
Honored Contributor

Re: rsync latency testing

You could try:

http://wanem.sourceforge.net/

Have you tried another transfer program, like ftp, to see if the delay is applied?

You could also try:

iptables -t mangle -A POSTROUTING -p tcp -m tcp --sport 22 -j TOS --set-tos Normal-Service
iptables -t mangle -A POSTROUTING -p tcp -m tcp --dport 22 -j TOS --set-tos Normal-Service

Use a network sniffer like tcpdump or wireshark to verify the results.
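For example, something like this should show the TOS byte on the ssh packets (eth2 being your external interface):

tcpdump -i eth2 -v -n tcp port 22

With -v, tcpdump prints the IP header fields, including tos, so you can see whether the packets carry 0x10 (minimize-delay) or 0x00 (normal service) once the mangle rules are in place.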
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Richa03
New Member

Re: rsync latency testing

Hi Ivan,
Thanks for the pointers.

I am in no way any type of network engineer - I'm a DBA trying to simulate recovery over a high-latency network without such a network. However, I like to learn new things :)

An interactive FTP session doesn't seem to observe the delay introduced by tc as set above (200ms). However, a 21283840-byte file transfer (not bandwidth limited) takes 9.1 seconds.

Using a delay of 2000ms, I can see the delay in my terminal session and the FTP session. Also, the file transfer with FTP at 2000ms requires 90 seconds. This is consistent.
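Doing the arithmetic: 21283840 bytes / 9.1 seconds is roughly 2.3 MB/s at 200ms, and 21283840 bytes / 90 seconds is roughly 236 KB/s at 2000ms - ten times the delay gives about one tenth the throughput.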

So, it appears that the tc incantation is working - just not for rsync (or maybe ssh).

The iptables mangle table incantation didn't work for the rsync operation.

Testing with scp doesn't seem to abide by the tc rules or the iptables chain either... I see no difference with or without the tc rules, with or without the iptables rules.

Looking at WANem.

I can't use wireshark or any other type of sniffer here - our (single) network engineer is out for the week :(.

Any other suggestions are welcome!
rick jones
Honored Contributor

Re: rsync latency testing

The latency you introduced may not have been sufficient to go beyond the window used by rsync, especially since Linux will, by default, autotune the socket buffers and thus the TCP window sizes.

To verify you have added the latency, you should run something that explicitly tests latency, such as ping or a netperf TCP_RR test (alas, netperf.org is down at the moment, but depending on which Linux you are running you may be able to get a precompiled package).
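For example (remotehost being whatever you are copying to), compare the round-trip times with and without the netem qdisc in place:

ping -c 10 remotehost

If the qdisc is taking effect on that path, the average RTT should jump by roughly the 200ms you configured.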

One of the many limits to TCP performance is:

Throughput <= Weff/RTT

Where Weff (the "effective window") will be the minimum of:

A) the receiving TCP's advertised window (SO_RCVBUF, with the Linux autotuning complication)
B) the quantity of data the sending TCP can reference while waiting for ACKs from the remote (i.e. SO_SNDBUF, with the same complication)
C) the sending TCP's computed congestion window
D) the quantity of data the application is willing to send at one time before stopping to wait for a response from the remote.

So, if Weff is rather large, and/or your simulated bandwidth limit (i.e. Tput) is sufficiently small, 200 ms of delay may not "matter" for something doing bulk transfer.
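To put rough numbers on it: if autotuning has grown the effective window to, say, 2 MB, then 2 MB / 0.2 sec is 10 MB/s, so a 200 ms RTT would not cap the transfer until the link exceeded roughly 80 Mbit/s. Added latency only shows up in the timings when the window is small relative to the bandwidth-delay product.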

FWIW, the TOS field in the IP header will not (it had better not) be examined by the netem module. TOS settings are only examined at routers, if at all, and as such were a red herring in your situation.

"I know I am using eth2 because that is the only external interface." - can you expand on that a bit?
there is no rest for the wicked yet the virtuous have no pillows
Richa03
New Member

Re: rsync latency testing

Thanks, Rick - very informative.

Unfortunately (or fortunately as the case may be), I have stopped working on this as our network engineer is now back and we also have the WAN link.

BTW, the latency on the WAN link (from ping) is:
Success rate is 99 percent (4998/5000), round-trip min/avg/max = 1/3/42 ms

Much better than I expected - after the VPN build, we average 3-5ms (over 1200 pings).

I have seen some fairly bad latency links in the past and was concerned about it being an issue for recovery. This concern turned out to be nothing in this case.

Thanks to all who helped - I learned a lot :)
rick jones
Honored Contributor

Re: rsync latency testing

5ms isn't all that "wide" a WAN - what sort of geographical distance are you talking about?
there is no rest for the wicked yet the virtuous have no pillows