11-12-2009 01:51 PM
rsync latency testing
I know about the --bwlimit parameter, which limits bandwidth, but it does not add latency between packets.
I tried to use tc thusly:
/sbin/tc qdisc add dev eth2 root netem delay 200ms
I know the command worked, because /sbin/tc qdisc shows:
qdisc netem 8007: dev eth2 limit 1000 delay 200.0ms
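For reference, the netem documentation shows how to chain a tbf rate limiter under netem, which would cap bandwidth as well as add delay and so come closer to a real WAN link - the 256kbit rate below is just an example value:
/sbin/tc qdisc del dev eth2 root                # clear the existing netem qdisc first
/sbin/tc qdisc add dev eth2 root handle 1:0 netem delay 200ms
/sbin/tc qdisc add dev eth2 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000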
I know I am using eth2 because this is the only external interface.
However, rsync still runs at the same rate with the same timings.
I read (http://www.docum.org/docum.org/faq/cache/49.html) that, when using rsync over ssh, the packets are marked with TOS minimize-delay.
So I set up a netcat proxy by putting
ProxyCommand nc %h %p
into ~/.ssh/config, because I read (http://savannah.gnu.org/maintenance/SshAccess) that OpenSSH sets the TOS (type of service) field after the user has submitted the password, and that nc doesn't set the TOS field.
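For completeness, a sketch of what the ~/.ssh/config entry might look like - "remoteserver" is only a placeholder for the target host:
Host remoteserver
    ProxyCommand nc %h %p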
I still get the same timings from rsync.
Would someone please tell me how to induce latency into an rsync stream to simulate a WAN situation?
Thanks,
Rich
11-12-2009 02:07 PM
Re: rsync latency testing
http://wanem.sourceforge.net/
Have you tried another transfer program, such as ftp, to see whether the delay is applied?
You could also try:
iptables -t mangle -A POSTROUTING -p tcp -m tcp --sport 22 -j TOS --set-tos Normal-Service
iptables -t mangle -A POSTROUTING -p tcp -m tcp --dport 22 -j TOS --set-tos Normal-Service
Use a network sniffer like tcpdump or wireshark to verify the results.
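For example, a quick sketch (assuming eth2 is the interface the ssh traffic leaves on) - with -v, tcpdump prints the IP header's tos value for each packet:
tcpdump -i eth2 -n -v 'tcp port 22'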
11-12-2009 03:01 PM
Re: rsync latency testing
Thanks for the pointers.
I am in no way a network engineer - I'm a DBA trying to simulate recovery over a high-latency network without actually having such a network. However, I like to learn new things :)
An interactive FTP session doesn't seem to observe the delay introduced by tc as set above (200ms). However, a 21283840-byte file transfer (not bandwidth-limited) takes 9.1 seconds.
Using a delay of 2000ms, I can see the delay both in my terminal session and in the FTP session. Also, the file transfer with FTP at 2000ms takes 90 seconds. This is consistent.
So, it appears that the tc incantation is working - just not for rsync (or maybe ssh).
The iptables mangle table incantation didn't work for the rsync operation.
Testing with scp doesn't seem to abide by the tc rules or the iptables chain either... I see no difference with or without the tc rules, with or without the iptables rules.
Looking at WANem.
I can't use wireshark or any other type of sniffer here - our (single) network engineer is out for the week :(
Any other suggestions are welcome!
11-12-2009 05:32 PM
Re: rsync latency testing
To verify you have added the latency, you should run something that explicitly tests latency, such as ping or a netperf TCP_RR test (alas, netperf.org is down at the moment, but depending on which Linux you are running you may be able to get a precompiled package).
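For example (assuming "remotehost" is a machine reached via eth2), something along these lines should show the added delay directly:
ping -c 10 remotehost            # round-trip times should grow by roughly the netem delay
netperf -H remotehost -t TCP_RR  # the transaction rate should drop to roughly 1/RTT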
One of the many limits to TCP performance is:
Throughput <= Weff/RTT
Where Weff (the "effective window") will be the minimum of:
A) the receiving TCP's advertised window (SO_RCVBUF, with the Linux autotuning complication)
B) the quantity of data the sending TCP can reference while waiting for ACKs from the remote (ie SO_SNDBUF with the same complication)
C) the sending TCP's computed congestion window
D) the quantity of data the application is willing to send at one time before stopping to wait for a response from the remote.
So, if Weff is rather large, and/or your simulated bandwidth limit (i.e. Tput) is sufficiently small, 200 ms of delay may not "matter" for something doing a bulk transfer.
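For illustration, with made-up numbers: if Weff were 1 MB and the RTT 200 ms, then Throughput <= 1 MB / 0.2 s = 5 MB/s, or roughly 40 Mbit/s - so a bulk transfer already running slower than that would show little or no change from the added delay.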
FWIW, the TOS field in the IP header will not (it had better not) be examined by the netem module. TOS settings are only examined at routers, if at all, and as such were a red herring in your situation.
"I know I am using eth2 because that is the only external interface." - can you expand on that a bit?
11-18-2009 11:01 AM
Re: rsync latency testing
Unfortunately (or fortunately as the case may be), I have stopped working on this as our network engineer is now back and we also have the WAN link.
BTW, the latency on the WAN link (from ping) is:
Success rate is 99 percent (4998/5000), round-trip min/avg/max = 1/3/42 ms
Much better than I expected - after the VPN build, we see a 3-5ms average (over 1200 pings).
I have seen some fairly high-latency links in the past and was concerned about latency being an issue for recovery. That concern turned out to be a non-issue in this case.
Thanks to all who helped - I learned a lot :)