Operating System - HP-UX

copying files to a remote server

 
Patti Johnson
Respected Contributor

copying files to a remote server

Hi,

We currently use our test server as a DR server for production. Both servers are located in the same data center (for now), and the nightly backup files are copied from the production server to the test server. The total amount of data copied is 7 GB. The production server has an NFS mount of a filesystem on test, and the transfer is done using a cp -R command.
This all works fine and takes about 12 minutes to copy the data.

We are planning on moving our test server to another location (a few hundred miles away) and will still need to copy the data every night. In tests to another server at the new location, I can transfer only about 1 GB an hour.

Does anyone have any suggestions on how to improve the copy rate?
Would tar or cpio work better than cp?
Are there any options that could help?

The connection is over a wide area network.

I've tried submitting multiple copy jobs at once from the source system, but the rate actually appears to be slower.

Any thoughts/suggestions would be appreciated.

Thanks,
Patti
5 REPLIES
Sundar_7
Honored Contributor

Re: copying files to a remote server

Patti,

cp and the other commands you have mentioned will copy the files from source to destination whether or not the source files have changed since the last copy.

Consider implementing rdist or rsync, either of which can keep two filesystems/directories in sync between the systems.
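For example, a minimal nightly sync might look something like this (the hostname "testhost" and the /backup path are placeholders, not from this thread):

rsync -a /backup/ testhost:/backup/

Because rsync only sends what has changed since the last run, repeated transfers of mostly unchanged data become much cheaper.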

But relying on the basic OS utilities is not how I would architect a DR solution.

Sundar.
Learn What to do, How to do and more importantly When to do?
James R. Ferguson
Acclaimed Contributor

Re: copying files to a remote server

Hi Patti:

You might try creating a compressed tarball of the files that you want to transfer; copy the compressed archive, then uncompress and un-tar it on the receiving side.

You could easily script the whole process so that with 'remsh' you only need to launch the script on the sending server.
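A rough sketch of that approach (the paths, the archive name, and the hostname "testhost" are placeholders, not from this thread):

cd /backup
tar cf - . | compress > /tmp/nightly.tar.Z
rcp /tmp/nightly.tar.Z testhost:/tmp/nightly.tar.Z
remsh testhost "cd /restore; zcat /tmp/nightly.tar.Z | tar xf -"

With no file operands, compress reads standard input and writes standard output, so it works as a filter in the pipeline.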

Regards!

...JRF...
Patti Johnson
Respected Contributor

Re: copying files to a remote server

Thanks for the response.
The files that I'm copying are already compressed (using compress).
Is there any difference between using a cp command and using tar with the archive created on the NFS-mounted directory?

TwoProc
Honored Contributor

Re: copying files to a remote server

If you use rsync, it can run over an ssh tunnel (with keys for security), and it can compress the data in transit (the -z flag), which should be useful in a WAN environment.

Unless ALL or MOST of the files that you are shipping each day have changed, you'll save a lot of time with rsync instead.

check it out at...

http://hpux.cs.utah.edu/hppd/hpux/Networking/Admin/rsync-2.6.4/
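For example, the nightly copy could be as simple as the following ("testhost" is a placeholder for the remote server):

rsync -az -e ssh /productiondir/ testhost:/testdirectory/

Here -a preserves permissions and timestamps, -z compresses the data in transit, and -e ssh runs the transfer through an ssh tunnel.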

Of course, you can use tar to go between the two systems, and even compress the data in the pipe stream. If you decide that you'd really like *that* answer, then:

Set up keyed ssh (rsa or dsa) on both servers.
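Roughly, that setup looks like this ("testhost" stands for the destination server):

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub | ssh testhost "cat >> ~/.ssh/authorized_keys"

Use an empty passphrase if the nightly job has to run unattended.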

Go to the source directory (assuming /productiondir).

cd /productiondir

tar cvf - . | compress | ssh testhost "cd /testdirectory; uncompress | tar xvf -"

If you've got gzip, you can substitute it and vary the level of compression to fine-tune for maximum throughput, though on a WAN, I'd guess that maximum compression might give the maximum throughput.

You'll probably see better performance with gzip than with the old standard compress program.
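For instance, the same pipeline with gzip at its highest compression level (again with the placeholder hostname):

tar cvf - . | gzip -9 | ssh testhost "cd /testdirectory; gunzip | tar xvf -"

The level runs from -1 (fastest) to -9 (smallest); on a slow WAN link the smallest stream is likely to win.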

We are the people our parents warned us about --Jimmy Buffett
TwoProc
Honored Contributor

Re: copying files to a remote server

Oh, I just read that the files you're sending are already compressed.

So, once ssh is set up, it's just (testhost again standing in for the remote server):

cd /productiondir

tar cvf - . | ssh testhost "cd /testdirectory; tar xvf -"

We are the people our parents warned us about --Jimmy Buffett