
Re: rcp command

 
SOLVED
John Jimenez
Super Advisor

rcp command

I am upgrading a K380 (10.20) to an rp7410 (11i); the K380 has the slower 100Base-T card. I need to copy 25 GB from server1:/data to server2:/data. This directory has lots of subdirectories and files. I have never used the rcp command. Do you think rcp is the way to go, or is there a better way to get the data across the network?
Hustle Makes things happen
20 replies
Paul Sperry
Honored Contributor

Re: rcp command

on server1

cd /data
find . -print | cpio -ocxa | remsh server2 "cd /data; cpio -icxvdmu"
A. Clay Stephenson
Acclaimed Contributor

Re: rcp command

A major drawback of the rcp command (even with -p) is that file ownership metadata is not preserved. If this were me, I would choose tar, fbackup, or cpio writing to stdout (a pipe) into a remsh that untars, uncpios, or frecovers from stdin on the remote server. If any of your files exceed 2 GB, then fbackup/frecover is your only option. I am making the assumption that this is HP-UX to HP-UX, as fbackup is an HP-only command.

If it ain't broke, I can fix that.
Sridhar Bhaskarla
Honored Contributor

Re: rcp command

Hi,

I would suggest you install ssh on both boxes. Set up public/private key authentication and then use "scp". It is secure, and it can also be fast since ssh can compress the stream.

You can get Openssh from

http://hpux.cs.utah.edu/hppd/hpux/Networking/Admin/openssh-3.6.1p1/

Use the following documentation to set up passwordless scp.

http://bumblebee.lcs.mit.edu/ssh2/

Once it is setup, you can use

scp -rp /data your_account@server2:/data

if your_account has write access to the /data filesystem.

-Sri



You may be disappointed if you fail, but you are doomed if you don't try
Pete Randall
Outstanding Contributor

Re: rcp command

Absolutely the way to go is with a backup approach. Any sort of rcp, find-piped-to-cpio, etc. approach is fraught with problems. I always used to stick with find piped to cpio over NFS until I discovered that I wasn't getting everything I thought I was. I wasted half a Saturday having to redo things with backup/restore.


Pete
John Jimenez
Super Advisor

Re: rcp command

Wow, that sounds pretty scary. The new system only came with Ultrium cartridges. The K380 has a DLT, an internal DDS3, and an external DDS3 drive. I am planning on moving the external DDS3 over to the rp7410, but I cannot bring down the K380 until next Thursday. We will not do the real conversion for about a month, so backup is the only way to go during that time. But in an effort to begin testing this week, I must still decide on which network method is best.
Hustle Makes things happen
Bill Hassell
Honored Contributor
Solution

Re: rcp command

If the data is important, then create a script that will inventory (i.e., find ...) the source and the destination so you know that all the files and directories arrived OK. A shorthand method is to count the number of files (find /dir -type f | wc -l) and directories (find /dir -type d | wc -l). For true accuracy, use sum or cksum on the files.
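Bill's counting-and-checksum idea can be sketched as a small script. The function name and temp-file paths below are my own illustration, not something posted in the thread:

```shell
# verify_copy SRC DST -- compare two trees the way Bill suggests:
# count files and directories on each side, then cksum every file
# for true accuracy. A sketch, not an HP-supplied tool.
verify_copy() {
    src=$1
    dst=$2
    echo "files: src=$(find "$src" -type f | wc -l) dst=$(find "$dst" -type f | wc -l)"
    echo "dirs:  src=$(find "$src" -type d | wc -l) dst=$(find "$dst" -type d | wc -l)"
    # Checksum relative paths so the two listings are directly comparable
    ( cd "$src" && find . -type f -exec cksum {} \; | sort ) > /tmp/src.sum
    ( cd "$dst" && find . -type f -exec cksum {} \; | sort ) > /tmp/dst.sum
    if diff /tmp/src.sum /tmp/dst.sum > /dev/null; then
        echo "trees match"
    else
        echo "trees DIFFER"
    fi
}
```

Run it once after the copy finishes; on a mismatch, dropping the `> /dev/null` shows exactly which relative paths differ.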

NOTE: du and bdf will almost never match the original. While the resultant files and directories will be just fine, the occupied space may be larger or smaller than the original. Here are the scenarios:

Smaller: one or several directories grew with hundreds or thousands of files which were subsequently deleted. The directory (it is also a file) grew in size to hold the entries but is never shrunk when the files are removed. Copying a directory with lots of empty slots will produce a new directory without the empty slots, and thus the destination takes less space. But both directories behave the same.

Larger: by far the most common scenario, especially with database systems. A sparse file is a file where only a few records are occupied and many records are undefined. When an undefined record is encountered during a serial read, the result is a string of zeros. So when a sparse file is copied, all the zeros will be written, and the copied file may be many times larger than the original. Yet both files behave exactly the same way with applications.
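The "larger" case is easy to demonstrate on any filesystem that supports sparse files. The dd incantation and file path below are my illustration, not from the thread:

```shell
# Create a ~10 MB sparse file by seeking past 10 MB and writing one byte;
# no intervening blocks are allocated.
dd if=/dev/zero of=/tmp/sparse.dat bs=1 count=1 seek=10485759 2>/dev/null

ls -l /tmp/sparse.dat   # apparent size: 10485760 bytes
du -k /tmp/sparse.dat   # blocks actually allocated: typically far fewer
```

Copying /tmp/sparse.dat through a pipe with cp, tar, or cpio writes out all the zeros, so the destination can occupy the full 10 MB even though the source occupies almost none.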


Bill Hassell, sysadmin
John Jimenez
Super Advisor

Re: rcp command

I also have a couple of files greater than 2 GB...
Hustle Makes things happen
John Jimenez
Super Advisor

Re: rcp command

The files are very important, but I have time to test. The inventory script sounds like a good way to verify that everything was copied over, so when we start debugging we will at least know that any problems are not because of the copy. But now I must still decide on which method to use to copy over the network. One thing is certain: everyone is in agreement that rcp is not the way to go.
Hustle Makes things happen
Bill Douglass
Esteemed Contributor

Re: rcp command

Install GNU tar, which supports large files. I use OpenSSH here, so I can copy between systems using a pipe like this:

tar cvf - /path/to/copy | ssh newhost tar -C /newdirpath -xf -

This will tar up the old directory tree and send it to stdout, which ssh will pipe to the new host. tar on newhost will change to /newdirpath and extract the archive from stdin.

GNU tar is available from:

http://hpux.cs.utah.edu/hppd/hpux/Gnu/tar-1.13.25/


You also need:

gettext

http://hpux.cs.utah.edu/hppd/hpux/Gnu/gettext-0.12/

and libiconv

http://hpux.cs.utah.edu/hppd/hpux/Development/Libraries/libiconv-1.9/
Massimo Bianchi
Honored Contributor

Re: rcp command

Hi,
another option, though not a very fast one, could be to install software like HP Data Protector (DP).

You can download it for free with a 60-day try-and-buy option.

This software does the inventory for you and supports large files.

Otherwise, fbackup/frecover with the index option might be the lowest-cost solution.

There are other threads in this section with more information about the index option and how to use it.

HTH,
Massimo







Brian K. Arnholt
Frequent Advisor

Re: rcp command

I am always trying to find a good way to do what you are doing and have tried many different ways... here is a different approach you can consider.

1. Create an NFS mount of server1:/data on server2.

2. Use the attached script to copy server1:/data to server2:/data.

So if, on server2, you have the local /data directory and you want to copy the /data_server1 directory (which you mounted via NFS), you would simply use the attached script:

cpdir /data_server1 /data
(Note: the script does a recursive diff on the files, which I commented out, but it could also easily be modified to do a cksum listing as well.)
Some see things as they are and ask why, I dream of things that never were and ask why not?
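The attachment itself did not survive in this thread; a minimal sketch of what such a cpdir wrapper might look like (the name and layout are hypothetical reconstructions of what Brian describes, not the actual attachment) is:

```shell
# cpdir FROM TO -- copy a tree preserving modes and timestamps, then
# verify with a recursive diff. A guess at the attached script's shape,
# not the real thing.
cpdir() {
    from=$1
    to=$2
    # -R recurses; -p preserves mode, ownership and timestamps where possible
    cp -Rp "$from/." "$to/" || return 1
    # Verification step (Brian notes his version has this commented out);
    # it could be swapped for a cksum listing instead.
    diff -r "$from" "$to"
}
```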
Chris Vail
Honored Contributor

Re: rcp command

I'm with Brian: try it across an NFS mount. 25 GB isn't a lot of data, and even across a 100Base-T connection it won't take more than a few hours. I'd use tar or GNU tar to move them with.

It isn't very secure, though. If you want security, install secure shell and use scp -p to move everything. This will be fairly fast once the initial connection is made.

I've attached my usual document on how to implement secure shell.


Chris
John Jimenez
Super Advisor

Re: rcp command

Thanks, it looks like mounting it via NFS was the way to go. It has been running for one hour and looks like it is already half done. I am getting some permission errors and errors on files over 2 GB. I guess I may need to tar these files? Here is a sample of each error:

cp: bad copy to /diamond/602a_NETNEW/602_NEW_PROGS.tar: read: Permission denied
cp: cannot access data/JCCONTM0.DAT: Value too large to be stored in data type
cp: cannot open data/eligprov/0303/elg/HP0303.ELG: Permission denied
Hustle Makes things happen
A. Clay Stephenson
Acclaimed Contributor

Re: rcp command

You are not going to be able to tar those files either; tar (and cpio) has a 2GB limit. Tar can be patched to allow 8GB files, but if it were me, I would use fbackup/frecover OR download and install the GNU version of tar, which handles large files.


http://gatekeep.cs.utah.edu/hppd/hpux/Gnu/tar-1.13.25/
If it ain't broke, I can fix that.
Bill Hassell
Honored Contributor

Re: rcp command

On a clean, switched network, the NFS method should be OK for files up to 2 GB using the cp command. NFS version 3 is required to support files larger than 2 GB over NFS, and it is available on HP-UX 11 and higher.

Now, you mentioned tar and NFS at the same time. NFS simply mounts the remote filesystem, and you would use standard file commands like cp to transfer the data. rcp (or tar or cpio with remsh or ssh, etc.), as mentioned before, do not use NFS at all but simply create a data pipe between the two systems. For the best performance using a network pipe (and to avoid the 2 GB limits), use fbackup piped into remsh plus frecover on the remote side. The diff and cksum commands could then be used through the NFS connection to compare the resulting copies.


Bill Hassell, sysadmin
John Jimenez
Super Advisor

Re: rcp command

So in other words, I can fbackup from one system to the other system? Because right now I do not have the same media on the two systems to back up on tape.

I guess another problem with the cp command is a quantity limitation? I also got this error on hundreds of files:

cp: cannot create /diamond/utility/FXELGM20.PGM: Too many open files
cp: cannot create /diamond/utility/runHPR60.bat: Too many open files
cp: cannot create /diamond/utility/dcrscript.ksh: Too many open files
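One likely cause (an assumption on my part, not confirmed anywhere in the thread) is that cp ran out of per-process file descriptors. The quick check is portable:

```shell
# Show the per-process open-file limit. cp reporting "Too many open
# files" (the EMFILE errno) means the process had this many
# descriptors in use at once.
ulimit -n
# On HP-UX the system-wide ceiling comes from the maxfiles and
# maxfiles_lim kernel parameters, viewable with kmtune and tunable
# through SAM.
```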
Hustle Makes things happen
A. Clay Stephenson
Acclaimed Contributor

Re: rcp command

As I indicated in my first posting, you can use backup/restore programs without using tape media. The idea is that on one end of a pipe you have a backup program (e.g. fbackup) running that connects its standard output to a restore program (e.g. frecover) running on another system that reads standard input.

The "glue" that connects the systems is the remsh command.

NFS (because of the overhead) was probably the slowest possible answer, in addition to having possible 2GB limitations depending upon the OS and patch levels.
If it ain't broke, I can fix that.
John Jimenez
Super Advisor

Re: rcp command

I am now having problems with the frecover part of this command. Any ideas on what is wrong with that portion of the command?

/etc/fbackup -0v -i /tmp/test -f - | compress | remsh 10.0.0.1 -l root "cd /tmp/lund ; uncompress | cat - | \ /etc/frecover -Xrf -"
Hustle Makes things happen
John Jimenez
Super Advisor

Re: rcp command

I had a couple of slashes that did not need to be in there. Now it seems to be working.
Hustle Makes things happen