
SOLVED
Madhu Sudhan_1
Respected Contributor

scripting help

Hello Friends,
I have a scripting requirement: I have to clone Oracle Applications from one machine to another. We use rcp to copy the files between machines. If rcp hangs or exits because of network problems before copying all the files, we have to re-initiate rcp, and in that case our downtime window will not be sufficient. So what I am looking for is some kind of remote copy program that allows me to resume an aborted copy from the point of the break. Any ideas greatly appreciated.

Thanks,
Alle
Think Positive
9 REPLIES
Deepak Extross
Honored Contributor

Re: scripting help

Why not use FTP instead? Create a tarball of all the files you need to send across, then initiate the FTP session from the DESTINATION machine (use "get"). If the FTP terminates prematurely, you can use FTP's "reget", which resumes the transfer from where it broke off (only the remaining part of the file is retrieved).
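A sketch of the destination-side session, driving ftp from a command file so the retry after a dropped connection is mechanical. Host and archive names here are placeholders, not from the original post:

```shell
# On the source machine, bundle everything first (path is a placeholder):
#   tar -cf /tmp/apps.tar /apphome

# On the DESTINATION machine, keep the ftp commands in a file:
cat > /tmp/ftp_resume.cmds <<'EOF'
binary
get apps.tar
EOF

# If the first attempt dies part-way, switch "get" to "reget" and re-run;
# reget picks up from the end of the partial local file.
sed 's/^get /reget /' /tmp/ftp_resume.cmds > /tmp/ftp_retry.cmds

# ftp sourcehost < /tmp/ftp_resume.cmds   # first attempt
# ftp sourcehost < /tmp/ftp_retry.cmds    # resume after a break
```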
Robin Wakefield
Honored Contributor
Solution

Re: scripting help

Hi,

Have a look at rsync, as it cuts down on the amount of data to transfer. I don't believe it has a restart option, but because you're transferring less, it may help in your situation.

Rgds, Robin
Steve Steel
Honored Contributor

Re: scripting help

Hi


I think ftp is a good option, but I would use shar instead of tar.

Compress the output file

ftp it

uncompress it

sh it

See man shar

Remember you need to do the ftp on the destination machine.


reget remote-file [local-file]
reget acts like get, except that if local-file exists and is
smaller than remote-file, local-file is presumed to be a
partially transferred copy of remote-file and the transfer is
continued from the apparent point of failure. This command is
useful when transferring very large files over networks that tend
to drop connections.

So before starting a fresh transfer (as opposed to resuming one), remove the destination file or, better, truncate it to zero length, so that reget does not mistake a stale copy for a partial one.


Steve Steel
If you want truly to understand something, try to change it. (Kurt Lewin)
Paula J Frazer-Campbell
Honored Contributor

Re: scripting help

Hi

If you have, say, 10 GB of data to move, break it down into twenty 0.5 GB files. If the rcp/ftp fails, it is then a simple matter on the recipient server to check how many files arrived and recopy only the ones that were missed.

It would be very simple to write a script that checks the status of this and automatically gets or puts the missing files.

I would run the check script from cron and also have it report failures of the copy.
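The chunking step can be sketched with split(1), demonstrated here on a small file; the sizes are scaled down for illustration (the real case would use something like -b 500m, and the chunks would be copied individually):

```shell
# Scaled-down stand-in for the 10 GB archive
rm -rf /tmp/split_demo
mkdir -p /tmp/split_demo
cd /tmp/split_demo
dd if=/dev/zero of=big.tar bs=1000 count=100 2>/dev/null   # 100 kB dummy

# Break into fixed-size chunks: chunk_aa, chunk_ab, ...
split -b 30k big.tar chunk_

# After copying, the receiver checks which chunk_* files arrived,
# re-fetches any missing ones, then reassembles and verifies:
cat chunk_* > rebuilt.tar
cmp big.tar rebuilt.tar
```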


HTH

Paula
If you can spell SysAdmin then you is one - anon
Carlos Fernandez Riera
Honored Contributor

Re: scripting help

Madhu Sudhan_1
Respected Contributor

Re: scripting help

Thanks for the replies. The Oracle Applications home contains several thousand files, with a total size of approximately 30 GB. I do not want to create a tarball, compress it, and do an FTP, as this involves time for tarball creation + compression + ftp + uncompress + tar extract.
The time can be even longer if the tar extract fails with an unexpected EOF (I have had a very bad experience with that error).

I am planning to write a script, but the script depends on which remote copy program I use. I will have total control if I generate a listing with
# find /apphome -print > list.out
copy list.out to the remote machine, initiate the copy, and do a checksum before and after the copy.
But I am concerned about the performance implications of initiating rcp individually for each file in list.out.
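A minimal sketch of that per-file approach, using cp in place of rcp so it can be shown locally; all paths and the failed-file list are illustrative stand-ins, not from the original post:

```shell
# Stand-ins for the source tree and the remote target
rm -rf /tmp/apphome /tmp/apphome_copy
mkdir -p /tmp/apphome /tmp/apphome_copy
echo "app config" > /tmp/apphome/config.ora

# Build the file list on the source side
find /tmp/apphome -type f -print > /tmp/list.out

# Copy each file, verify with cksum, and record failures so that
# only the files in failed.out need to be re-sent on a retry.
: > /tmp/failed.out
while read -r f; do
    dest="/tmp/apphome_copy/${f##*/}"
    cp "$f" "$dest"                        # real script: rcp "$f" "host:$dest"
    src_sum=$(cksum < "$f" | awk '{print $1}')
    dst_sum=$(cksum < "$dest" | awk '{print $1}')
    [ "$src_sum" = "$dst_sum" ] || echo "$f" >> /tmp/failed.out
done < /tmp/list.out
```

On the performance worry: each rcp invocation does pay connection start-up cost, so one option is to batch the list into groups of files per rcp call to amortize it.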

Please advise.

Thanks,
Allen
Think Positive
Bill Hassell
Honored Contributor

Re: scripting help

ftp is quite a bit faster than rcp, but it does not handle subdirectories, so your script would have to manage that. A tarball bigger than 2 GB is not possible due to industry standards. Since this data is probably very important, I would include in your script a check of the validity of the copy (file name, size, permissions, ownership). NOTE: the copied file may be the same size as the original, or it may be larger if the original is a sparse file.


Bill Hassell, sysadmin
harry d brown jr
Honored Contributor

Re: scripting help

Why not do an incremental backup/copy, and only copy the things that have changed since the last copy?

You are just copying oracle applications and not database files, right?
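One way to sketch an incremental pass is a find -newer loop against a timestamp file, with cp standing in for the remote copy; the paths and the marker file are illustrative assumptions:

```shell
rm -rf /tmp/incr_src /tmp/incr_dst
mkdir -p /tmp/incr_src /tmp/incr_dst
echo "old" > /tmp/incr_src/old.txt
touch /tmp/stamp                    # marker: when the last copy ran
sleep 1
echo "new" > /tmp/incr_src/new.txt  # changed after the last copy

# Copy only files modified since the marker, then advance the marker.
find /tmp/incr_src -type f -newer /tmp/stamp -print |
while read -r f; do
    cp "$f" /tmp/incr_dst/          # real script: rcp to the clone host
done
touch /tmp/stamp
```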

live free or die
harry
Live Free or Die
Shannon Petry
Honored Contributor

Re: scripting help

Another way to handle this is with an NFS mount and cpio. I use this because it handles devices, permissions, directory structures, etc. While I don't normally do 30 GB of data, I'd be pretty sure it will still work. E.g.:

mkdir /ora_tmp
mount -o ro srv1:/ora_base_dir /ora_tmp
cd /ora_tmp
find . -depth -print | cpio -padlmuxv /ora_base_dir

If you don't want to see the output for each copied file, remove the "v" from the cpio options.

Regards,
Shannon
Microsoft. When do you want a virus today?