
Re: bad command

 
SOLVED
matthew mills
Frequent Advisor

bad command

I have two HP-UX servers and I am trying to PULL a directory from one server to another. I need the permissions and user:group to stay the same (the files in the dir are owned by different users). Both systems have user:groups in sync.

I am using this command:
remsh ngbva 'cd /oradata;tar cf - /oradata' | tar xf -

It works great, but I am getting this error because the file is too darn big!
tar: Size of /oradata/VA/comp_1.dbf > 2GB. Not dumped.

Is there a better command to use? or how do I fix this?

Thanks in advance

7 REPLIES
Thierry Poels_1
Honored Contributor

Re: bad command

Hi,

you can use gtar (freeware) or you can search ITRC for the required patches so tar will handle files > 2GB.
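A sketch of the same pull with gtar, assuming GNU tar is installed as gtar on both hosts (e.g. from the depot linked in the next reply). Archiving with a relative path (`.`) instead of `/oradata` avoids absolute paths in the archive, so it extracts cleanly under the destination directory:

```shell
# gtar handles files > 2GB; -p on extract preserves permissions,
# and running as root preserves user:group ownership as well.
remsh ngbva 'cd /oradata && gtar cf - .' | (cd /oradata && gtar xpf -)
```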


good luck,
Thierry.
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.
A. Clay Stephenson
Acclaimed Contributor

Re: bad command

There is a patch for tar that will allow files up to 8GB; you can also download and install the GNU version of tar, which will handle files up to 8GB. 2GB is the native limit for tar. Anything beyond that breaks compatibility of tar across platforms. GNU is a good choice because it is available for a wide spectrum of platforms.

http://hpux.connect.org.uk/hppd/hpux/Gnu/tar-1.13.25/

Plan B (and probably a little faster, and will handle files > 8GB): rather than a tar|tar pipeline, use an fbackup|frecover pipeline.
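A minimal sketch of that fbackup|frecover pipeline; the flags here are from the HP-UX man pages, so verify them on your release before relying on this:

```shell
# fbackup: -i includes a path in the backup, -f - writes to stdout.
# frecover: -r restores everything in the archive, -f - reads stdin,
#           -X restores files relative to the current directory.
# fbackup/frecover preserve permissions and user:group natively and
# have no 2GB file-size limit.
remsh ngbva 'fbackup -i /oradata -f -' | (cd / && frecover -r -f - -X)
```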
If it ain't broke, I can fix that.
matthew mills
Frequent Advisor

Re: bad command

I think I like PLAN B. Can you give me an example?

Todd McDaniel_1
Honored Contributor

Re: bad command

I do believe rcp works with >2GB files... but you will have to create the directories...
Unix, the other white meat.
MRSG
Frequent Advisor

Re: bad command

Hi,
You can use rcp -rp /dir hostname:/dirname
Hope this helps,
Cheers
Geoff Wild
Honored Contributor
Solution

Re: bad command

Are you doing this often? That is, do you want to sync them all the time?

Why not use rdist?

I do that for dr reasons for files that are non-SRDF...

I have a script that I run from cron daily:

#! /bin/sh

# Keep the DRP copy of the vgpadm up-to-date.
# Currently the files are in:
#
# /home/vgpadm
#
# See the rdist(1M) distfile for a list of exclusions.

DRPDIR=/app/admin/drp
DRPHOST=svr0032

mount | grep /home > /dev/null 2>&1
if [ $? -eq 0 ]
then
( su - vgpadm -c "rdist -f $DRPDIR/distfile vgpadm"; ) 2>&1 |\
tee $DRPDIR/drp.log 2>&1 |\
mailx -s "VGPADM DRP rdist output" me@mydomain.com
fi

The distfile contains:

VGPDR = ( pc0032 )

#
# File systems to be copied over to the DR host.
# Don't use -R in install - so as not to remove files on destination host
VGPADM = /home/vgpadm

vgpadm: ( ${VGPADM} ) -> ( ${VGPDR} )
install -w ;
except ${VGPADM}/logfiles;


Rgds...Geoff

Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Ken Stallings
Advisor

Re: bad command

Another utility that you can use is rsync. You will need to NFS mount the directory onto your server and run rsync. rsync will sync the two filesystems completely, including deleting files that have been deleted on the source filesystem. The great part is that after the origin sync, only the data that has changed will be transferred.
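A rough sketch of that NFS-mount-plus-rsync approach; the mount point `/mnt/oradata` is hypothetical, and the rsync flags assume a reasonably current rsync build:

```shell
# Mount the source directory from ngbva over NFS (HP-UX mount syntax).
mount -F nfs ngbva:/oradata /mnt/oradata

# -a (archive) preserves permissions, ownership, and timestamps;
# --delete removes destination files that no longer exist on the source.
# After the first full sync, only changed data is copied.
rsync -a --delete /mnt/oradata/ /oradata/
```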