Operating System - HP-UX

using find and tar to archive the last 12 hours

 
SOLVED
Todd McDaniel_1
Honored Contributor

using find and tar to archive the last 12 hours

Hi,

I haven't used tar in a while and I am a little rusty.

I need to know how to use find and tar to archive any files that have changed in the last 12 hours. I think it would go something like this:

find * -mtime +1 | tar -cvf /dev/rmt/0mn

Any help is greatly appreciated.
Unix, the other white meat.
14 REPLIES
Pete Randall
Outstanding Contributor

Re: using find and tar to archive the last 12 hours

Todd,

-mtime works in whole days, so that would give you 24 hours at best. If you really want 12, try touching a file to give it the appropriate reference time and using the -newer option on your find.
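To build that reference time for exactly 12 hours ago, here's a rough sketch (assuming perl is installed, as it usually is on HP-UX 11; /tmp/ref is just an example path):

# stamp a reference file 12 hours in the past; HP-UX date(1) can't do
# arithmetic, so perl builds the MMDDhhmm string that touch -t expects
STAMP=$(perl -e '@t = localtime(time - 12*3600);
  printf "%02d%02d%02d%02d", $t[4]+1, $t[3], $t[2], $t[1];')
touch -t $STAMP /tmp/ref
find /appl/finapps -type f -newer /tmp/ref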


Pete

Robin Wakefield
Honored Contributor

Re: using find and tar to archive the last 12 hours

Hi,

Try the following:

touch -t 0303120300 /tmp/ref     # reference file stamped 12 Mar, 03:00
find . -newer /tmp/ref -type f | xargs tar cvf /dev/rmt/0m
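One caution on that pipeline: if the file list is long, xargs will invoke tar more than once, and each new write to the rewinding device /dev/rmt/0m starts at the beginning of the tape and clobbers the previous archive. A sketch using the no-rewind device instead, so successive archives land on the tape back to back:

find . -newer /tmp/ref -type f | xargs tar cvf /dev/rmt/0mn

Just remember that restoring then means reading each archive on the tape in turn.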

rgds, Robin
Todd McDaniel_1
Honored Contributor

Re: using find and tar to archive the last 12 hours

Thanks. I am trying to back up several filesystems since my last incremental, because we are doing some EMC work and there are too many files to touch them all.

I can live with 24 hours.

My other question is: can I create a tarball that spans filesystems, or do I need to do one for each separate filesystem?

Here is what I have. Don't be taken aback by the sizes; these are fairly static application filesystems.

# bdf | grep appl
/dev/vgappl/finapps   26419200  9320428 16834632  36% /appl/finapps
/dev/vgappl/stage     35225600 16884648 18197672  48% /appl/finapps/stage
/dev/vgappl/common    26419200  4792480 21288824  18% /appl/finapps/common
/dev/vgappl/archive    8806400     3230  8528078   0% /appl/finapps/archive
/dev/vgappl/appsdb    35225600 26866752  8293576  76% /appl/finapps/appsdb
/dev/vgappl/appsclnt  88064000 35085464 52564680  40% /appl/finapps/appsclnt
Unix, the other white meat.
John Dvorchak
Honored Contributor
Solution

Re: using find and tar to archive the last 12 hours

I would try a find -newer as suggested. The syntax would be something like this:

tar -cvf test.tar `find . -type f -newer tfile`

the "tfile" is the file that was touched with the reference time.

good luck
If it has wheels or a skirt, you can't afford it.
John Dvorchak
Honored Contributor

Re: using find and tar to archive the last 12 hours

Ok, my first response was a little late. These guys are fast! From the man page:
Use tar in a pipeline to copy the entire file system hierarchy under
fromdir to todir:

cd fromdir ; tar cf - . | ( cd todir ; tar xf - )

A variation on the above that finds the changed files first should work.
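Something like this (a sketch along the man-page lines; fromdir and todir are the man page's placeholders, and /tmp/ref is the touched reference file):

cd fromdir
# backticks carry the usual argument-length caveat on huge file lists
tar cf - `find . -type f -newer /tmp/ref` | ( cd todir ; tar xf - )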
If it has wheels or a skirt, you can't afford it.
Darren Prior
Honored Contributor

Re: using find and tar to archive the last 12 hours

Hi,

I don't see an issue with having a tarball that spans all those filesystems. As you've already said, they're fairly static, so there won't be much changed in the last 12 hours. If you are concerned, though, you can use -xdev with find to stop it crossing filesystem boundaries: just repeat the find for each filesystem with the appropriate path.
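For example (a sketch using the mounts from Todd's bdf output; /tmp/ref is the touched reference file):

for fs in /appl/finapps /appl/finapps/stage /appl/finapps/common \
          /appl/finapps/archive /appl/finapps/appsdb /appl/finapps/appsclnt
do
    # -xdev keeps each find from descending into the other mounts
    find $fs -xdev -type f -newer /tmp/ref
done > /tmp/filelist

Then feed /tmp/filelist to tar, e.g. tar cvf /dev/rmt/0mn `cat /tmp/filelist` (same argument-length caveat as any backtick expansion).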

btw - I trust you are aware of tar's 2GB per-file limit - see the tar man page for details.

regards,

Darren.
Calm down. It's only ones and zeros...
Todd McDaniel_1
Honored Contributor

Re: using find and tar to archive the last 12 hours

Thanks, all, for your replies. I believe I will try John D.'s tfile approach to compare and back up according to that value.

I think I will do one tarball if I can get it all within the 12GB of a DDS3. I can't believe the last 12 hours' changes would amount to more than that. My backup usually completes around 2am.


Darren,

The 2GB restriction is on individual FILES, not on the tarball, right? hehe, it has been too long since I have used tar.

What will happen if tar runs across a file that is larger than 2GB? Will it skip that file, or will the whole tar fail?
Unix, the other white meat.
Darren Prior
Honored Contributor

Re: using find and tar to archive the last 12 hours

Todd,

The restriction is on the files, not the tarball. I'm not 100% sure what happens when a greater-than-2GB file goes into a tarball; to be safe, I'd suggest you use the -size option of find to discover whether you actually have any large files in that 12-hour window, then use -size in the actual find/tar command to make sure you don't back them up. Use another method for those files.
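In HP-UX find, -size n counts 512-byte blocks unless you append c for bytes, so 2GB works out to 4194304 blocks. A sketch (the block arithmetic is mine, so double-check it; the threshold is deliberately conservative):

# list any roughly-2GB-or-larger files changed since the reference time
find . -type f -newer /tmp/ref -size +4194303
# archive everything else
find . -type f -newer /tmp/ref ! -size +4194303 | xargs tar cvf /dev/rmt/0mn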

regards,

Darren.
Calm down. It's only ones and zeros...
A. Clay Stephenson
Acclaimed Contributor

Re: using find and tar to archive the last 12 hours

The "out of the box" tar won't handle a file larger than 2GB. Depending upon your OS there are patches available to allow tar to handle larger files but your best answer (if your are going to use tar) is to downloan and install the Gnu version of tar available from any of the HP-UX Porting Centre's. If this is all HP-UX, I would suggest that you take a look at fbackup/frestore. It will handle the incrementals for you, natively do the large files, and is much faster than tar or cpio.
If it ain't broke, I can fix that.
A. Clay Stephenson
Acclaimed Contributor

Re: using find and tar to archive the last 12 hours

Oops, that's frecover, not frestore.
If it ain't broke, I can fix that.
Todd McDaniel_1
Honored Contributor

Re: using find and tar to archive the last 12 hours

There are only 4 files larger than 2GB, and they are old compressed files, so they will not be a factor.

Thanks everyone for your input.
Unix, the other white meat.
Todd McDaniel_1
Honored Contributor

Re: using find and tar to archive the last 12 hours

Clay,

I am using NetBackup for my regular backups. This is only a one-time backup for some activity that occurs after the incremental runs the night before.

We are doing this extra backup because we have some massive EMC work coming on the attached frame, and we are just making sure we have CYA covered.
Unix, the other white meat.
Chris Vail
Honored Contributor

Re: using find and tar to archive the last 12 hours

Here's a quick script that we're using:

#!/usr/bin/ksh
SDIR=/d166/oradata      # source filesystem
DDIR=/d166b/oradata     # destination filesystem

# clear out any old nohup output in the source tree
if test -f $SDIR/nohup.out
then
    rm $SDIR/nohup.out
fi

# only copy if the destination directory actually exists
if test -d $DDIR
then
    cd $SDIR
    /usr/local/bin/tar cvf - . | (cd $DDIR; /usr/local/bin/tar xvfp -)
fi

The /usr/local/bin version of tar is GNU tar, from the HP porting people. We have to use it because our .dbf files are HUGE. We have a different script for each mount point; this allows us to run all 8 at the same time. Of course, this kills the CPUs, but who cares? The database is down at this point.

Of course, this doesn't use find to locate the files; it just copies off the whole filesystem. To include find, just below the cd $SDIR line, write:

for FILE in `find . -mtime -1`
do
    tar cvf - $FILE | (cd $DDIR; /usr/local/bin/tar xvfp -)
done

This will move the whole directory tree rather than just creating a tarball.
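One caveat on that loop (my observation, not Chris's): it spins up a new tar pipeline for every file, which gets slow on big trees. A single-pass sketch under the same assumptions:

cd $SDIR
# -type f so a touched directory doesn't drag its whole contents along;
# backticks carry the usual argument-length caveat on huge file lists
tar cvf - `find . -mtime -1 -type f` | (cd $DDIR; /usr/local/bin/tar xvfp -)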

Chris
Todd McDaniel_1
Honored Contributor

Re: using find and tar to archive the last 12 hours

Just to follow up: using a reference file with the -newer option worked great!

I backed up only those files that were modified since the previous night's backup. I used the touch command to create the reference file for find to compare the files against.
Unix, the other white meat.