Operating System - HP-UX

Vishal Ranjan
Occasional Advisor

Need to synchronize filesystems with huge size

Hello,

I am using the IXOS third-party utility to maintain invoices (images) in my client's SAP system. We have a DR (Disaster Recovery) setup, and for that I need to synchronize my /ixos filesystem (more than 50 GB in size) between two servers, say x182 and x050.

Initially I thought to tar the directory, rcp it, and then untar it at the destination, but this method gives me problems with storing the tar file and with the time it takes to rcp such a huge file.

I tried rdist, but that also takes a lot of time (around 5-6 hours).
Can anybody suggest another solution to this problem?

Regards,
Vishal
7 REPLIES
piyush mathiya
Trusted Contributor

Re: Need to synchronize filesystems with huge size

Vishal,
Perhaps you can use NFS to synchronize. It works over the network and may reduce the time.
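
A minimal sketch of the NFS route on HP-UX (the export options, mount point, and hostnames here are illustrative assumptions, not tested on your setup):

```shell
# On x182 (source): export /ixos read-only to the DR server
echo "/ixos -ro,access=x050" >> /etc/exports
exportfs -a

# On x050 (DR): mount the export and copy the tree locally
mkdir -p /mnt/ixos
mount -F nfs x182:/ixos /mnt/ixos
cp -pR /mnt/ixos/. /ixos
umount /mnt/ixos
```

Note that this still copies every byte over the network on each run, so it mainly avoids the intermediate tar file rather than saving transfer time.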

Regards,
Piyush Mathiya
Rasheed Tamton
Honored Contributor

Re: Need to synchronize filesystems with huge size

Hi Vishal,

Did you try rsync?

Regards,
Rasheed Tamton.
Rasheed Tamton
Honored Contributor

Re: Need to synchronize filesystems with huge size

If you do not have it, you can download rsync from here:

http://hpux.cs.utah.edu/hppd/hpux/Networking/Admin/rsync-2.6.9/

Regards.
Vishal Ranjan
Occasional Advisor

Re: Need to synchronize filesystems with huge size

Hi Rasheed,

How different is rsync from rdist?

- Vishal
Rasheed Tamton
Honored Contributor

Re: Need to synchronize filesystems with huge size

Hi Vishal,

See the product descriptions of both below. If you want to sync, rsync is the right one.

Rdist
A remote file distribution system. Rdist maintains copies of files on multiple hosts. Files updated at one host are distributed to a specified set of hosts. Shell commands can be specified to run on the remote hosts to perform file installation tasks. A typical use for rdist would be that of maintaining identical copies of selected system files across a network of workstations.
---
Rsync
A replacement for rcp that has many more features. Rsync uses an algorithm which provides a very fast method for bringing remote files into sync. It does this by sending just the differences in the files, optionally with compression, across the link, without requiring that both sets of files are present at one of the ends of the link beforehand.
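
For the problem in this thread, a typical invocation might look like the following (run on x182; the flags and trailing-slash paths are an illustrative sketch, assuming password-free ssh between the servers is already set up):

```shell
# Push /ixos to the DR server, sending only the changed portions of files.
# -a        archive mode: recurse and preserve permissions, times, links
# -z        compress data in transit
# --delete  remove files on x050 that no longer exist on x182
rsync -az --delete -e ssh /ixos/ x050:/ixos/
```

After the first full copy, subsequent runs transfer only the differences, which is why rsync should do much better than a full tar/rcp cycle for repeated DR syncs.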

Regards,
Rasheed Tamton.
Yogeeraj_1
Honored Contributor

Re: Need to synchronize filesystems with huge size

Hi Vishal,

As mentioned by previous contributors, RSYNC would be the tool of choice.

>Initially I thought to tar the directory then rcp it & then un'tar it at destination but this method is giving me problems related to storage of the tar file & the time it will take it to rcp the huge tar file.


You should have compressed the tar file for better performance, e.g.:

gzip -S .Z mydir.TAR

(here -S .Z sets the suffix of the compressed output file, and the file name is just an example). Other tools like bzip2 may give better compression as well.

hope this helps too!
kind regards
yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Steven Schweda
Honored Contributor

Re: Need to synchronize filesystems with huge size

> Initially I thought to tar the directory
> then rcp it & then un'tar it at destination
> but this method is giving me problems
> related to storage of the tar file & the
> time it will take it to rcp the huge tar
> file.

> You should have compressed the TAR file for
> better performance.
>
> e.g. gzip -S .Z .TAR
>
> other tools like bzip2 may result in better
> compression as well.

There is no need to _store_ the "tar" file or
even a compressed "tar" file. Assuming that
you can arrange for a password-free log-in
(remsh or ssh, or similar), then you should
be able to construct a pipeline to do the
job, and never store the "tar" stuff
anywhere, compressed or not.

> I tried RDIST, but that also takes lot of
> time (around 5-6 hours).

If you already have some of the data on the
destination system, then I'd expect rsync
(or, perhaps, rdist) to be useful. If the
destination system begins with none of the
data, then I'd use a "tar" pipeline to move
the whole mess at once.

In my experience, bzip2 gets better
compression than gzip, but uses more CPU.
You might need to run an experiment to see
whether your bottleneck is in the CPU(s) or
the network.
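
One quick way to run that experiment (the file name below is just a placeholder for a representative sample from /ixos):

```shell
# CPU cost of each compressor, with no disk or network in the way
time gzip  -c /ixos/somefile > /dev/null
time bzip2 -c /ixos/somefile > /dev/null

# Raw network throughput: push the same file uncompressed to the far end
time remsh dest 'cat > /dev/null' < /ixos/somefile
```

If the compression step takes longer than the raw network copy, the CPU is your bottleneck and lighter (or no) compression will win; if the network copy dominates, heavier compression pays off.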

For example (assuming that the destination
directory already exists on system "dest"),
using remsh, with no compression:

( cd /src ; tar cf - . ) | \
remsh dest ' ( cd /dst ; tar xf - ) '

With gzip compression:

( cd /src ; tar cf - . | gzip -c ) | \
remsh dest ' ( cd /dst ; gzip -dc | tar xf - ) '

For bzip2 compression, replace "gzip" above
with "bzip2". For ssh instead of remsh,
I'll let you guess.
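
(For the record, the ssh guess is the same pipeline with ssh substituted, assuming key-based login to "dest" is configured:)

```shell
( cd /src ; tar cf - . | gzip -c ) | \
ssh dest ' ( cd /dst ; gzip -dc | tar xf - ) '
```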

Note that the standard HP-UX "tar" program
may have more problems than, say, GNU "tar".
HP's "pax" may do better than HP's "tar".

This sort of "tar" pipeline has been used
almost as long as "tar" has existed. It's a
continuing source of amazement to me that
people in this forum still suggest storing
a "tar" file somewhere for a job like this.