Olivier Masse
Honored Contributor

Re: Copy 50GB data

What kind of data are you copying? If you're talking about thousands of files and only a small percentage of them actually change from one time to another, use rsync and you'll save lots of time. Just rsync once, stop your application, then rsync again to copy your delta, and restart your app on the new filesystem.
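A minimal sketch of that two-pass approach, with /data and /newfs as hypothetical stand-ins for the real source and destination mount points:

#rsync -a /data/ /newfs/
(first pass, while the application is still running)

... stop the application ...

#rsync -a --delete /data/ /newfs/
(second pass copies only the delta; --delete also removes files that have disappeared from the source)

... repoint the application at /newfs and restart ...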
hp_user_1
Regular Advisor

Re: Copy 50GB data

Thousands of files, and maybe 20% of them change frequently....
hp_user_1
Regular Advisor

Re: Copy 50GB data

Need to copy one directory structure only, and not the whole file system.
sujit kumar singh
Honored Contributor
Solution

Re: Copy 50GB data

Hello,

As this is a local copy, even cp, cpio, or tar will take a lot of time to copy or restore.

dd or vdump, I feel, cannot be used in this case, as you are not copying an entire LV or a mounted LV.

dd would surely have been faster with an optimal, big block size specified.

I think you can go with fbackup and frecover with a config file. That lets you open multiple reader processes in memory and also take advantage of fbackup's speed by specifying a big block size.

I can't speak for other backup software, but Data Protector would have been really handy in this case.

However, using fbackup with a config file you can achieve better performance in backup and restoration. You can try it like this.


You can have a look at the thread below and read the post by JRF:

http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1232392329851+28353475&threadId=1296457\


Also see this thread:

http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=815533

See the last post, where Bob Brown wrote (Feb 17, 2005):
Well...I experimented and got MUCH better results. I'm up to >40GB/hour right now.

I set blocksperrecord to 8192,
records to 64,
checkpointfreq to 4096

Those made the biggest difference.

-Bob




You can achieve this by creating a config file for fbackup like this (blocksperrecord can also be 8192; Bob got almost 40 GB/hr on an Ultrium tape with that):

#vi /tmp/fback.config

blocksperrecord 4096
records 64
checkpointfreq 4096
readerprocesses 6
maxretries 5
retrylimit 5000000
maxvoluses 200
filesperfsm 2000

Save and exit.

Then use this config file to take the backup:

#fbackup -0vf /dev/rmt/0m -c /tmp/fback.config -i /

Here -0 means a full backup.
-f /dev/rmt/0m is the device file of the tape drive.
-c /tmp/fback.config is the config file.
-i / is the directory you wish to back up.



And to restore the same:

Rewind the tape:

#mt -t /dev/rmt/0m rewind

Go to the directory you wish to recover into:

#cd /test

Then run:

#frecover -rvX -f /dev/rmt/0m -c /tmp/fback.config

or

#frecover -xvX -f /dev/rmt/0m -c /tmp/fback.config -i /

-r recovers everything from the tape.
-x is for selective recovery of files.
-v is verbose.
-X recovers into the current directory instead of the original location.

Thanks to JRF and Bob for the information.

Regards
Sujit
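
For reference, the whole tape round trip from above in one place (the same commands, with /test as the example restore directory):

#fbackup -0vf /dev/rmt/0m -c /tmp/fback.config -i /
#mt -t /dev/rmt/0m rewind
#cd /test
#frecover -rvX -f /dev/rmt/0m -c /tmp/fback.config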
sujit kumar singh
Honored Contributor

Re: Copy 50GB data

Hi


Also make sure that your system running 11.11 has the latest patches for fbackup; that will be an added advantage. Go to the ITRC patch site, select 11.11, search by the keyword "fbackup", and download and install what comes up.

Those patches considerably enhance fbackup's functionality with the new Ultrium drives and media.

However, you can also do that later on.


Sujit
Peter Nikitka
Honored Contributor

Re: Copy 50GB data

Hi,

you described the task as a one-time job.
Using temporary storage is, in my view, not the adequate technique for copying files from one file system to another (on the same server!):
it involves an additional read and write operation, and that overhead cannot be regained, IMHO.

I still think
find ... | cpio ...
is hard to beat here.
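
A sketch of what that pipeline typically looks like, with /source and /destination as hypothetical stand-ins for the real mount points:

#cd /source
#find . -depth -print | cpio -pdumv /destination

(-p is cpio's pass-through mode, which copies directly with no intermediate archive; -d creates directories as needed, -u overwrites unconditionally, -m preserves modification times, -v is verbose.)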

mfG Peter
The Universe is a pretty big place, it's bigger than anything anyone has ever dreamed of before. So if it's just us, seems like an awful waste of space, right? Jodie Foster in "Contact"
sujit kumar singh
Honored Contributor

Re: Copy 50GB data

Hi


That's true; the most suitable command for such an operation is definitely the combination of find and cpio, that is:

#find ..... | cpio ....

That is really good for a filesystem copy, as we cannot use dd or vdump here, and cp is slower.

It is also a very recommendable solution, as it does not require any third storage and is more flexible too.

I fully agree; as suggested, do give cpio a try.

Regards
Sujit
hp_user_1
Regular Advisor

Re: Copy 50GB data

I tried "cd source; find * | nice --2 cpio -pudlvm /destination" and it worked much faster than cp, but still I did not get the speed I was looking for. I got about 8GB / hour .... I need to copy the 50GB in less than 3 hours in prod window ....

With fbackup/frecover, can I fbackup files and pipe the output to frecover to restore in destination without using tape?
sujit kumar singh
Honored Contributor

Re: Copy 50GB data

Hi

http://forums13.itrc.hp.com/service/forums/questionanswer.do?admit=109447627+1232412006748+28353475&threadId=1047319


See the post by marvik: you can also use dd with an optimal block size.


Just FYI, in case you want to copy a large FS on the same server:

#umount /fs1 (X series)
#umount /fs2 (Y series)

#dd if=/dev/vgxx/rlvolx of=/dev/vgyy/rlvoly bs=1024k



See another one in the same post by marvik:

#cd srcdir && fbackup -i . -f - | ( cd dstdir && frecover -Xsrf - )

Now it's your call; you have many options :)
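
To also get the multiple reader processes and the big block size from the config file earlier in this thread, the two could be combined; an untested sketch, assuming -c behaves the same when the "tape" is a pipe, with /source and /destination standing in for your real directories:

#cd /source && fbackup -i . -c /tmp/fback.config -f - | ( cd /destination && frecover -Xsrf - )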


Also see the post from JRF.


Regards
sujit
Steven Schweda
Honored Contributor

Re: Copy 50GB data

> I tried "cd source; find * | [...]

"find ." might be smarter than "find *".

I'd probably use a "tar" (or "pax") pipeline,
but I haven't run any speed tests, so I
don't know if it would be faster than cpio or
anything else.

A Forum search for "tar pipeline" should find
many examples in previous discussions.
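
One common shape for such a pipeline, sketched here with /source and /destination standing in for the real directories:

#cd /source && tar cf - . | ( cd /destination && tar xpf - )

tar writes the archive to stdout (f -) and the subshell extracts it from stdin at the destination, so nothing lands on intermediate storage; p asks tar to restore the original file permissions.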