copying tons of data
08-07-2000 12:54 AM
We're periodically copying gigabytes of data from one system to another by disconnecting our disks, reconnecting them on a second system, and importing the volume groups. Then we copy all the data. What's the fastest way to copy all the data over?
Solved!
08-07-2000 01:02 AM
Solution: You're already doing well; the fastest way to copy this much data is to reattach the disks on the remote server. Use as many SCSI connections as you can. If the VGs and lvols are the same size, then nothing is faster than dd if=/dev/vg00/rlvol of=
If your lvols are different sizes, then you're using some copy command. In that case, if you've got JFS filesystems, mount them all with the nolog option; this will speed up the copy. Once again, spawn off as many copy commands at once as you can (say, one for each lvol being copied). I've seen different cases where cpio, find | cpio, or cp is faster; you will need to test these in your case to see which is quicker.
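The same-size-lvol case above can be sketched as one dd per lvol running in the background so all paths are busy at once. The real raw devices (/dev/vg00/rlvolN) aren't available here, so this sketch stands them in with ordinary files; the block size and lvol names are assumptions:

```shell
# Sketch of the parallel raw-copy idea, with temp files standing in
# for the source and destination raw lvol devices.
src_dir=$(mktemp -d); dst_dir=$(mktemp -d)

# Fake two "lvols" of 64 KB each on the source side.
for lv in lvol1 lvol2; do
  head -c 65536 /dev/urandom > "$src_dir/$lv"
done

# One dd per lvol, all running at once; in the real case these would
# be dd if=/dev/vg00/rlvolN of=<target raw device> over separate buses.
for lv in lvol1 lvol2; do
  dd if="$src_dir/$lv" of="$dst_dir/$lv" bs=64k 2>/dev/null &
done
wait

cmp "$src_dir/lvol1" "$dst_dir/lvol1" && echo "copies match"
```

The point of the loop-and-wait shape is simply that each dd is independent, so running them concurrently keeps every channel saturated.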
08-07-2000 01:08 AM
Re: copying tons of data
Try using this cpio command to do the copy and free your systems.
From the directory you are copying from:
find . -depth | cpio -ocB | remsh (servername) "cd /(directory to copy into); cpio -imdvcB"
Cheers!
08-07-2000 05:41 AM
Re: copying tons of data
find . | cpio -pudlmv /target
(the "puddle-move" option). Leave off the v option if you don't want to see the individual files listed as they are copied. I always use the u option even if the target is empty, because I once got burned: I stopped a copy to verify operation and then restarted it. All the existing files were skipped as expected, but the last file copied on the first pass was only partially copied, and cpio does not check file sizes, just the existence of the file. The u option ensures that every file on the source will be copied to the destination.
For filesystems with large files, a dd copy will also work, although it will copy unused space as well, since it has no knowledge of individual files. You could also use fbackup piped to frecover as a file-copy method.
Bill Hassell, sysadmin
08-07-2000 05:51 AM
Re: copying tons of data
Bill is, as usual, correct. If your total amount of used data is less than half the total allocated space, then dd would not be quicker, as dd copies all blocks whether they are used or not. Normally a raw dd is about twice as fast as a normal copy command, but not if you are using less than half of your available space.
E.g., if you have a 9 GB disk but the lvols on it total only 4 GB of used data, it would be faster to use find | cpio to copy it; if you are using, say, 7 GB of it, then raw dd would be faster.
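The half-full rule of thumb above can be captured in a tiny helper. The 2x speed ratio is the poster's estimate, not a measured constant, and the function name is made up for illustration:

```shell
# Rule of thumb from the post: raw dd moves every block at roughly
# twice the per-byte rate of a file-level copy, so it only wins when
# more than half of the allocated space is actually used.
pick_copy_method() {
  used_gb=$1
  total_gb=$2
  if [ $((2 * used_gb)) -gt "$total_gb" ]; then
    echo "raw dd"
  else
    echo "find | cpio"
  fi
}

pick_copy_method 7 9   # 7 of 9 GB used: raw dd
pick_copy_method 4 9   # 4 of 9 GB used: find | cpio
```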
08-07-2000 07:52 AM
Re: copying tons of data
08-07-2000 08:00 AM
Re: copying tons of data
/usr/sbin/vxdump 0f - /dev/vgXX/lvXX | remsh
This will dump the data from one system to another without having to disconnect/swap disks between systems.
If you are using HFS filesystems, use the commands 'dump' and 'restore' instead of 'vxdump' and 'vxrestore'.
NOTE: This will not dump to the tape device. It will dump to the disk device specified.