Operating System - HP-UX
cpio timing throughput
04-25-2007 08:30 AM
We need to migrate from one EMC DMX frame to another. The VGs were created with 28 PVs and have max_pv set to 28, so no more PVs can be added to them to utilize HP-Mirroring.
The next solution would be to create a new VG using disks from the new DMX frame, mount the lvols on a temporary mountpoint, then copy:
find . -xdev -depth -print | cpio -pmluvd /
The question management has is how long the application must be down, i.e., how long this copy would take.
Obviously there are too many variables to say accurately; it's really a best guess.
Is there a formula to best-guess throughput here?
A few parameters to consider:
ia64 HP server rx8640
8 CPUs (4 dual-core CPUs), ~2.2 GHz (could be off a bit on the speed, but not by much)
32 GB memory
2 Gb/s throughput on the HBA
Bus speed
483 GB of data
Any input greatly appreciated.
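One thing I could do to get a rough number myself (a sketch only; /data/some_subdir and /newdata are placeholders for the actual source filesystem and temporary mountpoint) is to time a copy of a representative subset and extrapolate:
# Time a copy of a representative subdirectory (say ~10 GB) and scale up.
# /data/some_subdir is a sample of the source data; /newdata/some_subdir
# sits on the new VG's temporary mountpoint (both paths hypothetical).
# -v is dropped so terminal output doesn't skew the timing.
cd /data/some_subdir
timex sh -c 'find . -xdev -depth -print | cpio -pmud /newdata/some_subdir'
# If a 10 GB sample takes 600 s, a crude estimate for 483 GB is
# 483 / 10 * 600 = 28980 s, or roughly 8 hours.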
Solved! Go to Solution.
3 REPLIES
04-25-2007 08:38 AM
Re: cpio timing throughput
Shalom,
EMC has a business-copy feature (the name may be wrong here) that permits copying data between arrays. This is likely to be MUCH faster than anything the OS can do.
You might be able to use the EMC array to copy everything across and then present the LUNs to your rx8640 server.
One thing you are right about: there is no way to predict in advance how long this will take. Perhaps see about a real-world test at EMC or your local HP Performance Center.
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
04-25-2007 09:11 AM
Solution
At this point, about the only thing that you can quantify is a minimum time:
(483 GB × 8 bit/byte) / 2 Gbit/s = 1932 s ≈ 32.2 min
That assumes no protocol overhead and no other constraints, which is very unrealistic, but it does represent the minimum possible time. I would expect actual transfer rates over the Fibre to be about half of that, and disk and filesystem activity to increase it by roughly another 6x, so a value of 12x the minimum (about 6.5 hours) is in the ballpark.
Of course, this is a lot like the Drake equation because there is a lot of "it depends" here.
One approach that would be MUCH faster than your proposed method is to copy at the raw level. If each of your destination LVOLs is at least as large as the corresponding source LVOL, then I would umount the filesystems and use dd. This completely bypasses the filesystem. If part of your goal is also to increase filesystem sizes, then go ahead and make your destination LVOLs larger now, dd them, and then do the extendfs or fsadm -b to grow the filesystems after the transfer.
The nice thing about the dd method is that it is independent of the underlying filesystem configuration, so many small files, a few large files, or any combination will transfer equally fast --- and it is thus testable.
Do something like this:
timex dd if=/dev/vg05/rlvol1 bs=1024k of=/dev/vg105/rlvol1
You could even do this on a live system as long as you don't actually use the transferred data. You might try doing multiple simultaneous dd's to find the overall transfer-rate sweet spot. Doing this on a live system has another advantage: you will actually be able to beat those values once you stop the applications and umount the filesystems -- so the numbers you give your management will be worst-case.
So rather than asking a bunch of guys on the 'Net how long this will take, you can actually find out for yourself and have a value in which you have confidence.
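For example (a sketch only -- rlvol2 and its destination are hypothetical; substitute your own lvol names), you could time two streams running in parallel and compare against a single run:
# Time two dd copies running at the same time (lvol names illustrative).
timex dd if=/dev/vg05/rlvol1 bs=1024k of=/dev/vg105/rlvol1 &
timex dd if=/dev/vg05/rlvol2 bs=1024k of=/dev/vg105/rlvol2 &
wait
# Add streams until the aggregate throughput stops improving; that is
# your sweet spot for the real migration.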
If it ain't broke, I can fix that.
04-26-2007 01:45 AM
Re: cpio timing throughput
Thanks for the feedback; about what I expected. Another comment came in:
Hit [Enter], start the stopwatch; when the prompt returns, hit the stopwatch again...
However, I like the 'dd' idea more and more now that I've thought about it a bit. If I can get a test set up relatively quickly today, I'll post results.
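For what it's worth, extrapolating from a hypothetical test (say a 10 GB lvol copied in about 400 seconds, i.e. roughly 25 MB/s) would look like this:
# Scale a measured dd rate up to the full 483 GB (example numbers only).
echo "483 * 1024 / 25 / 3600" | bc -l     # about 5.5 hours at 25 MB/s
# Several dd streams in parallel should bring the wall-clock time down.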