Storage Migration
04-24-2006 03:34 PM
Need some help with data migration from an old EMC array to a new one with higher capacity. Here is the current setup:
* 50 LUNs, each 65 GB in size
* 5 LVs
* Max PV - 60
* Max PE per PV - 4186
* PE size - 16 MB
What are my options to migrate to new storage with larger LUNs (number of LUNs unknown at this point)? The requirement is to double the size of each LV on the new storage.
1) pvmove cannot be used, as the volumes are striped across multiple LUNs.
2) Using lvextend/lvreduce will end up underutilizing the LUN space due to Max PE/PV, and only 10 more LUNs can be added with Max PV set to 60.
Please validate statements (1) and (2). If they are true, what other options can I use to migrate to the new storage? Thanks for your time.
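For reference, a quick back-of-envelope (my arithmetic, from the numbers above) shows the ceiling that statement (2) runs into, using POSIX shell arithmetic:
echo $((4186 * 16))        # 66976 MB: max usable space per PV (~65.4 GB)
echo $((60 * 4186 * 16))   # 4018560 MB: max VG capacity at 60 PVs (~3.8 TB)
So a LUN larger than ~65.4 GB can only ever contribute 4186 extents, and the VG as a whole tops out near 3.8 TB under the current limits.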
Solved!
04-24-2006 04:02 PM
Re: Storage Migration
Otherwise, you are looking at creating new VG(s) and copying the data (I prefer vxdump/vxrestore):
vxdump -0 -f - -s 1000000 -b 16 /tempmountofold/oracle | (cd /oracle ; vxrestore rf -)
Rgds...Geoff
04-24-2006 04:17 PM
Re: Storage Migration
the "Max PE per PV" limited the PV size, you need to create a new VG with new parameters if you will have larger LUN in the new storage.
GOOD LUCK.
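To illustrate, a hedged sketch of creating such a VG; the device names, minor number, and sizes here are hypothetical and assume 256 GB LUNs on the new array:
# create the VG group file (pick an unused minor number on your system)
mkdir /dev/vgnew
mknod /dev/vgnew/group c 64 0x010000
# initialize the first new LUN and build the VG with larger limits:
# 32 MB extents x 8192 max PE per PV = 256 GB usable per PV
pvcreate /dev/rdsk/c10t0d0
vgcreate -s 32 -e 8192 -p 60 /dev/vgnew /dev/dsk/c10t0d0
The point is to size -s and -e so that one PV can cover the largest LUN you ever expect to present.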
04-25-2006 01:49 AM
Re: Storage Migration
I would create the new volume groups on the new EMC, naming them something slightly different, then do a disk-to-disk copy or otherwise move the data into the new volume groups. Remove the old volume groups and rename the new ones to the same names as the old ones.
That means you don't need to back up to tape.
Just a thought.
Regards,
JASH
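HP-UX has no direct VG rename, so JASH's rename step is usually done with vgexport/vgimport. A minimal sketch, assuming the copied data lives in vgnew and that the target name (vgora) and disk paths are hypothetical:
vgchange -a n /dev/vgnew
vgexport -m /tmp/vgnew.map /dev/vgnew    # removes vgnew from lvmtab, saving LV names to the map file
mkdir /dev/vgora
mknod /dev/vgora/group c 64 0x020000     # pick an unused minor number
vgimport -m /tmp/vgnew.map /dev/vgora /dev/dsk/c10t0d0 /dev/dsk/c10t0d1
vgchange -a y /dev/vgora
The old VG must be removed (or exported) first so its name is free to reuse.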
04-25-2006 02:47 AM
Re: Storage Migration
Here are the options I have tried and used between EMC arrays. In your case, use option 2.
1. As Geoff described, use LVM-based mirroring via lvextend -m 1. (This is applicable only if you have not reached the max PE per VG.)
or
2. This is what I did last week to refresh between two arrays (Symmetrix and CLARiiON): mounted both and just copied using cp -pr.
One constraint is that files larger than 2 GB need to be copied separately with cp -p.
I did this for about 5 TB of data; it took about 36 hours.
3. You can use dd. (I would go for this if I had raw volumes.)
4. Backup and restore.
But I don't have much faith in option 4.
Ensure that your DB/applications are shut down.
All the best
Chan
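A hedged sketch of Chan's option 1, with hypothetical device names. Note this needs MirrorDisk/UX, enough free extents, and, as the original post points out, it is ruled out here because the LVs are striped:
pvcreate /dev/rdsk/c10t0d0
vgextend /dev/vg01 /dev/dsk/c10t0d0              # bring the new array's LUN into the VG
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c10t0d0   # mirror the LV onto the new LUN
# once the mirror has synced, drop the copy on the old array:
lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c0t0d0
vgreduce /dev/vg01 /dev/dsk/c0t0d0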
04-25-2006 03:24 PM
Re: Storage Migration
I have additional questions based on your inputs.
3) What is the command syntax for vxdump/vxrestore? (Geoff, can you please provide it for the following case?)
Current f/s - /aaa/u01
New f/s - /bbb/u01
4) Which is the best method (reliability, speed, etc.): vxdump, cpio, or cp? (I do have files around 2 GB in size.)
5) Is there any other tool (third party) available for migration (source and destination are EMC arrays)?
Thanks again.
04-25-2006 04:59 PM
Re: Storage Migration
vxdump -0 -f - -s 1000000 -b 16 /aaa/u01 | (cd /bbb/u01 ; vxrestore rf -)
4 TB... that is going to take a while. Depending on the frame, the SAN, and the server, anywhere from 6 to 18 hours.
Rgds...Geoff
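As a rough sanity check on that estimate (my arithmetic, not Geoff's): 4 TB is about 4,194,304 MB, so the 6-18 hour range corresponds to a sustained copy rate of roughly 65-195 MB/s:
echo $((4 * 1024 * 1024 / 18 / 3600))   # ~64 MB/s sustained for an 18-hour copy
echo $((4 * 1024 * 1024 / 6 / 3600))    # ~194 MB/s sustained for a 6-hour copy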
04-25-2006 07:30 PM
Solution
1. Create a new VG. Stick with a large extent size, and make sure you increase max PE per PV and max PV this time. Set max PV to 255 (the maximum), then increase PE per PV as high as you can go.
2. Create new logical volumes. They must be at least as large as the source volumes, but can be larger if you want.
3. Shut down Oracle and unmount the file systems.
4. dd copy, using this format:
dd if=/dev/vgold/lvold of=/dev/vgnew/lvnew bs=256k
(Without bs=256k, it will default to a 512-byte block size and take forever. With 256k, it usually runs pretty fast.)
5. After dd's are done, extend filesystems:
extendfs -F vxfs /dev/vgnew/lvnew
6. Mount the new LVs at the old mount points and validate the migration. Then fix /etc/fstab and delete the old VG. Rename the new VG if you want to keep the old name.
If you want to get this really moving, run the dd's in parallel.
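A minimal sketch of the parallel-dd idea, assuming (hypothetically) five LVs with matching names in the old and new VGs:
for lv in lvol1 lvol2 lvol3 lvol4 lvol5; do
    dd if=/dev/vgold/$lv of=/dev/vgnew/$lv bs=256k &   # one background copy per LV
done
wait    # block until every copy finishes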
04-25-2006 09:04 PM
Re: Storage Migration
I think you can use SAN Copy between CLARiiON arrays, though I have never tried that and don't know for certain that it will work; I have only used it within one array. The primary copy takes a lot of time per TB, maybe about 24 to 30 hours, I think.
Chan
04-26-2006 02:56 AM
Re: Storage Migration
EMC has the Open Migrator product, which would work. The software runs on HP-UX and will migrate between different types of arrays.
They also sell Open Replicator, which runs on a DMX and is similar to SAN Copy.
05-01-2006 02:02 AM
Re: Storage Migration
Scott, how do I monitor the progress of dd, i.e., how far the copy has gone? Also, I did some testing by copying data on a 160 MB volume; here are the metrics:
bs=256k -> 23.4 secs
bs=512k -> 23.08 secs
bs=1024k -> 20.1 secs
Each figure is the average of multiple runs with that bs value. I will be doing the actual migration in a few days and will post the results in the forum.
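Plain HP-UX dd has no built-in progress reporting that I know of, but one common workaround is to copy in fixed-size chunks and report between them. A hedged sketch, assuming a hypothetical 65 GB LV copied in 1 GB pieces:
i=0
while [ $i -lt 65 ]; do
    # skip/seek offsets are counted in bs-sized (1 MB) blocks
    dd if=/dev/vgold/lvold of=/dev/vgnew/lvnew bs=1024k \
       count=1024 skip=$((i * 1024)) seek=$((i * 1024)) 2>/dev/null
    i=$((i + 1))
    echo "$i of 65 GB copied"
done
The per-chunk dd calls cost a little overhead, but they give you a running count without interrupting the copy.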