
Storage Migration

 
SOLVED
SKR-hp
Occasional Contributor

Storage Migration

Hello All,

I need some help with data migration from an old EMC array to a new one with higher capacity. Here is the current setup:

* 50 luns, each 65G in size
* 5 LVs
* Max PV - 60
* Max PE per PV - 4186
* PE size - 16 MB

What are my options to migrate to new storage with larger luns (number of luns unknown at this point)? The requirement is to double the size of each LV on the new storage.

1) pvmove cannot be used, as the volumes are striped across multiple luns.

2) Using lvextend/lvreduce will end up underutilizing the lun space due to Max PE/PV, and only 10 more luns can be added with Max PV set to 60 (rough arithmetic below).

Please validate statements (1) and (2). If they are true, what other options can I use to migrate to the new storage? Thanks for your time.
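
For reference, here is the rough arithmetic behind statement (2), based on the numbers above:

Max usable space per PV = 4186 PE x 16 MB = ~65.4 GB (which is why the current luns are 65G)
Max VG capacity = 60 PV x ~65.4 GB = ~3.8 TB
Currently allocated = 50 luns x 65 GB = ~3.2 TB

So only about 10 more 65G luns (~650 GB) fit in this VG, and a bigger lun would still only contribute ~65.4 GB to the VG because of Max PE per PV.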
Geoff Wild
Honored Contributor

Re: Storage Migration

If you have room in the VG - you could mirror the data across the frames...then remove the old pv's...
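
Something like this (disk and VG names are placeholders; it assumes MirrorDisk/UX is installed and the LV layout allows mirroring - striped LVs may refuse -m):

# bring the new-frame disk into the VG
vgextend /dev/vgdata /dev/dsk/c10t0d1
# add a mirror copy of the LV on the new disk (repeat per LV)
lvextend -m 1 /dev/vgdata/lvol1 /dev/dsk/c10t0d1
# once the mirror is in sync, drop the copy on the old disk and remove it from the VG
lvreduce -m 0 /dev/vgdata/lvol1 /dev/dsk/c4t0d1
vgreduce /dev/vgdata /dev/dsk/c4t0d1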

Otherwise, you are looking at creating new vg(s) and copying data (I prefer vxdump/vxrestore).

vxdump -0 -f - -s 1000000 -b 16 /tempmountofold/oracle | (cd /oracle ; vxrestore rf -)


Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Warren_9
Honored Contributor

Re: Storage Migration

hi,

the "Max PE per PV" limited the PV size, you need to create a new VG with new parameters if you will have larger LUN in the new storage.

GOOD LUCK.
JASH_2
Trusted Contributor

Re: Storage Migration

SKR,

I would create the new volume groups on the new EMC, calling them something slightly different, then do a disk-to-disk copy to move the data into them. Remove the old volume groups and rename the new ones to match the old names.

That means you don't need to back up to tape.
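
The rename at the end is just a vgexport/vgimport with a map file, something like this (names, disks and the minor number are placeholders):

vgchange -a n /dev/vgnew
vgexport -v -m /tmp/vgnew.map /dev/vgnew    # data on the disks is untouched
mkdir /dev/vgdata                           # the old name, recreated after the old VG is gone
mknod /dev/vgdata/group c 64 0x030000       # use a free minor number
vgimport -v -m /tmp/vgnew.map /dev/vgdata /dev/dsk/c10t0d1 /dev/dsk/c10t0d2
vgchange -a y /dev/vgdata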

Just a thought.

Regards,

JASH
If I can, I will!
Chan 007
Honored Contributor

Re: Storage Migration

Hi,

Here are the options I have tried and used between EMC arrays. In your case, use option 2.

1. As Geoff said, use LVM-based mirroring with lvextend -m 1 (this is applicable only if you have not reached the max PE per VG).

or

2. This is what I did last week to refresh between two arrays (Sym and Clar): mounted both and just copied using cp -pr (see the sketch below).
One constraint is that files larger than 2 GB need to be copied with cp -p separately.
I did this for about 5 TB of data; it took about 36 hours.

3. You can use dd (I would go for this if I had raw volumes).
4. Backup and restore.
But I would not count on option 4.

Ensure that your DB/applications are shut down.
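
For option 2, roughly (placeholder names; the target VxFS needs the largefiles option or files over 2 GB will not fit at all):

newfs -F vxfs -o largefiles /dev/vgnew/rlvol1
mount /dev/vgnew/lvol1 /newdata
cp -pr /data/* /newdata    # copy any top-level dot files separately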

All the best

Chan
SKR-hp
Occasional Contributor

Re: Storage Migration

Thanks everyone for your answers. We do not have space available in the VG. This is an Oracle DB server, the total data to be moved is around 4 TB, and it's all file systems (no raw volumes). From your input, it looks like I cannot use pvmove or mirroring due to the constraints with Max PE/PV and Max PV.
I have additional questions based on your inputs.

3) What is the command syntax for vxdump/vxrestore? (Geoff - can you please provide it for the following case?)
Current f/s - /aaa/u01
New f/s - /bbb/u01

4) Which is the best method (reliability, speed, etc.) - vxdump/cpio/cp? (I do have files around 2 GB in size.)

5) Is there any other (third-party) tool available for migration (source and destination are EMC arrays)?

Thanks again.
Geoff Wild
Honored Contributor

Re: Storage Migration

Command would be:

vxdump -0 -f - -s 1000000 -b 16 /aaa/u01 | (cd /bbb/u01 ; vxrestore rf -)

4 TB - that is going to take a while...depending on the frame, SAN, and server - anywhere from 6 to 18 hours...
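
To put rough numbers on that: 4 TB at a sustained 65 MB/s works out to about 18 hours, and around 190 MB/s would bring it down to about 6, so where you land depends almost entirely on what the frames, SAN, and host can sustain.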

Rgds...Geoff

Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Scott Riley
Valued Contributor
Solution

Re: Storage Migration

There are many ways to do this. The fastest, most reliable method IMO is using dd. I have done this countless times on EMC arrays. Your steps should be:

1. Create new VG. Stick with a large extent size, and make sure you increase max pe per pv and maxpv this time. Set max pv to 256, then increase pe per pv as high as you can go.
2. Create new logical volumes. They must be at least as large as the source volumes, but can be larger if you want.
3. Shutdown Oracle/unmount file systems
4. dd copy, using this format:

dd if=/dev/vgold/lvold of=/dev/vgnew/lvnew bs=256k

(without bs=256k, it will default to a 512 byte block size and take forever. With 256k, it usually runs pretty fast.)

5. After dd's are done, extend filesystems:

extendfs -F vxfs /dev/vgnew/lvnew

6. Mount the new LVs at the old mount points and validate the migration. Then fix /etc/fstab and delete the old VG. Rename the new VG if you want to keep the old name.

If you want to get this really moving, run the dd's in parallel.
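
For example, one dd per LV in the background (LV names are placeholders), each logging its record counts:

dd if=/dev/vgold/lvol1 of=/dev/vgnew/lvol1 bs=256k 2> /tmp/dd.lvol1.log &
dd if=/dev/vgold/lvol2 of=/dev/vgnew/lvol2 bs=256k 2> /tmp/dd.lvol2.log &
# one per LV, then wait for all of them to finish before running extendfs
wait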
Chan 007
Honored Contributor

Re: Storage Migration

Hi,

I think you can use SAN Copy between Clariion arrays, but I have never tried it that way and don't know if it will work; I have only used it within one array. The primary copy takes a lot of time per TB; it may take about 24 to 30 hours, I think.

Chan

Scott Riley
Valued Contributor

Re: Storage Migration

EMC San Copy will get the data over fine, but only if one of the arrays is a Clariion, and as long as the target luns are at least as big as the source luns. San Copy will work between different array models, but the software runs on a Clariion, so you'll need one of those.

EMC also has the Open Migrator product, which would work. The software runs on HP-UX and will migrate between different types of arrays.

They also sell Open Replicator, which runs on a DMX and is similar to San Copy.
SKR-hp
Occasional Contributor

Re: Storage Migration

Thanks again to you all for the information.

Scott - How do I monitor the progress of dd, i.e. how far the copy has gone, etc.? Also, I did some testing by copying data on a 160 MB volume; here are the metrics.

bs=256k -> 23.4 secs
bs=512k -> 23.08 secs
bs=1024k -> 20.1 secs

The above are averages of tests run multiple times with each bs value. I will be doing the actual migration in a few days and will post the results in the forum.