
Migrating to New Storage

 
SOLVED
Sial_1
Frequent Advisor

Migrating to New Storage

Hi,

Current configuration:
Two-node cluster (Serviceguard), HP-UX 11i v2, shared storage EVA4000.

We are replacing the EVA4000 storage with an EVA8100. In the current configuration all filesystems (except vg00) are configured on the old EVA4000. Now we have to migrate all filesystems from the old EVA4000 to the new EVA8100.
Could someone help me understand how to perform this activity, i.e. which disk-related Serviceguard configuration files or parameters would need to be changed?

Thanks in advance
15 REPLIES
Ivan Krastev
Honored Contributor

Re: Migrating to New Storage

Check for the OnlineJFS product. You can do this with simple LVM mirroring: create a mirror on the new EVA8100, then remove the mirror from the old one.

Afterwards, re-import all the VGs on the second node.
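For example, per logical volume (the VG, LV, and device names here are only illustrative):

# lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c10t0d1   # add a mirror copy on the new EVA8100 LUN
# lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c4t0d1    # drop the copy on the old EVA4000 LUN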

regards,
ivan
likid0
Honored Contributor

Re: Migrating to New Storage

You can use mirrors to copy the data if you have MirrorDisk/UX (if you have Serviceguard you should have MirrorDisk/UX). Mirror on the node where your VGs are active, then vgexport/vgimport on the node where they are not active.

You have to be careful with your cluster lock disk if you are using one.

Once you have migrated, you need to stop the cluster and change FIRST_CLUSTER_LOCK_PV in your cluster ASCII config file, then cmapplyconf your modified ASCII file.

I think with Serviceguard 11.19 you can change the cluster lock without stopping the cluster.
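A rough outline of that lock-disk change (the cluster name and file path are only examples; check the current man pages first):

# cmhaltcl -f                                      # halt the cluster
# cmgetconf -c clusterA /etc/cmcluster/clusterA.config
(edit FIRST_CLUSTER_LOCK_PV, and SECOND_CLUSTER_LOCK_PV if defined, for every node)
# cmcheckconf -C /etc/cmcluster/clusterA.config    # verify
# cmapplyconf -C /etc/cmcluster/clusterA.config    # apply
# cmruncl                                          # start the cluster again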
Windows?, no thanks
Jorge Pons
Trusted Contributor

Re: Migrating to New Storage

Hi

Another way to do it is with EVA Continuous Access, from Command View EVA or the RSM software.
Both EVAs must be visible in the fabric. You create a replica from the EVA4000 to the EVA8100; when it finishes, unplug HP-UX from the EVA4000 and plug it into the EVA8100 (or re-zone).
To be safe, first run "vgexport -p -s -v -m /tmp/vg0x.map /dev/vg0x" from HP-UX so that you have the map files.

Regards

Sial_1
Frequent Advisor

Re: Migrating to New Storage


We are planning to do this activity using disk mirroring (MirrorDisk/UX).
Ganesan R
Honored Contributor

Re: Migrating to New Storage

Hi,

Normally a storage migration comprises these steps (a command sketch follows the list):

1. Assign LUNs from the new storage to both nodes.

2. Add the new LUNs to the respective volume groups with vgextend.

3. Use mirroring or pvmove to copy the data from the old storage LUNs to the new LUNs. Prefer mirroring; pvmove is always a little risky.

4. Remove the mirror copies from the old storage LUNs.

5. vgreduce the old storage LUNs from the respective volume groups.

6. Remove the device files and unpresent the LUNs from the old storage.

7. Take the map file and import it on the other node.

8. Reinitialise the cluster lock disk.
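As a rough command sketch of steps 2-5 and 7 for a single VG (all names are examples only):

node1# pvcreate /dev/rdsk/c10t0d1                      # prepare the new LUN
node1# vgextend /dev/vg01 /dev/dsk/c10t0d1             # step 2
node1# lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c10t0d1  # step 3, repeat per LV
node1# lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c4t0d1   # step 4, repeat per LV
node1# vgreduce /dev/vg01 /dev/dsk/c4t0d1              # step 5
node1# vgexport -p -v -s -m /tmp/vg01.map vg01         # step 7, then copy the map to node2 and vgimport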
Best wishes,

Ganesh.
Sial_1
Frequent Advisor

Re: Migrating to New Storage



>>7. Take the map file and import it on the other node.<<
How do I do this step?

Is there no need to change anything in any Serviceguard configuration files?
Jorge Pons
Trusted Contributor

Re: Migrating to New Storage

Hi

Mirroring is the most secure way:
- Create new LUNs in the EVA8100.
- Present them to both hosts.
- On the HP-UX host where the package (VG) is active, run ioscan and insf.
- lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/<newdisk>
When it finishes:
- lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/<olddisk>
- Revise the device files on the other node before switching the package over.
regards
Stephen Doud
Honored Contributor

Re: Migrating to New Storage

The easiest method is to use Continuous Access software to copy the data bit-for-bit to the new LUNs and then export the original VGs and import the new ones.

The next easiest method is to vgextend the new LUNs into the current VGs, mirror the volumes in the VGs to the new LUNs, and then reduce the old array's LUNs out of the VGs.

The most complex method is to create new VGs, volumes and file systems and copy the data from the old volumes to the new ones, and then remove the old storage references and linkage.
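A bare-bones sketch of that last method for a single filesystem (the names, size, and minor number are illustrative only, and it assumes VxFS):

# pvcreate /dev/rdsk/c10t0d1
# mkdir /dev/vgnew
# mknod /dev/vgnew/group c 64 0x030000        # pick an unused minor number
# vgcreate /dev/vgnew /dev/dsk/c10t0d1
# lvcreate -L 4096 -n lvol1 /dev/vgnew        # size in MB
# newfs -F vxfs /dev/vgnew/rlvol1
# mkdir /new_data
# mount /dev/vgnew/lvol1 /new_data
# cd /old_data && find . -xdev | cpio -pdmu /new_data   # copy the data across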

Which method do you think you will be using?
Jorge Pons
Trusted Contributor

Re: Migrating to New Storage

To take the maps:

vgexport -p -s -v -m /tmp/vg0x.map /dev/vg0x

-p: preview only; it does not remove the VG.
-s: store the VGID read from the disks in the map file, so the import on the other node picks up the right disks.
-v: verbose.
-m: map file.

johnsonpk
Honored Contributor

Re: Migrating to New Storage

Hi Sial,

The steps in a nutshell:

Allocate LUNs from the new storage to both nodes.

If you have MirrorDisk/UX installed on the server (it should be, since you have the HA OE):

1) Do pvcreate on the new disks.
2) Extend your SAN VGs onto the new disks.
3) Create a mirror copy on the new disks for all logical volumes:
#lvextend -m 1 /dev/<vgname>/<lvname> /dev/dsk/<newdisk>

4) Reduce the mirror copy from the old disks:
#lvreduce -m 0 /dev/<vgname>/<lvname> /dev/dsk/<olddisk>

5) Repeat steps 3 & 4 for all lvols in the VG.
6) Repeat the same steps for all VGs.

At this point you need to bring down your cluster.

Edit the current cluster ASCII file and change the cluster lock disk path, or create a cluster ASCII file from the running cluster with the cmgetconf command and edit the lock disk info.

7) Reduce the old disks from the VG:
#vgreduce <vgname> /dev/dsk/<olddisk>

Then export the VG in preview mode and create the map file:
#vgexport -p -v -s -m /tmp/<vgname>.map <vgname>
Copy the map file to the second node.
Import the VG using the newly created map file.

Check and apply the cluster configuration.

Thanks!!
Johnson
Ganesan R
Honored Contributor

Re: Migrating to New Storage

Hi,

I assumed that you were familiar with LVM since you are working with Serviceguard. That is why I provided only the brief steps and not the detailed commands.

OK, coming to your question:
>>7. Take the map file and import it on the other node.
How do I do this step?<<

As people have already said, use the vgexport command in preview mode to just take the map file, then use vgimport to import it on the other node. Here is an example that takes the map file of vg01 on node1 and imports it on node2 (assuming vg01 is currently active on node1):

node1#vgexport -p -v -s -m /tmp/vg01.map vg01

now copy this map file to node2
node1#scp /tmp/vg01.map node2:/tmp

On node2, remove the existing configuration of vg01 and import it with the new map file. Also note down the minor number of vg01 before removing it.
node2#ll /dev/vg01/group
node2#vgexport -v /dev/vg01
node2#mkdir /dev/vg01
node2#mknod /dev/vg01/group c 64 0x010000
node2#vgimport -v -s -m /tmp/vg01.map vg01


>>>Is there no need to change anything in any Serviceguard configuration files?<<<

No changes are needed in the cluster configuration file except for the cluster lock disk. You need to modify the cluster lock device entries and put the cluster lock information on the new cluster lock device.
Best wishes,

Ganesh.
RajuD
Frequent Advisor
Solution

Re: Migrating to New Storage

Hi,

Everyone has given you the right direction for your question. I have done this activity before; the best method is mirroring, and Mr. Johnson has given the step-by-step procedure.

I would like to add a few more things.

step 1

Take a backup of the output of 'strings /etc/lvmtab'.

Step 2

Take backup of cluster configuration file /etc/cmcluster/*

Step 3

Take a backup of the '#ioscan -fnC disk' output.

Step 4

Take a backup of the VG and VGID information of every disk by executing the command below (it dumps the 16 bytes at offset 8200, where LVM keeps the VGID):

#xd -j8200 -N16 -tu /dev/rdsk/<disk>

For example:

#xd -j8200 -N16 -tu /dev/rdsk/c4t0d1

Step 5

Assign LUN's from new storage to both nodes.

Step 6

Check that the new LUNs are being detected on both nodes by comparing against the ioscan output you saved earlier.

Step 7

pvcreate the new disks:

#pvcreate /dev/rdsk/<newdisk>

Step 8

Check whether the LVs are striped before adding the disks to the VG; the steps here work only for non-striped LVs.

Allocate the disks to the VG using vgextend:

#vgextend <vgname> /dev/dsk/<newdisk>


Step 9


Mirror the LVs in the VG:

#lvextend -m 1 /dev/<vgname>/<lvname> /dev/dsk/<newdisk>

For example:
#lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c4t0d1

Continue the above step for all remaining LVs.

Step 10

Check the mirror status of all LVs:

#lvdisplay
You will get output like the one below; check the 'Mirror copies' field, it should be 1:

# lvdisplay /dev/vg00/lvol3
--- Logical volumes ---
LV Name /dev/vg00/lvol3
VG Name /dev/vg00
LV Permission read/write
LV Status available/syncd
Mirror copies 1


Step 11

Once you have confirmed that it is mirrored, remove the old disk from the LVs:

#lvreduce -m 0 /dev/<vgname>/<lvname> /dev/dsk/<olddisk>

Do this for all remaining LVs.


Step 12
Follow the steps provided by Mr. Ganesan:

node1#vgexport -p -v -s -m /tmp/vg01.map vg01

now copy this map file to node2
node1#scp /tmp/vg01.map node2:/tmp

On node2, remove the existing configuration of vg01 and import it with the new map file. Also note down the minor number of vg01 before removing it.
node2#ll /dev/vg01/group
node2#vgexport -v /dev/vg01
node2#mkdir /dev/vg01
node2#mknod /dev/vg01/group c 64 0x010000
node2#vgimport -v -s -m /tmp/vg01.map vg01

Step 13

If your cluster uses a lock disk, you need to change its device file. To check whether your cluster contains a lock disk, run the command below:

#cmgetconf (for more details check the man page)

Create an ASCII file. To generate the cluster ASCII configuration file for clusterA and store the information in clusterA.config, do the following:

cmgetconf -c clusterA clusterA.config

Step 14
Edit clusterA.config, change the cluster lock disk entries, and save the file.

Step 15
To verify the cluster configuration and package files, do the
following:

cmcheckconf -C clusterA.config -P pkg1.config -P pkg2.config
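After the verification succeeds, the configuration still has to be applied so it is distributed to both nodes; per the cmapplyconf man page, the matching command takes the same file arguments:

cmapplyconf -C clusterA.config -P pkg1.config -P pkg2.config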


Step 16
Start the cluster and the packages.

“Education is our passport to the future, for tomorrow belongs to those who prepare for it today.”
Md. Minhaz Khan
Super Advisor

Re: Migrating to New Storage

Dear RajuD

Your action plan is very professional. Thanks a lot for submitting such a detailed plan.

As mentioned in the solution posted above by Ganesan R, in step 6:

>>>6. Remove the device files and unpresent the LUNs from the old storage.<<<

How can we remove the old LUN device files from the HP-UX host once we have vgreduced the old LUNs from the VG?

Is there any command for this, similar to insf -e?

Thanks
Minhaz
johnsonpk
Honored Contributor

Re: Migrating to New Storage

Hi Sial,

How can we remove the old LUN device files from the HP-UX host once we have vgreduced the old LUNs from the VG?

Is there any command for this, similar to insf -e?

Yes, there is:

use rmsf -H <hardware_path>

(Remember to vgreduce the disk from the VG before doing rmsf.)
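For example (the hardware path is illustrative; take the real one from the ioscan output saved earlier):

#ioscan -fnC disk                # find the old LUN's hardware path
#rmsf -H 0/2/1/0.1.0.0.0.0.1     # remove the device files for that path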

Thanks!!
Johnson
Steven E. Protter
Exalted Contributor

Re: Migrating to New Storage

Shalom,

Because this is EVA to EVA, you can likely daisy-chain the arrays and use the EVA utilities on the DL380 system that controls the disk array to copy the data very quickly.

If not, you can also set up new LUNs on the new array and use MirrorDisk/UX (lvextend -m 1) to copy the data, though this is much slower.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com