TMcB
Super Advisor

Looking for some advice on migrating SAN disks in a MCSG environment.

Hi everyone,
I'm looking for some advice on migrating SAN disks in a MCSG environment.

We have a two-node cluster on HP-UX 11.11 running MCSG 11.16, and the package includes an Informix database. We currently use SecurePath for multipathing, but I believe we will move to SDD once we go to IBM.

Disks are currently on HP EVAs and mirrored using MirrorDisk/UX.
We will be migrating to an IBM DS4800 SAN.

On Node 1

1. Get the SAN team to present the new disks and install the SDD driver on the server
2. Run ioscan and insf -e
3. Shut down MCSG
4. Add the new disks into the VG and mirror onto them
pvcreate -f /dev/rdsk/newdisk1
pvcreate -f /dev/rdsk/newdisk2
vgextend /dev/vgsgm /dev/rdsk/newdisk1
vgextend /dev/vgsgm /dev/rdsk/newdisk2
lvextend -m 3 /dev/vgsgm/lvusr1
5. Once mirroring is complete, remove the EVA disks from the VG (see the sync check sketched after this list)
lvreduce -m 2 /dev/vgsgm/lvusr1 /dev/dsk/oldEVAdisk1
lvreduce -m 1 /dev/vgsgm/lvusr1 /dev/dsk/oldEVAdisk2
vgreduce -f /dev/vgsgm /dev/dsk/oldEVAdisk1
vgreduce -f /dev/vgsgm /dev/dsk/oldEVAdisk2
6. Disconnect the EVA fibres
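A rough sync check before running the lvreduce commands in step 5 (a sketch using the VG/LV names above - repeat the lvdisplay for each LV in the VG):

lvdisplay -v /dev/vgsgm/lvusr1 | grep -i stale   # any output means extents are still resyncing - wait
vgdisplay -v /dev/vgsgm | grep "PV Name"         # both new disks should be listed in the VG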

On Node 2
Repeat steps above (apart from pvcreate)

When finished, uninstall SecurePath on both nodes and restart MCSG.
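For the restart at the end, something along these lines (standard Serviceguard commands, nothing specific to this migration - assumes the whole cluster was halted rather than just the package):

cmruncl -v       # start the cluster on both nodes
cmviewcl -v      # confirm node status and that the Informix package has started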

A few questions:
- Can I have both the IBM disks and the HP disks presented at the same time?

- What confusion will this cause between SecurePath and SDD?

- Do I need to shut down MCSG before doing the mirroring, and keep it down on both nodes until the exercise is complete?

- On the 2nd node, do I have to use any of the vgimport commands originally used to configure the VGs from a mapfile,
e.g. vgimport -v -m /tmp/vgsgm.mapfile /dev/vgsgm /dev/dsk/newdisk1 /dev/dsk/newdisk2

Any advice on this would be greatly appreciated.
Thanks
10 REPLIES
Steven E. Protter
Exalted Contributor

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

Shalom,

- Mixed presentation: Yes, this should be possible, but your volume group may not have the capacity for that many disks and may need to be rebuilt prior to starting the process (a quick check is sketched after these points).

- Shut down MCSG? No. Back up the data before you start? Yes.

- Your vgimport command should work.
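A quick way to check that first point against the VG in question (run it where the VG is active; vgsgm is the name from the original post):

vgdisplay /dev/vgsgm | egrep "Max PV|Cur PV"     # Max PV must leave room for the new disks while both sets are present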

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
TMcB
Super Advisor

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

Hi

The VG should have enough capacity:
Max PV 16
Cur PV 2

I was really just wondering if I'm missing something crucial - I'm not too sure!
Unfortunately we don't have a test cluster - all our other nodes are standalone.

Thanks

Stephen Doud
Honored Contributor

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

If you have access to the ITRC knowledge database, check out this document:

HP-UX ServiceGuard Software - Migrating Package Data From One Array To Another
(ID: emr_na-c01228863-1)
TMcB
Super Advisor

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

Thanks Stephen,
that's exactly what I'm looking for.
TMcB
Super Advisor

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

Hi
After reading the above document, I'm still concerned about what to do for the 2nd node
(our 2 nodes see the disks under different device files).

The document says for step 8:
8. Update /etc/lvmtab on the inactive node
vgexport -pvs -m /etc/lvmconf/map.vg01 /dev/vg01
vgexport /dev/vg01
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgimport -vs -m /etc/lvmconf/map.vg01 /dev/vg01
Repeat for each VG

I'm confused as to what this is doing here - is it just overwriting the current config (by running 'mkdir /dev/vg01')?

Also, how will it know on the 2nd node which disks are in this volume group, seeing as the 2nd node sees different /dev/dsk device files for each disk?

Do I not need to run the vgextend / vgreduce and lvextend / lvreduce commands on the inactive node?

Thanks very much
likid0
Honored Contributor

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

The document says for step 8:
8. Update /etc/lvmtab on the inactive node
vgexport -pvs -m /etc/lvmconf/map.vg01 /dev/vg01
vgexport /dev/vg01
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgimport -vs -m /etc/lvmconf/map.vg01 /dev/vg01
Repeat for each VG
---------------------------------------
You have to update the lvmtab on the inactive node ONCE you have finished mirroring on the active node. Once you only have the DS disks in the VG after mirroring, you can go to the other node and do as it says.
-------------------------------------------

I'm confused as to what this is doing here - is it just overwriting the current config (by running 'mkdir /dev/vg01')?
-------------------------------------------
You launch this on the active node and it creates a map file containing the VGID:
vgexport -pvs -m /etc/lvmconf/map.vg01 /dev/vg01
Then you copy the map file to the other node and import (sketched below).
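Roughly like this (a sketch using this thread's vgsgm rather than the document's vg01; rcp assumes remsh/.rhosts access between the nodes - use whatever copy method you prefer):

vgexport -p -v -s -m /tmp/vgsgm.map /dev/vgsgm    # preview only: writes the map file, changes nothing on node 1
rcp /tmp/vgsgm.map node2:/tmp/vgsgm.map           # copy the map file to the other node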
-------------------------------------------


Also, how will it know on the 2nd node which disks are in this volume group, seeing as the 2nd node sees different /dev/dsk device files for each disk?
-------------------------------------
On the active node, when you do the export with -p (preview) and -s, it copies the VGID into the mapfile; that's how it knows which disks belong to that VG.
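The map file is readable, so you can see that ID if you want (assuming the /tmp/vgsgm.map name from the sketch above; with -s the VGID is written at the top of the file):

head -1 /tmp/vgsgm.map    # shows the VGID recorded by vgexport -s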
--------------------------------------

Do I not need to run the vgextend / vgreduce and lvextend / lvreduce commands on the inactive node?

---------------------------
Nope, no way - on the inactive node you only have to do the export/import once you have finished.
Windows?, no thanks
TMcB
Super Advisor

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

Thanks very much - but how does the inactive node know which disks are in vg01?

On node 1 our disks are for example,
c34t0d0
c34t0d2
c34t0d3
c34t0d4
c34t0d5

And on node 2 they are
c36t0d0
c36t0d2
c36t0d1
c36t0d3
c36t0d4

Using the instructions, I have told node 1 that the c34 disks are in vg01.
But where do I tell node 2 not to use the c34xxxx disks but the c36xxxx disks?

Also, why run 'mkdir /dev/vg01' on node 2? It already exists before the migration.

Thanks very much
Robin T. Slotten
Trusted Contributor
Solution

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

The second node will know which devices go with which VG via the serial number (the VGID) that is put into the map.

on Node1:
/usr/sbin/vgexport -v -p -s -m /tmp/${PKG}.map ${PKG}

On Node2:
/usr/sbin/vgimport -v -s -m /tmp/${PKG}.map /dev/${PKG}
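Spelled out for this thread's VG name, the node 2 side would look roughly like this (a sketch, not the exact procedure - note the group file's existing minor number before exporting and reuse it, and make sure the VG is not active on node 2):

ll /dev/vgsgm/group                            # note the existing minor number (e.g. 0x010000)
vgexport -v /dev/vgsgm                         # drops the stale /etc/lvmtab entry and the /dev/vgsgm files
mkdir /dev/vgsgm
mknod /dev/vgsgm/group c 64 0x010000           # recreate the group file with the minor number noted above
vgimport -v -s -m /tmp/vgsgm.map /dev/vgsgm    # -s scans the disks for the VGID held in the map file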

I have attached a script that I used on a 4-node cluster to export and import the VGs.
Rob...
IF you do it more than twice, write a script.
Stephen Doud
Honored Contributor

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

When you create a VG map file using the [vgexport] -s option, the VG's unique VGID is placed at the top of the map file. This becomes very useful when the vgimport is performed. When vgimport is used with the -s option, LVM will look for the VGID in the top line of the map file, then it will scan all disks in the backplane, and any that have a matching VGID are loaded into /etc/lvmtab for that VG.
The plus here - you won't need to know the device file name.
How will you know if all of the disks (or disk paths, when alternate links are involved) are loaded?
Count them.
If they are a multiple of the correct number of disks (vgdisplay on the node where the VG is active will show Cur PV), then you know they are present and accounted for.
So vgexport and vgimport actually update /etc/lvmtab with a current list of all the disks that the VG owns at the present time. This is a step that is often overlooked: an administrator grows a VG on one node but forgets to update /etc/lvmtab on the other node(s).
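In practice the count can be done along these lines (a sketch; on a node where the VG is not active, vgdisplay won't report it, so read /etc/lvmtab directly - it is a binary file, hence strings):

vgdisplay /dev/vgsgm | grep "Cur PV"    # on the active node: how many PVs the VG should have
strings /etc/lvmtab                     # on either node: the VG names and device files recorded in lvmtab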

TMcB
Super Advisor

Re: Looking for some advice on migrating SAN disks in a MCSG environment.

Thanks very much - all is now clear.