
Serviceguard migration

 
SOLVED
Brett Kipps
New Member


Hi, hoping someone can please help with the following question:
I currently have the following MC/Serviceguard configuration - 2 nodes running HP-UX 11.00 (SG A.11.09). Running "packageA" on the primary node (failover to the secondary). I wish to migrate the systems to new hardware running HP-UX 11.11 v1 (SG A.11.15), this being a new cluster. What is the correct procedure to shutdown the "packageA" and migrate the existing SAN disk to the new cluster? (We are not joining the 2 clusters together). I need to have a rollback plan also for the disk.
I am ok with the serviceguard configuration migration, just need to know the best way to migrate the disk (and back).
Many thanks for any assistance.
4 REPLIES
Patrick Wallek
Honored Contributor
Solution

Re: Serviceguard migration

To migrate the disk I would:

Shut down packageA on the existing cluster, then run 'vgexport -s -p -v -m vg??.map vg??'. If the cluster is down you may need to reactivate the VG first. Then copy the map file(s) to the new machines.

Remove the disks from the old machine and attach to the new machine.

Then on the new machine create your /dev/vg?? directory and the /dev/vg??/group file (via the mknod command). Then import the VG with 'vgimport -s -v -m vg??.map vg??'. When the import is done you should be able to activate the VG and mount your filesystems.

If you need to go back to the old machine, unmount the filesystems and deactivate the VG on the new machine. Detach the disks from the new machine and reattach them to the old machine, then reactivate the VG and remount the filesystems.
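The export/import sequence above can be sketched end to end like this. A sketch only: the VG name, minor number, node name, lvol, and mount point are all placeholders; verify your device files and pick an unused minor number on your own systems before running anything.

```shell
# --- Old cluster node: free the VG and generate the map file ---
cmhaltpkg packageA                             # halt the package (deactivates the VG)
vgexport -s -p -v -m /tmp/vg01.map /dev/vg01   # -p = preview: writes the map file
                                               # but leaves the VG entry in place,
                                               # which keeps rollback simple
                                               # (if it complains, reactivate first:
                                               #  vgchange -a y /dev/vg01)
rcp /tmp/vg01.map newnode:/tmp/                # copy the map to the new node

# --- New cluster node: recreate and import the VG ---
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000            # pick an unused minor number
vgimport -s -v -m /tmp/vg01.map /dev/vg01
vgchange -a y /dev/vg01                        # activate and verify the data
mount /dev/vg01/lvol1 /mnt/data                # lvol and mount point placeholders
```

For rollback, reverse it: unmount and `vgchange -a n` on the new node, reattach the disks, and reactivate on the old node.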

Good luck.
Steven E. Protter
Exalted Contributor

Re: Serviceguard migration

I think your plan is partially workable.

You may not be able to migrate the existing SAN disk to the new server without shutting down the whole cluster.

To get away with this, the disk must be used exclusively by the one package.

I'm also wondering why you would want to migrate the disk around. It ruins the backout plan and risks data corruption.

It's possibly better to schedule some downtime and copy the data to a new set of disks that you've already tested with the new cluster using, say, out-of-date data.

Then you shut down the cluster, copy over the data, and bring the new cluster up. If all goes well you minimize downtime and the risk of corrupted data.
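Once everything is quiesced, the copy step can be as simple as a tar pipe between the old and new mount points. A minimal sketch with placeholder paths; on HP-UX you might prefer fbackup/frecover or a VxFS-aware tool instead of tar:

```shell
#!/bin/sh
# Sketch: copy a filesystem tree from the old VG's mount point to the new
# one, preserving modes and ownership. Run only while the package is halted.
copy_tree() {
    src=$1
    dst=$2
    # tar pipe: pack the whole tree on the source side, unpack on the target
    (cd "$src" && tar cf - .) | (cd "$dst" && tar xpf -)
}

# Example with placeholder mount points:
# copy_tree /old_mnt /new_mnt
```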

Also, SG 11.16 is stable, so why use 11.15?

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Doug O'Leary
Honored Contributor

Re: Serviceguard migration

Hey;

If the two clusters are attached to the same SAN, your job is incredibly easy.

0. MAKE A FULL BACKUP OF ALL DATA!
1. Map the disks to the new cluster nodes
2. vgexport -p -s -m ${map} ${vg}
3. Shut the package down
4. Copy maps over to new cluster nodes
5. ioscan -fH ${hw}; insf -e -H ${hw}
6. Create vgs:
a. mkdir /dev/${vg}
b. mknod /dev/${vg}/group c 64 ${minor}
c. vgimport -s -m ${map} ${vg}
7. vgchange -a y ${vg}; verify all your data is there.

You'll have to create the package on the new nodes; however, you say you're comfortable with that, so...

HTH;

Doug

------
Senior UNIX Admin
O'Leary Computers Inc
linkedin: http://www.linkedin.com/dkoleary
Resume: http://www.olearycomputers.com/resume.html
Brett Kipps
New Member

Re: Serviceguard migration

I found these instructions on ITRC:

You need to first make the VG cluster-unaware on the node where it is currently running, then import it on the production node and make it cluster-aware there.

On the test node:

vgchange -a n /dev/vg_name
vgchange -c n /dev/vg_name
vgexport -p -s -m /tmp/vg_name.map /dev/vg_name

Now ftp this map file to the other servers; use binary mode for the ftp transfer.

Then, on the production node, create the VG directory and the group file:

mkdir /dev/vg_name
mknod /dev/vg_name/group c 64 0x0n0000

Now do a vgimport

vgimport -s -m /tmp/vg_name.map /dev/vg_name

Make sure the VG is deactivated:

vgchange -a n /dev/vg_name

Make it cluster aware on this node.

vgchange -c y /dev/vg_name

Activate it using the exclusive mode.

vgchange -a e /dev/vg_name

You should now be able to activate the VG and mount the filesystems for this VG. Proceed with the cluster package configuration as required.
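As a sanity check after the vgchange steps in those instructions, it's worth confirming what the node now sees. A sketch only; exact output varies by HP-UX release:

```shell
vgdisplay -v /dev/vg_name   # VG Status should show the VG available (exclusive)
strings /etc/lvmtab         # the VG and its PV device files should be listed
cmviewcl -v                 # cluster and package state, once configured
```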



Any thoughts on these suggestions?
I'm particularly looking for the correct "vgchange" procedure involved with moving the disk.
Many thanks.