
Failed SAN Migration using Powerpath Migration Enabler in MC/ServiceGuard Cluster

 
SOLVED
Dominic Espejo
Occasional Contributor


I need help figuring out a solution to a tricky problem where LUNs in shared volume groups are mapped to two different SAN arrays because of a failed SAN migration.  The host mappings on the old CX3 (source) are supposed to point to the same LUNs, but some now point to the VNX5300 (target).  They are not in sync, so we are stuck running our cluster on one node--still using the old SAN.

 

Background:

We were migrating data from an EMC CX3 SAN to a VNX5300 SAN using PowerPath Migration Enabler on a 2-node MC/ServiceGuard cluster.  Our hosts are two rx6600 Itanium servers running HP-UX 11i v3.

 

On one node (with the other node halted), we were able to sync the I/Os, which were being directed to both the CX and VNX arrays.  Before we could commit the I/O to the new SAN, one of the packages failed, which broke the sync for the LVs included in that package.  A reboot committed the mapping of some of the LVs to the VNX while the rest stayed on the CX.  The package came up on the other node, and that's where it has been running until we figure this out.  HP Support and EMC Support seem to be stuck on how to remove the mapping so we can revert to the original configuration and retry the migration using some other method (pvmove or mirroring).
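
For context, the Host Copy sequence we were following was roughly the standard Migration Enabler flow below (the handle and device names are placeholders, and flag spelling may differ slightly between PowerPath releases); the package failure hit us between the sync and the commit:

set up the migration and start the background copy
#powermig setup -src <source_dev> -tgt <target_dev> -techType hostcopy
#powermig syncStart -handle <handle>

poll the state until the copy is in sync, then commit to the target
#powermig query -handle <handle>
#powermig commit -handle <handle>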

 

I'm thinking I could do an lvextend mirror of the affected LUNs from the CX to the VNX, halt the package and bring it up on the other node, break the mirror, and redistribute the VG configuration to the other (working) node.  What steps do I need to take to make this work, if it's possible at all?
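
For reference, redistributing the VG configuration to the other node afterwards would be the usual ServiceGuard/LVM map-file exercise, roughly like this (the VG name, node name, paths, and group-file minor number are placeholders; the minor number must be unused on that host):

on the active node, write a shareable map file and copy it over
#vgexport -p -s -m /tmp/vgpkg.map /dev/vgpkg
#rcp /tmp/vgpkg.map <node2>:/tmp/vgpkg.map

on the other node, remove the stale definition and re-import from the map file
#vgexport /dev/vgpkg
#mkdir /dev/vgpkg
#mknod /dev/vgpkg/group c 64 0x010000
#vgimport -s -m /tmp/vgpkg.map /dev/vgpkg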

Dominic Espejo
Occasional Contributor
Solution

Re: Failed SAN Migration using Powerpath Migration Enabler in MC/ServiceGuard Cluster

Here was our solution, in case this ever happens to anyone else.

***EMC Host Copy is not compatible with ServiceGuard clusters (verified by EMC Support)***

***Cluster went down due to an I/O error caused by the cluster lock disk failing.  The cluster lock disk was part of the migration.***
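
Lesson learned: identify the cluster lock disk before starting so it can be kept out of the migration.  One way to check (assuming a lock disk rather than a quorum server; the cluster name is a placeholder):

dump the running cluster configuration and look for the lock disk entries
#cmgetconf -c <clustername> /tmp/cluster.ascii
#grep -i CLUSTER_LOCK /tmp/cluster.ascii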

 

Solution:

 

create a mirror of the affected logical volumes from <disk1> to <disk2>

#lvextend -A y -m 1 /dev/vg/lv /dev/dsk/<disk2>
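
verify the mirror is fully synced before breaking it (one way to check; lvdisplay output wording can vary by HP-UX release)

#lvdisplay -v /dev/vg/lv | grep -i stale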

 

delete <disk1> mirror after sync

#lvreduce -A y -m 0 /dev/vg/lv /dev/dsk/<disk1>

 

cleanup Host Copy migration (this destroys data on <disk1>)

#powermig undoredirect -handle <handle>

#powermig cleanup -handle <handle>

 

NOTE:  The "powermig undoredirect" command removes the drive mapping to the VNX and reverts I/O back to the CX.
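
To double-check, the handle state can be confirmed before the cleanup and the remaining paths verified afterwards (rough checks with the standard PowerPath tools; the handle is a placeholder):

confirm the migration state for the handle before cleanup
#powermig query -handle <handle>

verify the affected devices show only the expected (CX) paths
#powermt display dev=all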