Failed SAN Migration using Powerpath Migration Enabler
09-22-2014 10:27 AM - edited 09-22-2014 10:28 AM
I need help figuring out a solution to a tricky problem: after a failed SAN migration, the LUNs in our shared volume groups are mapped to two different SAN arrays. The host mappings are supposed to point to the same LUNs on the old CX3 (source), but some now point to the VNX5300 (target). The two copies are not in sync, so we are stuck running our cluster on one node, still using the old SAN.
Background:
We were migrating data from an EMC CX3 array to a VNX5300 array using PowerPath Migration Enabler on a 2-node MC/ServiceGuard cluster. Our hosts are two rx6600 Itanium servers running HP-UX 11i v3.
On one node (the other node halted), we were able to start the sync, with I/O being directed to both the CX and VNX arrays. Before we could commit the I/O to the new SAN, one of the packages failed, which broke the sync for the LVs included in that package. A reboot committed the mapping of some of the LVs to the VNX while others stayed on the CX. The package came up on the other node, and that's where it's been running until we figure this out. HP Support and EMC Support seem to be stuck on how to remove the mapping so we can revert to the original configuration and try the migration using some other method (pvmove or mirroring).
I'm thinking maybe I can use lvextend to mirror the affected LUNs from the CX to the VNX, halt the package and bring it up on the other node, break the mirror, and redistribute the VG configuration to the other (working) node. What steps do I need to take to make this work, if it's possible at all?
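For reference, roughly what I have in mind on HP-UX 11i v3 LVM with ServiceGuard is sketched below. The VG, LV, package, and disk device names are placeholders for illustration only (not our actual configuration), I haven't verified this sequence, and lvextend -m of course needs MirrorDisk/UX installed:
check which array each disk in the VG currently maps to
#vgdisplay -v /dev/vgdata | grep "PV Name"
#powermt display dev=all
mirror each affected LV onto the VNX LUN, then wait for the sync to finish
#lvextend -m 1 /dev/vgdata/lvol1 /dev/dsk/<vnx_disk>
#lvdisplay -v /dev/vgdata/lvol1 | grep -i stale     (no output = no stale extents)
halt the package, drop the CX copy, and refresh the VG map on the other node
#cmhaltpkg <package>
#lvreduce -m 0 /dev/vgdata/lvol1 /dev/dsk/<cx_disk>
#vgexport -p -s -m /tmp/vgdata.map /dev/vgdata
(copy /tmp/vgdata.map to the other node, vgexport the stale definition there, then vgimport)
#vgimport -s -m /tmp/vgdata.map /dev/vgdata
#cmrunpkg <package>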
10-29-2014 03:38 PM - edited 10-29-2014 05:55 PM
Here was our solution, just in case this ever happens to anyone.
***EMC Host Copy is not compatible with ServiceGuard clusters (verified by EMC Support)***
***Cluster went down due to an I/O error caused by the cluster lock disk failing. The cluster lock disk was part of the migration.***
Solution:
1. Create a mirror of the affected logical volumes from <disk1> to <disk2>:
#lvextend -A y -m 1 /dev/vg/lv /dev/dsk/<disk2>
2. After the mirror has synced, remove the <disk1> mirror copy:
#lvreduce -A y -m 0 /dev/vg/lv /dev/dsk/<disk1>
3. Clean up the Host Copy migration (this destroys the data on <disk1>):
#powermig undoredirect -handle <handle>
#powermig cleanup -handle <handle>
NOTE: "powermig undoredirect" removes the drive mapping to the VNX and reverts it back to the CX.
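A couple of sanity checks may be worth running before the destructive steps; the device and handle values below are placeholders, and the exact output varies by LVM and PowerPath version:
confirm the mirror is fully synced before the lvreduce (no stale extents should be reported)
#lvdisplay -v /dev/vg/lv | grep -i stale
confirm which array each device file actually maps to before and after the undoredirect
#powermt display dev=all
The <handle> is the migration handle that was assigned when the migration was originally set up with powermig.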