
Storage migration - LVM Changes?

 
SrinivasanK
Occasional Contributor

Storage migration - LVM Changes?

Hi experts,

In our production systems we are planning to migrate storage from one array (HDS) to another (HP P9500) using an array-based (block-by-block) replication tool (we don't yet know the name of the tool).

 

On the Linux OS (Red Hat), what changes do we need to make to LVM so that the new storage LUNs are used instead of the old ones? I understand vgexport and vgimport would be used, but I need to know the exact procedure, please.

 

Thanks,

Kalyan

4 REPLIES
Matti_Kurkela
Honored Contributor

Re: Storage migration - LVM Changes?

I hope you mean RHEL 4 or newer; the advice below applies to RHEL 4 and newer only.

 

On HP-UX, the vgexport and vgimport commands would be necessary in a storage migration like the one you're planning. But Linux LVM works very differently from HP-UX, and the Linux vgexport/vgimport commands are not so important.

 

Unlike HP-UX, Linux LVM won't persistently remember the disk devices of each VG. Instead, each time the system boots and each time the system sees new disks, it will automatically detect any visible LVM PVs, and if a complete set of PVs for a VG can be found, that VG is normally activated automatically at boot time.
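For example, after new disks appear, these standard LVM2 commands (a generic illustration, nothing array-specific) show the detection and activation happening:

  # Scan all visible block devices for LVM physical volumes
  pvscan

  # Rebuild the list of volume groups from the detected PVs
  vgscan

  # Activate all detected volume groups
  vgchange -ay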

 

So, as long as your disks are correctly replicated and you can ensure that the Linux systems only see either the old LUNs or the new LUNs (not both at the same time), you won't have to do anything special.

The procedure would be:

  1. shut down the Linux server(s)
  2. have the storage admins replicate the LUNs (or, if this was already done before step 1, have them run one final sync to ensure all modified data has been replicated)
  3. disconnect the Linux server(s) from the old LUNs
  4. connect them to the new LUNs
  5. start the Linux server(s) and verify that they can see all the new LUNs (a few sanity-check commands are sketched below). If all the LUNs are visible, Linux LVM will automatically detect the new PVs and activate the VGs.
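For step 5, a quick post-boot sanity check (generic commands; the output will of course depend on your configuration) might look like:

  # Confirm the multipath devices for the new LUNs are present
  multipath -ll

  # Confirm all PVs, VGs and LVs were detected and activated
  pvs
  vgs
  lvs

  # Confirm the filesystems mounted as expected
  df -h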

If your storage LUNs are completely controlled by LVM, it can be this easy. But if you also have traditional partitions on SAN disks or raw database LUNs, you may need to check for udev rules or /etc/multipath.conf aliases and modify them to match the WWIDs of the new LUNs.
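For illustration only (the WWID and alias below are made-up examples), an /etc/multipath.conf alias whose wwid line would have to be updated to the new LUN's WWID looks like this:

  multipaths {
      multipath {
          # Hypothetical WWID; replace with the new P9500 LUN's
          # WWID as reported by 'multipath -ll'
          wwid   36006016012345678000000000000abcd
          alias  oradata01
      }
  }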

 

Ancient systems with kernel 2.4.* or older (i.e. RHEL 3 or older) have an older LVM version, and very different (and primitive) ways to handle multipathing. With those, you may need to do more manual work, depending on exactly how the system has been configured.

MK
SrinivasanK
Occasional Contributor

Re: Storage migration - LVM Changes?

Thanks MK for the prompt reply.

RHEL version is 5 or above in our environment.

So you mean to say that for Linux LVM we need to shut down the servers, remove the old LUNs from the servers (zoning removal), present the new LUNs from the new storage (after verifying the sync is complete), and only then activate the new LUNs in the existing VGs. Please confirm.

As per your answer, array-based replication requires more downtime (for the storage team's zoning removal/addition work) than a host-based mirroring setup would. Is there any other way to reduce the downtime?

 

I have another question: what if the entire storage is managed by VxVM? Would VxVM deport and import suit my scenario?

 

Thanks,

Kalyan

 

SrinivasanK
Occasional Contributor

Re: Storage migration - LVM Changes?

Hi,
I would appreciate a response on this, please.
Thanks
Matti_Kurkela
Honored Contributor

Re: Storage migration - LVM Changes?

>So you mean to say that for Linux LVM we need to shut down the servers, remove the old LUNs from the servers (zoning removal), present the new LUNs from the new storage (after verifying the sync is complete), and only then activate the new LUNs in the existing VGs. Please confirm.

 

Yes. The new LUNs on the new array will have new WWIDs, so even the multipathing layer will recognize them as different from the old ones. But if both the new and old LUNs are presented to the server at the same time (with their contents in sync), LVM will report duplicate-PV errors.
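If you nevertheless end up with both copies visible at once, one common workaround (sketched here with hypothetical /dev/sdb and /dev/sdc paths; adjust to your environment) is to temporarily reject the old devices with an LVM filter:

  # In the devices section of /etc/lvm/lvm.conf: reject the old
  # array's disks (hypothetical paths) and accept everything else
  filter = [ "r|^/dev/sdb$|", "r|^/dev/sdc$|", "a|.*|" ]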

 

(In HP-UX, the LVM and alternate paths were integrated; in Linux, the multipathing layer is separate from LVM. The multipathing layer is expected to take control of all the multipathed disks and present just one device for each multipathed LUN for the LVM layer.)
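You can confirm that LVM really sits on top of the multipath devices rather than on the individual paths with something like:

  # PV names should be /dev/mapper/* (device-mapper multipath)
  # devices, not raw /dev/sd* paths
  pvs -o pv_name,vg_name

  # Cross-check which sd* paths belong to each multipath device
  multipath -ll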

 

The amount of downtime required will depend on your storage team: if you have multipathing, perhaps they can disable one of the switch ports connected to your server, then prepare the zoning for the new storage in advance for the port that is disabled?

 

Then, at the switchover time, the sequence would be:

  1. shut down the server(s)
  2. make sure the LUN synchronization is completed
  3. disable the switch port zoned to the old storage
  4. enable the switch port zoned to the new storage
  5. start up server(s) using the new storage (with only one HBA connected to the LUNs)

Then, while the system is running with the new LUNs, the storage team can change the zoning for the port that was disabled in step 3 and re-enable the port. You may have to use the commands listed in RedHat's Online Storage Reconfiguration Guide to have the HBA driver detect the extra paths to the LUNs once the port is activated again, but after that, the multipathing layer should detect and activate the added paths seamlessly.
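The rescan typically boils down to commands like these (host1 is an example; list /sys/class/fc_host to find your HBA numbers):

  # Issue a loop initialization (LIP) on the FC HBA so it
  # rediscovers its targets
  echo "1" > /sys/class/fc_host/host1/issue_lip

  # Rescan all channels/targets/LUNs on the matching SCSI host
  echo "- - -" > /sys/class/scsi_host/host1/scan

  # Verify that the multipath layer picked up the extra paths
  multipath -ll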

 

I have no experience on storage migrations with VxVM on Linux; someone else may answer that.

But my first guess would be that you will face a similar issue with it: if both the old and new disks are visible to the host simultaneously once they have been synced with the array-side block-copy tool, VxVM would also see two copies of each disk, and you would need to tell it which one of each pair to actually use.
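As a rough, untested sketch only (standard VxVM commands, but I have not verified this scenario; the disk group name datadg is hypothetical):

  # List disks as VxVM sees them; on some VxVM versions,
  # array-cloned disks are flagged (e.g. udid_mismatch)
  vxdisk list

  # Deport the disk group before the cutover, then import it
  # again once only the new LUNs are visible
  vxdg deport datadg
  vxdg import datadg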

MK