
vijay alur alur
Frequent Advisor

LVM configuration at DR server

Hello All,

 

I need your help with the situation below.

 

We have two Red Hat Enterprise Linux 6.3 servers, one at the primary site and another at the DR site, both using LVM2. The application data VGs on the production server are replicated to the DR server's disks in the SAN. So ideally the VGs on the DR server are not activated; we would activate them only during a DR event or during drills.

 

So I want to know how I can update the LVM configuration on the DR server. I regularly make changes on the production server, such as adding new LVs and mount points, adding PVs to a VG, etc. From the SAN point of view, whatever PVs are added or removed at the production site are also reflected at the DR site, but the corresponding LVM configuration changes made on the production server are not updated in the DR server's LVM configuration.

 

So the DR application data disks are updated with the production server's data. My question is: how can I make the DR server reflect the LVM changes made on the production server, so that when we have to bring up the DR server we do not run into an LVM configuration mismatch?

 

Regards,

VJ

Lead Engineer, IMS.
iGATE
Matti_Kurkela
Honored Contributor

Re: LVM configuration at DR server

First, scan the HBAs at the DR server to ensure all new disks presented to the DR server are visible:

# echo "- - -" > /sys/class/scsi_host/host<host_number>/scan

 (run the above command for each <host_number> that exists on your system.)
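If you prefer not to type the host numbers by hand, a small loop (just a sketch, assuming a bash root shell) covers every HBA in one go:

# for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done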

If your SAN is configured 100% correctly and your FC drivers are up to date, new disks may be detected automatically as they are presented, so scanning might not be needed... but it does not hurt.

 

If you use dm-multipath, then run "multipath" to ensure that multipath devices have been created to all applicable SAN disks (udev should do this automatically when a new /dev/sd* device appears, but doing it manually does not hurt...).
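For example (just a sketch; the resulting device names depend on your multipath.conf):

# multipath        # create/refresh multipath maps for any newly visible paths
# multipath -ll    # list the multipath devices and verify all paths are present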

 

Then, run "vgscan". This is probably the most important step. Udev will do this automatically too, but usually when the disks are presented to the host(s) they are uninitialized, so the automatic vgscan at detection time has probably ignored your new PVs as they were uninitialized at that point. After that, you've probably run pvcreate and vgcreate/vgextend on the production server: until you run "vgscan" on the DR server, it will be unaware of those changes.
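A minimal sequence on the DR server might therefore look like this (a sketch only; your VG names will differ):

# vgscan           # rescan all block devices for LVM metadata
# pvs              # the PVs created on the production server should now appear
# vgs && lvs       # likewise the new or extended VGs and LVs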

 

Now, your LVM configuration information is up to date.

(If you have made changes to /etc/lvm/lvm.conf on the production server, you should make sure to update it on the DR server too.)

 

Linux LVM is designed to allow hot-plugging, so running "vgscan" at any time is safe.

MK
vijay alur alur
Frequent Advisor

Re: LVM configuration at DR server

Hello Matti,

 

Thanks for the response.

 

So in my setup, the DR copies of the disks are already presented to the DR server, but the VGs belonging to the replicated disks at DR are exported.

 

So I think I am left with running only the vgscan command to make the LVM configuration on the DR server up to date.

 

/etc/lvm/lvm.conf looks good on both servers.

 

I have mostly worked with LVM on HP-UX platforms, so here I do not have to share the map file across the servers.

 

Regards,

VJ

Lead Engineer, IMS.
iGATE
Matti_Kurkela
Honored Contributor

Re: LVM configuration at DR server

> But VG's belonging to the replicated disks at DR are exported.

 

Be careful: exporting a volume group means a very different thing in Linux LVM than in HP-UX LVM.

 

In the Linux world, exporting a VG is only useful if you are going to move a VG to another system and don't know if it already has another VG with the same name as the VG you're moving. It marks the VG on disk as non-activatable, until the mark is removed with the vgimport command.
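As a quick illustration (vg_app is just a placeholder VG name):

# vgexport vg_app    # writes the "exported" flag into the VG metadata on the disks
# vgimport vg_app    # clears the flag so the VG can be activated again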

 

In HP-UX, vgexport only affects the /etc/lvmtab and the currently running LVM configuration: it does not touch the VG on disk at all. In Linux, this is not true.

 

If your disk replication between production and DR is two-way, and you have exported the replicated VGs on the DR server, the next time you boot your production server you might see the VGs marked as "exported" in production too. If the replication is one-way, you may find that the export mark you created on the DR server did not "stick".

 

Instead, you have the option of using the "volume_list" setting in /etc/lvm/lvm.conf on the DR server to protect the replicated VGs from accidental activation. You can either explicitly list the VGs you want normally activated on the DR server, or you can configure a tag that must be placed on any VG before it can be activated on the DR server.
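A sketch of what that could look like in /etc/lvm/lvm.conf on the DR server (the VG name and tag below are placeholders):

activation {
    # Only VGs/LVs matching an entry here may be activated on this host.
    # "vg_root" is allowed by name; "@drsite" allows anything tagged "drsite".
    volume_list = [ "vg_root", "@drsite" ]
}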

 

If you use an explicit list of VGs, you must edit (or comment out) the list before activating the DR environment in a disaster situation.

If you use tags, you'll need to use "vgchange --addtag <your tag> <VG name>" before the VG can be activated in the DR environment. After the disaster situation is over, "vgchange --deltag <your tag> <VG name>" can be used to restore protection.

In both cases, you should make sure the necessary steps are included in the DR site activation plan.
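With the tag approach, for example, bringing up a replicated VG during a drill could be as simple as (vg_app, drsite and the mount point are placeholders):

# vgchange --addtag drsite vg_app     # allow activation on the DR host
# vgchange -ay vg_app                 # activate all LVs in the VG
# mount /dev/vg_app/lv_data /appdata  # mount the application file system

Afterwards, unmounting, "vgchange -an vg_app" and "vgchange --deltag drsite vg_app" restore the protection.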

 

> So i think i am left with running only vgscan command to make the LVM configuration at the DR server up-to-date.

> ...so here i do not have to share the map file across the servers.

 

Exactly.

MK
vijay alur alur
Frequent Advisor

Re: LVM configuration at DR server

Hello Matti,

 

We did a successful DR test. I noticed that after the storage did a sync, the VGs were activated again. I had to manually activate the LVs and mount the file systems on the DR server.
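For reference, the manual steps were essentially the following (the VG/LV names and mount point are placeholders for our actual ones):

# lvchange -ay vg_app/lv_data           # activate the logical volume
# mount /dev/vg_app/lv_data /appdata    # mount the application file system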

 

So I guess the syncing of the replicated disks also activates the replicated VGs at DR. I did not have to run vgscan either.

 

We have one-way replication, i.e. from the production servers to the DR server only.

 

I am not sure how two-way DR replication works.

 

I noticed that after the sync the VGs get activated by default. I think I will make use of volume_list in /etc/lvm/lvm.conf or the tag option to keep the VGs from getting accidentally activated. Thanks for introducing me to this option.

 

 

 

Thanks!

VJ

Lead Engineer, IMS.
iGATE