srinivasT
Occasional Contributor

Shared Storage LVM Problem

Hi Guys,

We have two servers, node1 and node2, and both are presented with the same storage disks.

On node1 there is one VG (vgdata_node1); it was presented and working fine. We did a vgexport on node1.

On node2 we imported the VG (vgdata_node1), and the import succeeded.

Here is the problem we are facing: some of the disks on node2 need to be presented to ASM (Oracle), but those disks currently belong to the VG (vgdata_node1).

I tried to remove the VG from node2, but the same thing happens on the other server: if I use vgrename on node2, the same rename shows up on node1, and if I remove the VG on node1, the same VG gets removed on node2. I am shocked why this is happening.

Could anyone please guide me on how to remove the VG on node2 without affecting node1?

Matti_Kurkela
Honored Contributor

Re: Shared Storage LVM Problem

You've now learned that Linux LVM does not work the same as HP-UX LVM, although many of the commands are named the same. In particular, the VG detection and VG import/export logic are very different between HP-UX and Linux.

 

In HP-UX, the vgexport and vgimport commands only modify the active LVM configuration on the local node: they won't touch the actual VG at all. In Linux, this is not true: when you export a Linux VG, the VG on the disk is flagged as "this VG is currently in exported state". Since the flag is stored on the disk, all the nodes the disk is presented to will see it. As a result, the Linux vgexport operation cannot be used for the purpose of restricting a particular node from accessing the LVs... because such restriction will always affect all nodes.
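You can see this flag directly in the VG attributes. A minimal illustration (the exact attribute strings can vary slightly between LVM versions):

# On node1: export the VG; the "exported" flag is written into the on-disk metadata
vgexport vgdata_node1

# On EITHER node, the flag shows up as the "x" bit in the VG attributes:
vgs -o vg_name,vg_attr vgdata_node1
#  VG            Attr
#  vgdata_node1  wzx-n-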

 

Unlike HP-UX, Linux also auto-detects new VGs without any input from the system administrator. When the system is booting, as long as there are no problems detected with the VGs, all the auto-detected VGs will also usually be activated automatically (the startup scripts will run commands similar to "vgscan", then "vgchange -ay" very early in the boot sequence.) There is no persistent LVM configuration storage on the Linux system disk like /etc/lvmtab on HP-UX; each time "vgscan" is run, a new LVM configuration is built up from scratch and then non-destructively merged with the active configuration in kernel memory (i.e. any VGs or LVs that are currently in use will *not* be removed from the active LVM configuration nor changed, unless you explicitly force that to happen).
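In practice, the boot-time activation amounts to something like the following (illustrative only; the exact scripts differ between distributions):

# Rebuild the LVM view of all visible PVs/VGs from the disks themselves
vgscan

# Activate every auto-detected VG that has no detected problems
vgchange -ay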

 

This also means that vgexport/vgimport is not actually required when presenting a VG to a different node in Linux. In fact, the *only* situation where Linux vgexport is useful is when you are moving disks from one server to another and are uncertain whether a VG with the same name already exists on the destination server. In that case, having the moved VG in exported state avoids the VG naming conflict: the system will simply ignore the exported VG until you use vgrename to resolve the conflict and then use vgimport to remove the export flag.
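As a sketch, the conflict-resolution workflow on the destination server would look roughly like this (the UUID below is a placeholder; the real one can be read with "vgs -o vg_name,vg_uuid", and "vgdata_old" is just a hypothetical new name):

# Rename the exported VG by UUID, since two VGs temporarily share one name:
vgrename <UUID-of-exported-VG> vgdata_old

# Clear the export flag, then activate the renamed VG:
vgimport vgdata_old
vgchange -ay vgdata_old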

 

If the two nodes are running clvmd (i.e. they are members of a RedHat Cluster or similar), then all LVM operations on shared VGs are *automatically* executed in a synchronized fashion on both nodes, unless you explicitly request the operation to be done on local node only.

 

To deactivate a clustered VG "vgdata_node1" on Node2, run:

vgchange -aln vgdata_node1
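To confirm the deactivation on this node, you can check the LV activation bit (a quick check; the attribute layout may vary slightly by LVM version):

lvs -o lv_name,lv_attr vgdata_node1
# an "a" in the fifth attribute character means active; "-" means deactivated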

However, this will not be persistent: if someone runs "vgchange -ay" or reboots the node, the VG will be activated again automatically.

 

To prevent the VG from being activated on Node2, you can edit /etc/lvm/lvm.conf on Node2.

In the /etc/lvm/lvm.conf file, in the "activation {" section, you will find a commented-out setting named "volume_list".

If you uncomment it and use it to list all the VGs you want activated, and leave the vgdata_node1 VG out of the list, then there will be no way to activate that VG on Node2 until you edit /etc/lvm/lvm.conf again. This will stop both automatic activation at boot time, *and* manual activation with the vgchange command.
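For example, the relevant part of /etc/lvm/lvm.conf on Node2 might end up looking like this (a sketch; "vg00" is a hypothetical stand-in for the VGs that node actually needs, such as its root VG):

activation {
    # Only VGs named here can be activated on this node;
    # vgdata_node1 is deliberately left out of the list.
    volume_list = [ "vg00" ]
}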

 

(It will also be very confusing to someone who is not aware of the change in /etc/lvm/lvm.conf. Guess why I know this :-)

 

MK