Serviceguard
Creation of Logical Volume fails. Lv inactive.

 
SOLVED
estonolose
Advisor

Creation of Logical Volume fails. Lv inactive.

I'm having problems creating a logical volume. When I create it, its status is inactive. I have tried to activate it with vgchange -a y and with lvchange -a y, but it doesn't work.

These are the commands I execute (pvcreate and vgextend work fine):


Code:

pvcreate /dev/emcpowerh1
vgextend VolGroup00 /dev/emcpowerh1
lvcreate VolGroup00 -n LogVol10 -L 2G
Failed to activate new LV.

This server is part of a two-node ServiceGuard cluster. Could this be the problem? Do I have to change something in the cluster?


The OS is Red Hat Enterprise Linux 4.

Thanks
26 REPLIES
estonolose
Advisor

Re: Creation of Logical Volume fails. Lv inactive.

This logical volume is to increase the swap on this node. I don't want to share this volume between the two nodes.
smatador
Honored Contributor

Re: Creation of Logical Volume fails. Lv inactive.

Hi,
check how to activate a VG with tags on Linux with Serviceguard:
http://docs.hp.com/en/B9903-90050/ch05s06.html

estonolose
Advisor

Re: Creation of Logical Volume fails. Lv inactive.

I have seen this link, and I have the parameter tags { hosttags = 1 } in the /etc/lvm/lvm.conf file.

If I have understood correctly, this parameter is fundamental for the cluster and I can't remove it. So, how can I add more disk space to increase the swap?

Thanks.
smatador
Honored Contributor

Re: Creation of Logical Volume fails. Lv inactive.

Could you post the output of vgdisplay -v VolGroup00?
smatador
Honored Contributor

Re: Creation of Logical Volume fails. Lv inactive.

Hi,
Have you tried the swap-extension process instead?
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/s1-swap-adding.html

The tags { hosttags = 1 } statement in the /etc/lvm/lvm.conf file is a Volume Group (VG) activation protection.
This mechanism is required in SGLX to limit access to LVM Logical Volumes to only one node in the cluster.
Perhaps you could try to disable it just to do your lvcreate, and then re-enable it right after.
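That temporary-disable approach might look like the following sketch (risky on a live cluster node, and later replies in this thread recommend the tag-based approach instead; the sed pattern assumes the tags line looks exactly as quoted above):

```shell
# Sketch only -- back up lvm.conf before touching it.
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
# Temporarily comment out the "tags { hosttags = 1 }" line
sed -i 's/^\( *tags *{ *hosttags *= *1 *}\)/# \1/' /etc/lvm/lvm.conf
lvcreate VolGroup00 -n LogVol10 -L 2G
# Restore the original configuration immediately afterwards
mv /etc/lvm/lvm.conf.bak /etc/lvm/lvm.conf
```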
Matti_Kurkela
Honored Contributor

Re: Creation of Logical Volume fails. Lv inactive.

HP's instructions for enabling the VG Activation Protection with ServiceGuard installation seem to assume that LVM is used *only* for cluster VGs.

As you have noticed, it causes extra complications when you attempt to create a VG for local use on one of the nodes.

Please see this older thread:

http://forums.itrc.hp.com/service/forums/questionanswer.do?threadId=1129836

If you are using LVM on your system disk (indicated by the name VolGroup00, which is the default VG name used by the RedHat installer), the situation is extra tricky.

When the system is booted, the VolGroup00 is activated using the mini-root environment stored in the initrd file. At that point, the system does not yet know it should require a host tag on the VGs, so the activation of VolGroup00 is successful.

After that, the root filesystem is mounted. This includes /etc/lvm/lvm.conf, so after this point the system knows that host tags are required, and will not allow the activation of any more LVs unless the VG has the required tag. This is what prevents you from adding any new LVs to VolGroup00.

MK
Serviceguard for Linux
Honored Contributor

Re: Creation of Logical Volume fails. Lv inactive.

Also remember: when you enable host tags, you need to set a tag to activate a VG:

vgchange --addtag $(uname -n) vgpkgA
vgs -o +tags vgpkgA
vgchange -a y vgpkgA

Similarly, you need to delete the tag after deactivating the VG.
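The cleanup half isn't shown above; under the same assumptions (a VG named vgpkgA), it would be roughly:

```shell
# Deactivate the VG first, then remove this node's tag
vgchange -a n vgpkgA
vgchange --deltag $(uname -n) vgpkgA
vgs -o +tags vgpkgA   # verify the tag is gone
```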
estonolose
Advisor

Re: Creation of Logical Volume fails. Lv inactive.

I have changed the name of the VG. It is now VolGroup01, but the problem is the same.


vgdisplay -v VolGroup01
Using volume group(s) on command line
Finding volume group "VolGroup01"
--- Volume group ---
VG Name VolGroup01
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 30.00 GB
PE Size 4.00 MB
Total PE 7679
Alloc PE / Size 512 / 2.00 GB
Free PE / Size 7167 / 28.00 GB
VG UUID jeV3Z8-KAWV-3I1O-2ek2-lhca-ANjQ-M3lW3q

--- Logical volume ---
LV Name /dev/VolGroup01/LogVol10
VG Name VolGroup01
LV UUID WQQGex-ptYM-0Cp2-dkZj-VnUz-fvuN-X7Mrky
LV Write Access read/write
LV Status NOT available
LV Size 2.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors 0

--- Physical volumes ---
PV Name /dev/emcpowerh1
PV UUID 3JLFnn-OLm6-EJFr-jyNy-YFfw-nXS6-bN1oin
PV Status allocatable
Total PE / Free PE 7679 / 7167
Ivan Ferreira
Honored Contributor

Re: Creation of Logical Volume fails. Lv inactive.

Recently, I had the same problem, but with a Red Hat Cluster.

I was able to create and activate the volume by running:

lvmconf --disable-cluster
lvcreate...
lvmconf --enable-cluster
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Serviceguard for Linux
Honored Contributor

Re: Creation of Logical Volume fails. Lv inactive.

The "enable-cluster" option won't work (or do anything) with SGLX.
Brem Belguebli
Regular Advisor

Re: Creation of Logical Volume fails. Lv inactive.

Hi,

If host tags are activated, you need to create an additional file /etc/lvm/lvm_`hostname`.conf in which you must enter something like the following:

activation { volume_list = ["VolGroup00", "@yourhostname" ] }

Replace yourhostname with your real hostname ;-)
That will let LVM activate VolGroup00 and all the VGs under cluster control that are implicitly tagged with the hostname of the node on which they are supposed to be activated.

Of course, if you need to add local VGs that will not be part of the cluster, say VolGroup01, you'll need to add them to the same file with the same syntax as VolGroup00 (without the @).
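Creating that per-host file could look like this sketch ("nodo2" stands in for the real hostname, and VolGroup00 is the default system VG name):

```shell
# Write /etc/lvm/lvm_<hostname>.conf allowing the system VG plus host-tagged VGs.
# Note: "@nodo2" must be the literal hostname, so it is written explicitly here.
cat > /etc/lvm/lvm_$(hostname).conf <<'EOF'
activation { volume_list = [ "VolGroup00", "@nodo2" ] }
EOF
```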
Serviceguard for Linux
Honored Contributor

Re: Creation of Logical Volume fails. Lv inactive.

If you have not done so already, please review the installation white paper: http://docs.hp.com/en/14117/sglx.deployment.guide.pdf. It has an example of the use of host tags.
estonolose
Advisor

Re: Creation of Logical Volume fails. Lv inactive.

The file /etc/lvm/lvm_`hostname`.conf already existed. This is the file:

nodo2:/etc/lvm> more lvm_nodo2.conf
activation { volume_list=["@nodo2"] }

This node has an Oracle database running. Can I add VolGroup01 with the cluster (and database) running?

Is it necessary to add VolGroup00 (the system volume group) if I don't want to create logical volumes in it?

This is the line to add:

activation { volume_list = ["VolGroup01", "@nodo2" ] }

Is that OK?


Thanks.
Serviceguard for Linux
Honored Contributor

Re: Creation of Logical Volume fails. Lv inactive.

If you are using exclusive activation with SGLX you should NOT be activating volumes yourself except for development purposes. SGLX does the activation and deactivation when packages are started and stopped.

Also, what version of RHEL 4 are you using? I cannot remember when host tags were added to RHEL 4 and when they became fully supported.
Brem Belguebli
Regular Advisor

Re: Creation of Logical Volume fails. Lv inactive.

Hi,
As long as a VG like VolGroup00 is not a clustered VG (VolGroup00 looks like a default-installation system VG), you can add it to the activation list.

Be careful never to add a clustered VG to the activation list, as clustered VGs are supposed to be managed through tags by ServiceGuard, i.e. through the @`hostname` directive.

Tags were added in 4u4, if I remember well.
estonolose
Advisor

Re: Creation of Logical Volume fails. Lv inactive.


The OS is Red Hat Enterprise Linux AS release 4 Update 4.


The VG that I want to activate is on a new 30 GB disk that I have configured with LVM.

pvcreate /dev/emcpowerh1
vgextend VolGroup01 /dev/emcpowerh1
lvcreate VolGroup01 -n LogVol10 -L 2G

If I understand what you say correctly, I can add the line

activation { volume_list = ["VolGroup01", "@nodo2" ] }

to the file /etc/lvm/lvm_`hostname`.conf without problems (the disk is new).

Can I change this file with the cluster running, or do I have to stop it? (This is a production system with an Oracle database running, and I have to be very careful.)

Thanks.

Brem Belguebli
Regular Advisor
Solution

Re: Creation of Logical Volume fails. Lv inactive.

Hi,

The question is :

Is VolGroup01 part of any package hosted by the cluster ?
If so, you must not add it this way to lvm_`hostname`.conf.

You have to add it to the package configuration (control file for a legacy package, ASCII file for a modular one) and let Serviceguard manage it.

If the package to which it belongs is already running and you cannot wait, you must manually tag it, activate it, and mount it (in case you want to use it as a filesystem):
# vgchange --addtag `hostname` VolGroup01
# vgchange -a y VolGroup01
# mkfs.ext3 /dev/VolGroup01/LogVol10
# mount /dev/VolGroup01/LogVol10 /yourmountpoint

In case the VG is not part of any package, and thus not managed by Serviceguard (local use only), you can add it as mentioned in my previous post: activation { volume_list = ["VolGroup01", "@nodo2" ] }

Is it clear?
estonolose
Advisor

Re: Creation of Logical Volume fails. Lv inactive.


Very clear. The VG is not part of any package. This VG will be part of the node's swap; I want to add more swap to the existing swap.

Only one important question, since I can't make the change without this confirmation: can I change this file with the cluster running, or do I have to stop it? (This is a production system with an Oracle database running, and I have to be very careful.)

Thanks

Brem Belguebli
Regular Advisor

Re: Creation of Logical Volume fails. Lv inactive.

Hi,

I did it live on servers with a lot of newly added memory, on which I had to add a swap device.

The only difference is that the new disk wasn't a SAN LUN but a VG on local disks.

Yes, you can do it live: modify lvm_`hostname`.conf, and then vgchange -a y your_volume_group.
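Put together, the live swap-extension procedure discussed in this thread might look like the following sketch (VG, LV, and device names are taken from earlier posts; run as root, and double-check every device name on a production system):

```shell
# Assumes /dev/emcpowerh1 is already a PV in VolGroup01 and that
# lvm_`hostname`.conf already lists "VolGroup01" in volume_list.
lvcreate VolGroup01 -n LogVol10 -L 2G       # create the new LV
vgchange -a y VolGroup01                    # activate the VG locally
mkswap /dev/VolGroup01/LogVol10             # format the LV as swap
swapon /dev/VolGroup01/LogVol10             # enable it immediately
swapon -s                                   # verify the new swap device
# For persistence across reboots, add to /etc/fstab:
# /dev/VolGroup01/LogVol10  swap  swap  defaults  0 0
```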

Brem Belguebli
Regular Advisor

Re: Creation of Logical Volume fails. Lv inactive.

HI,

To make you more confident about the operation, you could try it on a development/test server, if you have one.

We tend to always have dev/test servers with almost the same setup as the prod ones, to be able to troubleshoot or run upgrades before prod.
estonolose
Advisor

Re: Creation of Logical Volume fails. Lv inactive.

Hi,

I have added VolGroup01 and I can activate the LV. The problem is that all the LVs that I have created on nodo2 appear on node1 (inactive). Can I hide the new logical volumes from node1? I want them to be local to the node.

Thanks

estonolose
Advisor

Re: Creation of Logical Volume fails. Lv inactive.

I have tried to activate VolGroup01 with vgchange -aly, but the logical volumes are visible on both nodes.

I want to add swap on both nodes, so I would have to create VolGroup01 on node1 and VolGroup02 on node2, with LogVol101 on node1 and LogVol201 on node2.

Is that the only option? Or can I hide the LVs of node1 from node2? I don't want an lvscan on node1 to show the logical volumes of node2.

Thanks.
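One possible way to hide another node's local PVs, sketched here as an untested assumption, is LVM's device filter in /etc/lvm/lvm.conf (the emcpower device name below is only an example):

```shell
# In /etc/lvm/lvm.conf on node1: reject the PV backing node2's local VG,
# accept everything else. Order matters; the first matching pattern wins.
devices {
    filter = [ "r|^/dev/emcpowerh1$|", "a|.*|" ]
}
```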


estonolose
Advisor

Re: Creation of Logical Volume fails. Lv inactive.

Finally, I have created the swap on node2. Now I'm trying to do the same on node1.

The first problem: I created one partition on the new disk, and when I do ls /dev/emc* on node1, I have emcpoweri1. If I do the same on node2, I don't have the /dev/emcpoweri1 device, but I do have emcpoweri.

(node1)
root@nodo1:/root> ls /dev/emc*
...
/dev/emcpowerh /dev/emcpoweri
/dev/emcpowerh1 /dev/emcpoweri1


(node2)
root@nodo2:/root> ls /dev/emc*
...
/dev/emcpowerh /dev/emcpoweri
/dev/emcpowerh1


The partition /dev/emcpowerh1 exists on both node1 and node2.

Why can't I see the /dev/emcpoweri1 device on node2, while I can see /dev/emcpowerh1 on node1?


Regards,