
cmapplyconf segfault

Brem Belguebli
Regular Advisor

cmapplyconf segfault

I'm trying to set up a single-node cluster for testing with the following configuration (I'll add a second node in the future):

BL460 RHEL4u6 x86_64
2 Qlogic HBA
2 Luns from 1 EVA8100
SGLX 11.18 demo

Multipath is managed by DM-MP.
The 2 Luns are mirrored with MD, I have created a vg on top of the MD array: vg01

the cluster starts well :

testlinux up

cmxi1150 up running

hdc0h00 up running

PRIMARY up eth0

My vg01 is not active (vgchange -a n vg01) and it is clustered (vgchange -c y vg01)

When I add the VG to the cluster config file (VOLUME_GROUP vg01) and then run cmapplyconf -C myconf.ascii, cmapplyconf segfaults with the following message:

Volume Group /dev/vg01 is not found on any nodes in the cluster

Am I missing something ?

Below is the output of a few commands that could help:

1) Multipath
multipath -ll -v2
[size=5 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [prio=8][active]
\_ 1:0:4:4 sdab 65:176 [active][ready]
\_ 1:0:5:4 sdaf 65:240 [active][ready]
\_ 0:0:2:4 sdd 8:48 [active][ready]
\_ 0:0:3:4 sdh 8:112 [active][ready]
\_ 0:0:4:4 sdl 8:176 [active][ready]
\_ 0:0:5:4 sdp 8:240 [active][ready]
\_ 1:0:2:4 sdt 65:48 [active][ready]
\_ 1:0:3:4 sdx 65:112 [active][ready]

[size=5 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [prio=8][active]
\_ 1:0:4:3 sdaa 65:160 [active][ready]
\_ 1:0:5:3 sdae 65:224 [active][ready]
\_ 0:0:2:3 sdc 8:32 [active][ready]
\_ 0:0:3:3 sdg 8:96 [active][ready]
\_ 0:0:4:3 sdk 8:160 [active][ready]
\_ 0:0:5:3 sdo 8:224 [active][ready]
\_ 1:0:2:3 sds 65:32 [active][ready]
\_ 1:0:3:3 sdw 65:96 [active][ready]


2) mdadm
mdadm -D /dev/md1
Version : 00.90.01
Creation Time : Thu Aug 14 12:24:42 2008
Raid Level : raid1
Array Size : 5242816 (5.00 GiB 5.37 GB)
Device Size : 5242816 (5.00 GiB 5.37 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Fri Aug 22 14:26:12 2008
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : e2c1ec78:adead4fd:1d3f013f:90cb3b5d
Events : 0.30

Number Major Minor RaidDevice State
0 253 1 0 active sync /dev/dm-1
1 253 0 1 active sync /dev/dm-0

3) vgdisplay
vgdisplay (before vgchange -a n vg01 and vgchange -c y vg01)

vgdisplay -v vg01
Using volume group(s) on command line
Finding volume group "vg01"
--- Volume group ---
VG Name vg01
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 19
VG Access read/write
VG Status resizable
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 5.00 GB
PE Size 4.00 MB
Total PE 1279
Alloc PE / Size 1024 / 4.00 GB
Free PE / Size 255 / 1020.00 MB
VG UUID 8gvNmU-lPLb-0nHH-VlWh-xjKT-NzuZ-7GxHyi

--- Logical volume ---
LV Name /dev/vg01/lvol0
VG Name vg01
LV UUID Z6yUqC-pYQP-PNex-Rd6D-GHkz-zT2f-kvtEle
LV Write Access read/write
LV Status NOT available
LV Size 4.00 GB
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors 0

--- Physical volumes ---
PV Name /dev/md1
PV UUID Y1ilXH-ur3g-ykiV-gNBC-gx4y-VnM2-601nfm
PV Status allocatable
Total PE / Free PE 1279 / 255


Honored Contributor

Re: cmapplyconf segfault

You said:
"My vg01 is not active (vgchange -a n vg01) and it is clustered (vgchange -c y vg01)"

If you're familiar with ServiceGuard on HP-UX, this can be a bit of a trap: the "vgchange -c y" command is *not* used with ServiceGuard on Linux. It is for RHEL AS native clustering support only.

If you've set a VG to cluster mode without first setting up the _RedHat cluster_ locking daemons, you'll need these special instructions for undoing your mistake:

"In order to fix this, edit the /etc/lvm/lvm.conf file and set

locking_type = 0

Then run the command

vgchange -cn VolumeGroupName.

After this, change the locking_type in the /etc/lvm/lvm.conf back to the original value."

Because ServiceGuard/Linux must work on many Linux distributions, and some of them do not have the native cluster support that RHEL does, ServiceGuard does its own VG locking in a different way.
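Put together, the quoted undo procedure looks like this (a sketch; the sed pattern assumes the stock lvm.conf where the line reads "locking_type = 1", and vg01 is the VG from this thread):

```shell
# Save the current LVM config, then disable locking (locking_type = 0)
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.orig
sed -i 's/^\([[:space:]]*locking_type[[:space:]]*=\).*/\1 0/' /etc/lvm/lvm.conf

# With locking disabled, clear the clustered flag on the VG
vgchange -cn vg01

# Put the original locking configuration back
mv /etc/lvm/lvm.conf.orig /etc/lvm/lvm.conf
```

Keeping the backup copy means the "original value" is restored exactly, whatever it was.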

Brem Belguebli
Regular Advisor

Re: cmapplyconf segfault


Thanks for your reply.

Indeed, it's something I'm used to doing on HP-UX.

I'll try it and repost the result.

Brem Belguebli
Regular Advisor

Re: cmapplyconf segfault


I may be trying to go the wrong way.

Do the VGs belonging to the packages necessarily need to be defined in the global cluster conf file (in my case cmclconfig.ascii)?

This is something we are used to doing on our HP-UX MCSG setup.

I've tried without running vgchange -c y vg01; it still segfaults.

In any case, the VGs need to be inactive (vgchange -a n), but do they need to be put in exclusive mode (vgchange -a e vg01)?

Honored Contributor

Re: cmapplyconf segfault

No, you should not be using "vgchange -a e" either.

(In HP-UX, "vgchange -a e" is a cluster-aware form of "vgchange -a y". If either is used, the volume group will be _activated_. When using the "-a e" form, the cluster locking system first confirms that no other cluster node is using that VG; if that confirmation cannot be obtained, the VG remains inactive on the node that runs the command.)

Although ServiceGuard for Linux has the same cm* command syntax as the HP-UX version, the Linux LVM is very different from HP-UX LVM and there are some significant changes in SG/Linux because of this.

Please refer to the "Managing ServiceGuard for Linux" manual when building the package LVM configuration. Don't just assume that anything you learned with HP-UX LVM works as-is.

Off the top of my head, the main differences are:
- the cluster VG locking is performed as a separate step using "vgchange --addtag" and "vgchange --deltag", instead of automatically with "vgchange -a e"

- when replicating the cluster VG configuration from one node to another, there is no need to use vgexport/vgimport and move map files around: just using "vgscan" on the node that needs to pick up the new VG configuration is enough. (Note: while "vgscan" may be a scary operation on HP-UX in some situations, it's very safe in Linux.)
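As a sketch of that tag-based locking step (vg01 is the VG from this thread; using the node's hostname as the tag value is an assumption here, picked as a natural way to record ownership):

```shell
# Activation side: claim the VG for this node, then activate it
vgchange --addtag $(uname -n) vg01
vgchange -a y vg01

# Deactivation side: deactivate, then release the claim
vgchange -a n vg01
vgchange --deltag $(uname -n) vg01
```

The tag itself does nothing at the LVM level; it is just metadata that ServiceGuard checks before activating.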

Brem Belguebli
Regular Advisor

Re: cmapplyconf segfault

Thanks again,

Ok, understood.

The tag thing doesn't really lock the VG; it's only ServiceGuard that won't activate it if a tag from another node is on it.

But nothing prevents you from manually activating the tagged VG on another node.
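So before any manual activation, it's worth at least checking who currently holds the tag (assuming an LVM2 vgs that supports the vg_tags output field):

```shell
# Show which node (if any) has tagged vg01
vgs -o vg_name,vg_tags vg01
```

If the output shows another node's tag, activating the VG here risks concurrent access.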


Brem Belguebli
Regular Advisor

Re: cmapplyconf segfault

My remaining question is about the need to define the volume groups in the global cluster conf file.

Is it necessary ?

I could make it work without doing so.
Brem Belguebli
Regular Advisor

Re: cmapplyconf segfault


vgscan is not even necessary on the other nodes, at least when combined with XDC.
I haven't tested without XDC.

Declaring the volume groups (VOLUME_GROUP directive) in the global cluster ASCII file doesn't seem to be necessary either, as it is with SG/HP-UX.

I guess it is due to the fact that the VGs are not cluster-aware (LVM2 is not).