
cmcheckconf fails

 
Daniel Fourie
Frequent Advisor

cmcheckconf fails

Hi

I have a rather strange problem. I want to add a new node to the currently running cluster but am unable to do so. cmcheckconf exits with the following error:

cmcheckconf: Unable to reconcile configuration file nnm_nnoc_cluster.ascii
with discovered configuration information.

Below is the output from cmcheckconf.

noc-fail[427] /etc/cmcluster # cmcheckconf -k -v -C nnm_nnoc_cluster.ascii
Checking cluster file: nnm_nnoc_cluster.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 4 devices on node noc-tan0
Found 10 devices on node noc-fail
Found 4 devices on node noc-sin0
Found 4 devices on node noc-cos0
Found 4 devices on node noc-wifi
Found 2 devices on node noc-ipn0
Found 0 devices on node noc-tin0
Analysis of 28 devices should take approximately 3 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node noc-tan0
Found 5 volume groups on node noc-fail
Found 2 volume groups on node noc-sin0
Found 2 volume groups on node noc-cos0
Found 2 volume groups on node noc-wifi
Found 1 volume groups on node noc-ipn0
Found 0 volume groups on node noc-tin0
Analysis of 14 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-tan0 noc-fail
The other VG is configured on nodes:
noc-sin0
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-tan0 noc-fail
The other VG is configured on nodes:
noc-cos0
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-tan0 noc-fail
The other VG is configured on nodes:
noc-wifi
Found two volume groups with the same name /dev/vg02 but different ids:
One of the two VGs is configured on nodes:
noc-tan0
The other VG is configured on nodes:
noc-fail noc-cos0
Found two volume groups with the same name /dev/vg02 but different ids:
One of the two VGs is configured on nodes:
noc-fail noc-cos0
The other VG is configured on nodes:
noc-tan0
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-sin0
The other VG is configured on nodes:
noc-tan0 noc-fail
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-cos0
The other VG is configured on nodes:
noc-tan0 noc-fail
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-wifi
The other VG is configured on nodes:
noc-tan0 noc-fail
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
cmcheckconf: Unable to reconcile configuration file nnm_nnoc_cluster.ascii
with discovered configuration information.

Regards
Knowledge is Power
4 REPLIES
Luk Vandenbussche
Honored Contributor

Re: cmcheckconf fails

Try to capture the configuration of your current cluster setup:

=> cmgetconf
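
For example, assuming the cluster is named nnm_nnoc (substitute the name shown by cmviewcl):

# cmgetconf -v -c nnm_nnoc current_cluster.ascii

Then diff current_cluster.ascii against your edited nnm_nnoc_cluster.ascii to see where the two diverge.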
Carsten Krege
Honored Contributor

Re: cmcheckconf fails

You obviously have /dev/vg01 configured in /etc/lvmtab on both nodes, but with different volume group IDs (VGIDs). That would not necessarily be a problem, because it is normal for VGs that are private to a node (like vg00). But I assume the physical volumes (PVs) in vg01 have the same physical volume ID (PVID). To Serviceguard that means the VG is on the shared bus, so the PV on node1 that has the same PVID as the PV on node2 should be in a VG with the same name and the same VGID.

HP Support should be able to give you a program that makes the contents of /etc/lvmtab readable so you can verify this. There is also a program that reads the LVM header of a PV to verify the PVIDs and VGIDs stored on the disks.
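
For a quick (unsupported) look you can also run strings against /etc/lvmtab yourself; it shows the VG names and their member disks on each node:

# strings /etc/lvmtab

Run it on both nodes and compare which physical volumes each node's vg01 contains.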

Carsten
-------------------------------------------------------------------------------------------------
In the beginning the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move. -- HhGttG
freddy_21
Respected Contributor

Re: cmcheckconf fails

Hello,

Is your vg01 on noc-tan0 the same volume group as vg01 on noc-fail?

For a Serviceguard configuration, the disks must be shared between the servers, and you must use vgexport and vgimport: create the volume group on one server only, then vgimport it on the other. Do not create the volume group separately on both servers (see the sketch below).

I think the volume group was created the wrong way.
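
A rough sketch of the sequence, with example disk names and an example minor number (substitute your own, and make sure the minor number is unique on the second server):

On the first server:
# vgcreate /dev/vg01 /dev/dsk/c1t2d0
# vgexport -p -s -m /tmp/vg01.map /dev/vg01

Copy /tmp/vg01.map to the second server, then on the second server:
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgimport -v -s -m /tmp/vg01.map /dev/vg01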

Thanks,
Freddy
Stephen Doud
Honored Contributor

Re: cmcheckconf fails

You can determine the VGID of a volume group rather easily using these commands:

# vgexport -pvs -m vg01.map /dev/vg01
# cat vg01.map

Repeat for vg01 on the other node.

If the VGID listed at the top of each map file is different, it's a sure bet that you didn't vgimport vg01 on one of the nodes, but created it from a new set of disks.
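
For example, if you save the map from each node under a different name (the file names here are only examples), a comparison shows the mismatch immediately, since with -s the VGID is on the first line of the map file:

# diff noc-tan0_vg01.map noc-sin0_vg01.map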

To overcome the problem, decide which vg01 you want to keep, and vgexport the other one on the other node.
Clear the old VGID off the exported disks by running this command on each of the rdsk special files of the exported disks:
# pvcreate -f /dev/rdsk/<disk>

Now, on the node where vg01 is still in /etc/lvmtab (use strings to see it), copy the map file you created to the other node. It's ASCII, so you can simply paste it into a file on the other node.
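
For example, with rcp (assuming remote access between the nodes is enabled; any copy method will do, and the target path is only an example):

# rcp vg01.map noc-sin0:/etc/cmcluster/vg01.map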

Next, on that node, create the /dev/vg01 directory and the /dev/vg01/group device file, similar to the ones on the other node.
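
For example (64 is the standard major number for LVM group files on HP-UX; the minor number 0x030000 is only an example and must be unique among the group files on that node, which you can check with ll /dev/*/group):

# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x030000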

Finally, vgimport vg01 using:
# vgimport -vs -m vg01.map /dev/vg01

Then try the Serviceguard configuration commands.
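
For example, with the configuration file from the original post:

# cmcheckconf -k -v -C nnm_nnoc_cluster.ascii

and once that passes, distribute the configuration with:

# cmapplyconf -v -C nnm_nnoc_cluster.ascii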