
Shah Gaurang B.
Frequent Advisor

cmgetconf problem

Hello all,

When I run the cmgetconf command to get information about a two-node cluster, the following error appears on screen:

cmgetconf
Warning: Volume group /dev/vgdb01 is configured differently on node node1 than on node node2

Error: Volume group /dev/vgdb01 on node node1 does not appear to have a physical volume corresponding to /dev/dsk/c12t2d0 on node node2 (12022610041134158529).

What could be the reason?

Thanks


8 REPLIES
Steven E. Protter
Exalted Contributor

Re: cmgetconf problem

Shalom,

You don't have the same configuration in /etc/cmcluster on both nodes.

You need to figure out which one is correct and put the same configuration files on both nodes of the cluster.
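
One way to reconcile the two copies (a sketch; "cluster1" is a placeholder for your real cluster name) is to dump the configuration from the running cluster, verify it, and re-apply it so both nodes receive a consistent copy:

# cmgetconf -c cluster1 /etc/cmcluster/cluster1.ascii
# cmcheckconf -C /etc/cmcluster/cluster1.ascii
# cmapplyconf -C /etc/cmcluster/cluster1.ascii

Note that cmcheckconf/cmapplyconf will keep reporting the same volume group error until the LVM configuration itself is made consistent on both nodes.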

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Sandman!
Honored Contributor

Re: cmgetconf problem

Hi,

Run "strings" on the /etc/lvmtab file on both nodes and look for the physical volumes that belong to vgdb01. From your post it looks like node1 is either missing a physical volume or node2 has an extraneous one. Run the following commands on both nodes and post the output here:

# strings /etc/lvmtab
# vgdisplay -v vgdb01

This is usually caused by adding or removing a physical vol from a clustered VG on the configuration node but NOT doing a vgexport on that node and a vgimport on the other node(s).
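
If that's what happened, the mapfile needed for a re-import can be generated without disturbing the VG by running vgexport in preview mode (a sketch, assuming the VG is vgdb01):

# vgexport -p -s -m /tmp/vgdb01.map vgdb01

Here -p previews (nothing is actually removed), -s records the VGID so the importing node can find the disks by scanning, and -m writes the mapfile to copy to the other node.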

~cheers
Darrel Louis
Honored Contributor

Re: cmgetconf problem

Hi,

Is the device (/dev/dsk/c12t2d0) present on both nodes?
On which node was the VG created?
When it was created, did you perform a vgexport, distribute the exported mapfile to the other node, and import it there?
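
A quick way to answer the first question on each node (the device name below is taken from your error message):

# ioscan -fnC disk | grep c12t2d0
# ll /dev/dsk/c12t2d0

If ioscan shows the disk but the device file is missing, insf -e should recreate the special files.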

Darrel
Shah Gaurang B.
Frequent Advisor

Re: cmgetconf problem

Thanks to all for the prompt replies.
I checked /etc/lvmtab on both nodes. On the database node only one entry shows: /dev/dsk/c12t1d4. On the application node two entries show: /dev/dsk/c12t1d4 and /dev/dsk/c12t2d1. On neither node is the device file from the error, /dev/dsk/c12t2d0, to be found.
Darrel Louis
Honored Contributor

Re: cmgetconf problem

Hi,

Does the device file exist?
If yes, run the following to check which VG the device belongs to:

# pvdisplay -v /dev/dsk/c12t2d0

Darrel
Victor Fridyev
Honored Contributor

Re: cmgetconf problem

Hi,

According to Serviceguard rules, a shared volume group should be built identically on all nodes: you build the group on one machine and import it on all other cluster nodes with the same set of disks, i.e. on one node:

vgcreate vgshared /dev/dsk/XXXX /dev/dsk/YYYY

on all other nodes:

vgimport vgshared /dev/dsk/XXXX /dev/dsk/YYYY

As you understand, disk names on different nodes may differ.
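
One detail the import step glosses over (an assumption about the standard procedure, not something vgimport does for you): before vgimport will work on the other nodes, the VG directory and group device file must exist there, e.g.:

# mkdir /dev/vgshared
# mknod /dev/vgshared/group c 64 0x010000
# vgimport vgshared /dev/dsk/XXXX /dev/dsk/YYYY

64 is the standard LVM group major on HP-UX; the minor (0x010000 here is only an example) must be unique among the volume groups on that node.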

HTH
Entities are not to be multiplied beyond necessity - RTFM
Sandman!
Honored Contributor

Re: cmgetconf problem

Do you know which node is the configuration node for vgdb01 (the node on which this VG was originally created)?

If it's the database node, then simply vgexport there, copy the mapfile to the app node, remove the existing vgdb01 there, recreate the group device file, and finally vgimport the mapfile on the app node. The steps are below:

1) on database node...
# vgexport -p -v -s -m /tmp/vgdb01.map vgdb01

2) copy mapfile to the app node...
# rcp dbnode:/tmp/vgdb01.map appnode:/tmp/vgdb01.map

3) on app node note the major and minor number of vgdb01...
# ll /dev/vgdb01/group

4) remove vgdb01 on the app node...
# vgexport vgdb01

5) create vgdb01 using the major and minor numbers from step 3...
# mkdir /dev/vgdb01
# mknod /dev/vgdb01/group c 64 0xNN0000
(replace 0xNN0000 with the minor number noted in step 3; the major is normally 64)

6) vgimport the mapfile...
# vgimport -v -s -m /tmp/vgdb01.map vgdb01

7) verify if the VG is consistent on both nodes...
# strings /etc/lvmtab
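
For a deeper check than lvmtab (an optional extra; only do this while the package is down on that node, and note that a cluster-aware VG may refuse read-only activation and require exclusive activation instead), the VG can be activated read-only on the app node and the PV list compared:

# vgchange -a r vgdb01
# vgdisplay -v vgdb01
# vgchange -a n vgdb01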

~hope it helps
Stephen Doud
Honored Contributor

Re: cmgetconf problem

1) Verify that the /etc/lvmtab file contains an up-to-date set of VGs on BOTH servers. An out-of-date lvmtab file on one node will cause this error (a rebuild sketch follows at the end of this post).

2) If this is an 11.23 cluster, make certain that /etc/nsswitch.conf contains the line:
ipnodes: hosts

This permits proper name resolution through the /etc/hosts file, which should be designated as the first hostname lookup source in nsswitch.conf:
hosts: files [NOTFOUND=continue] dns
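
On the lvmtab point above: if the file is stale, it can be rebuilt by moving it aside and letting vgscan recreate it (a hedged sketch; vgscan rescans the disks, so run it in a quiet maintenance window and verify the result):

# mv /etc/lvmtab /etc/lvmtab.bak
# vgscan -v
# strings /etc/lvmtab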