05-24-2007 10:45 PM
cmcheckconf fails
I have a rather strange problem. I want to add a new node to the currently running cluster but am unable to do so. cmcheckconf exits with the following error: "cmcheckconf: Unable to reconcile configuration file nnm_nnoc_cluster.ascii with discovered configuration information."
Below is the output from cmcheckconf.
noc-fail[427] /etc/cmcluster # cmcheckconf -k -v -C nnm_nnoc_cluster.ascii
Checking cluster file: nnm_nnoc_cluster.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 4 devices on node noc-tan0
Found 10 devices on node noc-fail
Found 4 devices on node noc-sin0
Found 4 devices on node noc-cos0
Found 4 devices on node noc-wifi
Found 2 devices on node noc-ipn0
Found 0 devices on node noc-tin0
Analysis of 28 devices should take approximately 3 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node noc-tan0
Found 5 volume groups on node noc-fail
Found 2 volume groups on node noc-sin0
Found 2 volume groups on node noc-cos0
Found 2 volume groups on node noc-wifi
Found 1 volume groups on node noc-ipn0
Found 0 volume groups on node noc-tin0
Analysis of 14 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-tan0 noc-fail
The other VG is configured on nodes:
noc-sin0
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-tan0 noc-fail
The other VG is configured on nodes:
noc-cos0
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-tan0 noc-fail
The other VG is configured on nodes:
noc-wifi
Found two volume groups with the same name /dev/vg02 but different ids:
One of the two VGs is configured on nodes:
noc-tan0
The other VG is configured on nodes:
noc-fail noc-cos0
Found two volume groups with the same name /dev/vg02 but different ids:
One of the two VGs is configured on nodes:
noc-fail noc-cos0
The other VG is configured on nodes:
noc-tan0
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-sin0
The other VG is configured on nodes:
noc-tan0 noc-fail
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-cos0
The other VG is configured on nodes:
noc-tan0 noc-fail
Found two volume groups with the same name /dev/vg01 but different ids:
One of the two VGs is configured on nodes:
noc-wifi
The other VG is configured on nodes:
noc-tan0 noc-fail
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
cmcheckconf: Unable to reconcile configuration file nnm_nnoc_cluster.ascii
with discovered configuration information.
Regards
05-24-2007 11:24 PM
Re: cmcheckconf fails
=> Try cmgetconf to dump the current running cluster configuration and compare it with your ASCII file.
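For example, to dump the running cluster's configuration into a file that can be diffed against the edited ASCII file (the cluster name below is assumed from the file name in the original post and may differ from the actual CLUSTER_NAME):
# cmgetconf -v -c nnm_nnoc_cluster current_cluster.ascii
# diff current_cluster.ascii nnm_nnoc_cluster.ascii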
05-30-2007 07:01 PM
Re: cmcheckconf fails
HP Support should be able to give you a program to make the contents of /etc/lvmtab readable and to verify this. There is also a program to read out the LVM header of the PV and to verify the PVIDs and VGIDs stored on the disks.
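As a first check without any special tools, the volume group entries in /etc/lvmtab can at least be listed on each node (this shows the VG names and device files, though not the IDs, which still need the support utilities mentioned above):
# strings /etc/lvmtab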
Carsten
In the beginning the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move. -- HhGttG
05-30-2007 09:15 PM
Re: cmcheckconf fails
Is your vg01 on noc-tan0 the same as the vg01 on noc-fail?
For a Serviceguard configuration the shared disk must be visible on both servers, and you must use vgexport and vgimport: do not create the volume group separately on each server. Create the volume group on one server only, then vgimport it on the other.
I think the way you created the volume group was wrong.
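As a quick check of whether the vg01 on each node is really built from the same shared disks, compare the physical volumes and their hardware paths per node (device file names may differ between hosts, so the hardware paths from ioscan are the safer comparison):
# vgdisplay -v /dev/vg01 | grep "PV Name"
# ioscan -fnC disk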
thanks
Freddy
05-31-2007 08:08 AM
Re: cmcheckconf fails
# vgexport -pvs -m vg01.map /dev/vg01
# cat vg01.map
Repeat for vg01 on the other node.
If you find the VGID that is listed at the top of the file to be different, it's a sure bet that you didn't vgimport vg01 on one of the nodes, but created it from a new set of disks.
To overcome the problem, decide which vg01 you want to keep, and vgexport the other one on the other node.
Clear the old VGID off of the exported disks using this command on each of the rdsk special files for the exported disks:
# pvcreate -f /dev/rdsk/<device_file>
Now, on the node where vg01 is still in /etc/lvmtab (use strings to see it), copy the map file you created on that node to the other node. It's ASCII, so you can simply paste it into a file on the other node.
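For example, assuming remote-shell access is enabled between the nodes (otherwise just paste the file contents):
# rcp vg01.map <other_node>:/tmp/vg01.map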
Next, create the /dev/vg01 directory and the /dev/vg01/group file, matching the setup on the other node.
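For example (the minor number 0x010000 is only an illustration; use a minor number that is unique on that node and, ideally, matches the group file on the existing cluster nodes):
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000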
Finally, vgimport vg01 using:
# vgimport -vs -m vg01.map /dev/vg01
Then try the Serviceguard configuration commands.
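For example, a sketch using the same file name as in the original post (run cmapplyconf only once cmcheckconf completes without errors):
# cmcheckconf -k -v -C nnm_nnoc_cluster.ascii
# cmapplyconf -v -C nnm_nnoc_cluster.ascii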