
nibble
Super Advisor

Service Guard Error

Guys,

A. There has been a change in the vglock drive due to the movement of physical drives in our SAN.
The previous drive is /dev/dsk/c1t0d0, and the new drive is /dev/dsk/c2t0d0.
Is it safe to just edit the cluster config file and change the value of FIRST_CLUSTER_LOCK_PV to /dev/dsk/c2t0d0?
What other steps are needed aside from this? After editing the file, we feel we need to restart the cluster, but we're reluctant to do it.


B. We also receive the following errors from cmcheckconf:


Error: Unable to determine a unique identifier for physical volume /dev/dsk/c2t0d5 on node A. Use pvcreate to give the disk an identifier.
Error: Unable to determine a unique identifier for physical volume /dev/dsk/c3t0d5 on node A. Use pvcreate to give the disk an identifier.
Error: Unable to determine a unique identifier for physical volume /dev/dsk/c2t0d3 on node A. Use pvcreate to give the disk an identifier.
Error: Unable to determine a unique identifier for physical volume /dev/dsk/c3t0d3 on node A. Use pvcreate to give the disk an identifier.
Error: Unable to determine a unique identifier for physical volume /dev/dsk/c3t0d4 on node A. Use pvcreate to give the disk an identifier.
Error: Unable to determine a unique identifier for physical volume /dev/dsk/c2t0d4 on node A. Use pvcreate to give the disk an identifier.
Error: Unable to determine a unique identifier for physical volume /dev/dsk/c3t0d7 on node A. Use pvcreate to give the disk an identifier.
Error: Unable to determine a unique identifier for physical volume /dev/dsk/c2t0d7 on node A. Use pvcreate to give the disk an identifier.
Error: Unable to determine a unique identifier for physical volume /dev/dsk/c2t1d0 on node A. Use pvcreate to give the disk an identifier.
Error: Unable to determine a unique identifier for physical volume /dev/dsk/c3t1d0 on node A. Use pvcreate to give the disk an identifier.

Error: Volume group /dev/vgdelta on node A does not appear to have a physical volume corresponding to /dev/dsk/c3t0d4 on node B (11424746661199062570).
Error: Volume group /dev/vgdelta on node A does not appear to have a physical volume corresponding to /dev/dsk/c2t0d4 on node B (11424746661199062570).
Error: Volume group /dev/vglock on node A does not appear to have a physical volume corresponding to /dev/dsk/c2t1d0 on node B (11424746661204678766).
Error: Volume group /dev/vglock on node A does not appear to have a physical volume corresponding to /dev/dsk/c3t1d0 on node B (11424746661204678766).
Error: Volume group /dev/vgnovem on node A does not appear to have a physical volume corresponding to /dev/dsk/c2t0d5 on node B (11424746661204160020).
Error: Volume group /dev/vgnovem on node A does not appear to have a physical volume corresponding to /dev/dsk/c3t0d5 on node B (11424746661204160020).
Error: Volume group /dev/vgoscar on node A does not appear to have a physical volume corresponding to /dev/dsk/c2t0d3 on node B (11434746661174539068).
Error: Volume group /dev/vgoscar on node A does not appear to have a physical volume corresponding to /dev/dsk/c3t0d3 on node B (11434746661174539068).
Error: Volume group /dev/vgpapa on node A does not appear to have a physical volume corresponding to /dev/dsk/c3t0d7 on node B (11434746661204503032).
Error: Volume group /dev/vgpapa on node A does not appear to have a physical volume corresponding to /dev/dsk/c2t0d7 on node B (11434746661204503032).

cmcheckconf : Unable to reconcile configuration file /etc/cmcluster/nodeA.conf
with discovered configuration information.

Begin cluster verification...

Note: Disks were discovered which are not in use by either LVM or VxVM.
Use pvcreate(1M) to initialize a disk for LVM or,
use vxdiskadm(1M) to initialize a disk for VxVM.
Warning: Volume group /dev/vgdelta is configured differently on node B than on node A
Warning: Volume group /dev/vgdelta is configured differently on node A than on node B
Warning: Volume group /dev/vglock is configured differently on node B than on node A
Warning: Volume group /dev/vglock is configured differently on node A than on node B
Warning: Volume group /dev/vgnovem is configured differently on node B than on node A
Warning: Volume group /dev/vgnovem is configured differently on node A than on node B
Warning: Volume group /dev/vgoscar is configured differently on node B than on node A
Warning: Volume group /dev/vgoscar is configured differently on node A than on node B
Warning: Volume group /dev/vgpapa is configured differently on node B than on node A
Warning: Volume group /dev/vgpapa is configured differently on node A than on node B



Guys, any idea on this? I'm still new to Serviceguard and kind of lost reading through so much material. Are there any other troubleshooting steps I should take? Any files or logs I should look at? Any help will be appreciated. Thanks.
7 REPLIES
Mridul Shrivastava
Honored Contributor

Re: Service Guard Error

There appear to be a lot of LVM errors, and you need to fix them first. I suspect the device files changed for the other PVs as well, which is why you are getting so many errors for these VGs.

You need to identify the corresponding PVs for all the VGs, then vgexport and vgimport them using the new device files.
Time has a wonderful way of weeding out the trivial
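The vgexport/vgimport sequence described above could be sketched as follows. This is a hedged outline, not a tested procedure: the VG name and device files are examples taken from this thread, and the minor number shown is illustrative — verify everything on your own system before running anything.

```shell
# Sketch: re-import one VG using the new device files (HP-UX LVM).
# VG name, device files, and minor number are EXAMPLES from this thread.

vgchange -a n /dev/vgdelta                  # VG must be deactivated first

ll /dev/vgdelta/group                       # note the minor number (0xNN0000)
vgexport -v -m /tmp/vgdelta.map /dev/vgdelta

# Recreate the group file and re-import with the NEW device files
mkdir /dev/vgdelta
mknod /dev/vgdelta/group c 64 0x010000      # reuse the minor noted above
vgimport -v -m /tmp/vgdelta.map /dev/vgdelta /dev/dsk/c2t0d4 /dev/dsk/c3t0d4
vgchange -a y /dev/vgdelta
vgcfgbackup /dev/vgdelta                    # refresh the LVM config backup
```

Repeat per affected VG on the node whose /etc/lvmtab holds stale device files.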
melvyn burnard
Honored Contributor

Re: Service Guard Error

First you need to fix those errors!
Obviously something drastic changed when the "move" occurred, and unless you sort these out, you are asking for problems down the line.
Secondly, you can edit the cluster ASCII file, but you then need to halt the cluster, run vgchange -c n and vgchange -a y on the cluster lock VG, use cmapplyconf to apply the changes, follow with vgchange -a n on the cluster lock VG, and then restart your cluster.
My house is the bank's, my money the wife's, But my opinions belong to me, not HP!
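The cluster-lock change steps above can be laid out as a command sequence. Treat this as a hedged outline only — the config file path comes from the cmcheckconf output earlier in this thread, and every step should be checked against your Serviceguard documentation before use.

```shell
# Outline of the cluster-lock PV change (HP-UX Serviceguard).
# File path and VG name are taken from this thread; adapt as needed.

cmhaltcl -f -v                               # halt the entire cluster

# Edit the ASCII file: FIRST_CLUSTER_LOCK_PV -> /dev/dsk/c2t0d0
vi /etc/cmcluster/nodeA.conf

vgchange -c n /dev/vglock                    # clear the cluster-aware bit
vgchange -a y /dev/vglock                    # activate the lock VG for the apply

cmapplyconf -v -C /etc/cmcluster/nodeA.conf  # apply; rewrites the lock info

vgchange -a n /dev/vglock                    # deactivate the lock VG again
cmruncl -v                                   # restart the cluster
```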
Salah BOUFARGUINE
Occasional Advisor

Re: Service Guard Error

Hello
Just execute: pvcreate /dev/rdsk/c2t0d0
regards
Stephen Doud
Honored Contributor

Re: Service Guard Error

From the messages, it is evident that the /etc/lvmtab files are out of sync between the cluster nodes.
The remedy is to update /etc/lvmtab using vgexport and vgimport in a particular way. If /etc/lvmtab is not up to date on the server where the package is running, it too should be updated. This will require that the package volume group(s) be deactivated prior to the vgexport and re-import.

I have attached a script that will do the update for you. Please read the comments to ensure you use it properly.
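The attached script is not reproduced in this thread. As a rough idea of the kind of resync such a script performs, here is a hedged sketch assuming the -s (shared) option of vgexport/vgimport, which matches member disks by PVID rather than by device file; node names and the VG are placeholders.

```shell
# Sketch: resync one VG's /etc/lvmtab entry between two nodes, assuming
# the -s option is available. Node names and VG are PLACEHOLDERS.

VG=vgdelta

# On the node with the correct view: preview-export a map file and copy it
vgexport -p -s -v -m /tmp/${VG}.map /dev/${VG}
rcp /tmp/${VG}.map nodeB:/tmp/${VG}.map

# On the other node (with the VG deactivated there):
vgexport /dev/${VG}                          # drop the stale lvmtab entry
mkdir /dev/${VG}
mknod /dev/${VG}/group c 64 0x020000         # reuse the original minor number
vgimport -s -v -m /tmp/${VG}.map /dev/${VG}  # -s scans disks by PVID
```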
skt_skt
Honored Contributor

Re: Service Guard Error

Fix all the other VGs via the vgexport/vgimport process; also, if you see any further errors for the VGs, consider fixing /etc/lvmtab (using vgscan, depending on the error).

If you are not able to take downtime for the entire cluster and only one node is affected, a workaround would be to create the missing /dev/dsk/c1t0d0 with the same major and minor numbers as /dev/dsk/c2t0d0 (which is the cluster lock PV/LUN).

This would help bring up the cluster; I assume only one node is affected by the movement you mentioned.
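The workaround above amounts to aliasing the old device path to the new disk. A hedged sketch, assuming the usual HP-UX pattern of reading the major/minor numbers off the existing files first — the majors shown in the comments are illustrative, not authoritative; take them from your own ll output:

```shell
# Sketch: recreate the old lock-PV device files pointing at the new disk.
# Read the REAL major/minor numbers from your system first.

ll /dev/dsk/c2t0d0 /dev/rdsk/c2t0d0         # note majors/minor, e.g. 0x020000

# Recreate the missing old names with the SAME major/minor as the new disk:
mknod /dev/dsk/c1t0d0  b 31 0x020000        # block device (major from ll)
mknod /dev/rdsk/c1t0d0 c 188 0x020000       # raw device (major from ll)
```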
nibble
Super Advisor

Re: Service Guard Error

Guys,
which commands should I use for further investigation before doing the vgexport/vgimport? I've viewed the lvmtab on both nodes and they already appear synchronized, but I still get the same errors when I run cmgetconf on either node. I can't log a call with HP since we don't have support.

Also, just in case the vglock VG has been corrupted and is not recoverable, is it safe to recreate it?
Stephen Doud
Honored Contributor

Re: Service Guard Error

If /etc/lvmtab is not out of sync, then we need to account for messages such as: "Error: Volume group /dev/vgdelta on node A does not appear to have a physical volume corresponding to /dev/dsk/c3t0d4 on node B"

In a Serviceguard environment, when cmquerycl, cmcheckconf, cmapplyconf, or cmgetconf is run, the disks listed in /etc/lvmtab on both nodes are compared to one another by their unique PVIDs, to ensure that each disk in the VG is listed in /etc/lvmtab on both nodes. If one is missing, VG activation could come up with only a subset of the disks, so the Serviceguard command fails.
If you have re-imported vgdelta on the node where the VG is not active and still get the error, re-import the VG on the other node. If after that you still see the errors, it's possible that one or more of the disks are not shared (not visible to both nodes, i.e., not zoned).
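A quick way to investigate before any re-import is to compare the two nodes' LVM views directly. A hedged sketch — the hostname is a placeholder, and remsh assumes remote shell access is set up between the nodes:

```shell
# Sketch: compare VG membership as each node sees it. nodeB is a placeholder.

strings /etc/lvmtab > /tmp/lvmtab.A          # VG names + member device files
remsh nodeB "strings /etc/lvmtab" > /tmp/lvmtab.B
diff /tmp/lvmtab.A /tmp/lvmtab.B             # device-file differences show here

ioscan -fnC disk                             # confirm both paths to each LUN exist
```

Entries that appear in one listing but not the other point at exactly the VGs named in the cmcheckconf errors.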