
AnthonySN
Respected Contributor

sg problem

Two-node HP-UX 11.31 cluster running MC/ServiceGuard 11.18, with a common-storage MSA30 JBOD (6 x 300 GB); three disks in each PVG are used for mirroring.

On node2 we can see only 3 of the disks, so the package would not start because quorum was not available. We worked around it with "vgchange -a y -q n vgdb" and the package started, but now when we move back to node1 the package hangs on startup; the package log shows it activating vgdb in exclusive mode. When I try "vgchange -c y vgdb" it says: Cannot lock "/etc/lvmconf/lvm_lock", still trying...

Now when we shut down node1 and try to start the package on node2 again, both nodes give a crash dump. Any advice?
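For reference, disk visibility on each node can be checked with standard HP-UX commands before touching the cluster. A minimal sketch (the device path is an example, not taken from the configuration above):

```shell
# List the SCSI disks this node can actually see:
ioscan -fnC disk

# Check a suspect physical volume (example device file path):
pvdisplay /dev/dsk/c2t0d0

# Show how many PVs the VG currently sees vs. how many it expects:
vgdisplay -v vgdb
```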
Mike Chisholm
Advisor

Re: sg problem

It sounds like you have a number of problems here. First things first: you must sort out your hardware and storage issues before continuing with the Serviceguard configuration. You will only end in frustration if you try to do this any other way. Both nodes must be able to activate the shared VG with a simple "vgchange -a y vgdb" command. The SG cluster does not even need to be running to test this, and the package certainly should not be. Activate the VG on only one node at a time, and deactivate it before testing the other node. Until this works, do not go any further.
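The one-node-at-a-time activation test described above can be sketched as follows (assuming the VG name vgdb from the original post):

```shell
# On node1, with the cluster and package down, try plain activation:
vgchange -a y vgdb                  # must succeed without -q n workarounds
vgdisplay -v vgdb | grep "Cur PV"   # confirm all 6 PVs are present
vgchange -a n vgdb                  # deactivate before testing the other node

# Then repeat the same three commands on node2.
# If either node cannot see all 6 disks, fix the MSA30 cabling/termination
# before doing anything else with Serviceguard.
```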

Regarding the MSA30, a couple of notes:
* Only the MSA30 MI is supported for use as a shared disk with Serviceguard, as it is the only MSA30 model that supports multiple SCSI initiators. The MSA30 SB (Single Bus) and MSA30 DB (Dual Bus) are not supported for use with shared SCSI buses. http://h18000.www1.hp.com/products/quickspecs/11967_na/11967_na.HTML
* There are 2 separate SCSI buses (Bus A and Bus B) in the MSA30 MI, each with 2 Auto-terminating SCSI ports, and each bus has 7 disk mechanisms. It is not possible to bridge the 2 SCSI buses in the MSA30 MI, since the disks on each bus use the same SCSI IDs.
* The MSA30 MI can be connected in High Availability configurations for 2 nodes only. Each node has one SCSI connection to each bus in the MSA30 MI and software mirroring (via MirrorDisk/UX or VxVM mirroring) is used to mirror data between disks on the two buses.

Once you get the storage issues sorted out, you can bring up the cluster (but not the package), clusterize the VG with the "vgchange -c y" command you mentioned above, and then try starting the package. You usually get the "Cannot lock..." message when multiple LVM commands are running at the same time on a system; there is a lock that serializes certain operations to prevent simultaneous access. Check "ps -ef" to see if there are any hung commands; maybe this is related to your problem of both nodes not seeing all the disks. Also make sure you have disabled AUTO_VG_ACTIVATE in /etc/lvmrc on both nodes. Use the SG manual, it is very informative: http://docs.hp.com/en/ha.html
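Putting the steps above together, the sequence would look roughly like this (a sketch; the package name is a placeholder, and exact options may vary by Serviceguard version):

```shell
# Check for stuck LVM commands holding /etc/lvmconf/lvm_lock:
ps -ef | grep -e vgchange -e vgdisplay

# Start the cluster on all configured nodes, without starting packages:
cmruncl -v

# Mark the VG as cluster-aware (run once, on one node, VG deactivated):
vgchange -a n vgdb
vgchange -c y vgdb

# Then start the package and watch its log:
cmrunpkg -v <pkg_name>     # <pkg_name> is a placeholder
```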
AnthonySN
Respected Contributor

Re: sg problem

The problem was resolved after replacing the MI card.
However, we had to run "vgchange -c y -q y vgdata" to actually start the cluster; otherwise it hung again.
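For anyone following along: since the earlier workaround disabled quorum (-q n), re-enabling it once all disks are visible again would look something like this (a sketch, using the commands quoted in this thread):

```shell
# With all 6 disks visible again and the VG deactivated,
# re-clusterize the VG with quorum enforcement restored:
vgchange -a n vgdata
vgchange -c y -q y vgdata
```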