Operating System - HP-UX
03-30-2010 01:40 AM
sg problem
A 2-node HP-UX 11.31 cluster running MC/ServiceGuard 11.18 shares an MSA30 JBOD (6 x 300 GB) as common storage, with 3 disks in each PVG group used for mirroring.
On node2 we can see only 3 disks, so the package was not starting because the quorum disk was unavailable. We ran "vgchange -a y -q n vgdb" and the package started, but now when we move back to node1 the package does not start and hangs; the package log mentions activating vgdb in exclusive mode. When I try "vgchange -c y vgdb" it says: Cannot lock "/etc/lvmconf/lvm_lock" still trying...
Now when we shut down node1 and try to start the package on node2 again, both nodes produce a crash dump. Any advice?
2 REPLIES
04-02-2010 05:13 AM
Re: sg problem
It sounds like you have several problems here. First things first: you must sort out your hardware and storage issues before continuing with the Serviceguard configuration; you will only end in frustration if you try it any other way. Both nodes must be able to activate the shared VG with a simple "vgchange -a y vgdb" command. The SG cluster does not even need to be running to test this, and the package certainly should not be. Activate the VG on only one node at a time, and deactivate it before testing the other node. Until this works, do not go any further.
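The test sequence above can be sketched as follows (a minimal sketch, assuming the VG is named vgdb as in the original post; these are HP-UX LVM commands and only run on the cluster nodes themselves):

```shell
# Run on node1 first, with the cluster and the package down:
vgchange -a y vgdb        # activate the shared VG in normal (non-exclusive) mode
vgdisplay -v vgdb         # verify that all 6 PVs show as available, none missing
vgchange -a n vgdb        # deactivate before touching the other node

# Then repeat the same three commands on node2.
# If either node cannot see all 6 disks at this step, fix cabling,
# SCSI termination, or the enclosure before touching Serviceguard.
```

The point of deactivating in between is that a non-cluster-aware VG has no protection against simultaneous activation on both nodes, which corrupts LVM metadata.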
Regarding the MSA30, a couple of notes:
* Only the MSA30 MI is supported as a shared disk with Serviceguard, as it is the only MSA30 model that supports multiple SCSI initiators. The MSA30 SB (Single Bus) and MSA30 DB (Dual Bus) are not supported on shared SCSI buses. http://h18000.www1.hp.com/products/quickspecs/11967_na/11967_na.HTML
* There are 2 separate SCSI buses (Bus A and Bus B) in the MSA30 MI, each with 2 Auto-terminating SCSI ports, and each bus has 7 disk mechanisms. It is not possible to bridge the 2 SCSI buses in the MSA30 MI, since the disks on each bus use the same SCSI IDs.
* The MSA30 MI can be connected in High Availability configurations for 2 nodes only. Each node has one SCSI connection to each bus in the MSA30 MI and software mirroring (via MirrorDisk/UX or VxVM mirroring) is used to mirror data between disks on the two buses.
Once the storage issues are sorted out, you can bring up the cluster (but not the package), clusterize the VG with the "vgchange -c y" command you mentioned above, and then try starting the package. You usually get the "Cannot lock..." message when multiple LVM commands are running at the same time on a system; there is a lock that coordinates them to prevent simultaneous access to certain operations. Check "ps -ef" for hung LVM commands; this may be related to your problem of both nodes not seeing all the disks. Also make sure you have disabled AUTO_VG_ACTIVATE in /etc/lvmrc on both nodes. Use the SG manual, it is very informative. http://docs.hp.com/en/ha.html
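The /etc/lvmrc change mentioned above looks roughly like this on both nodes (a config fragment only; the exact surrounding contents of the file vary by system):

```shell
# /etc/lvmrc (both nodes): let Serviceguard, not the boot rc scripts,
# activate shared volume groups.
AUTO_VG_ACTIVATE=0

# Any VGs that should still activate at boot (e.g. the root VG vg00)
# are listed in the custom_vg_activation() function further down in
# this same file.
```

With AUTO_VG_ACTIVATE left at 1, both nodes would try to activate every VG at boot, which defeats the exclusive activation that the package control script relies on.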
04-12-2010 05:38 AM
Re: sg problem
The problem was resolved after changing the MI card.
However, we had to run "vgchange -c y -q y vgdata" to actually start the cluster; otherwise it was hanging again.
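A sketch of the cleanup sequence implied by that fix, assuming the earlier "-q n" quorum override was still in effect (VG name as in the posts; the package name is a placeholder):

```shell
# With the package halted on both nodes, undo the emergency
# no-quorum activation, then re-clusterize with quorum restored:
vgchange -a n vgdata          # deactivate the VG everywhere first
vgchange -c y -q y vgdata     # mark it cluster-aware, quorum required again
cmrunpkg <pkg_name>           # then start the package normally
```

Leaving quorum disabled (-q n) permanently would let the VG activate with missing mirrors, hiding exactly the kind of disk-visibility problem that started this thread.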