01-23-2002 07:47 AM
FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
I was hoping someone with lots of MC/SG expertise would answer this.
In a 2-node cluster, can we have multiple lock PVs? What are the advantages and disadvantages?
A 2-node cluster failed, and here is the explanation I got from a sysadmin:
"The inability of PROD1 (server) to obtain the cluster lock disk was due to the fact that the only path to that disk was through an odd director (adapter) (EMC Symmetrix). Device c2t0d3 was part of a volume group and protected by PVLinks. The cluster lock device is accessed at the device file level, below the LVM layer, thus negating any alternate paths for that device."
Is this true? If so, is there any way to avoid this in the future?
thanks in advance
jithu
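(For reference, a minimal sketch of how the lock disk is named in the ServiceGuard cluster ASCII file; the cluster name, VG name and IP below are made up for illustration, only c2t0d3 comes from this thread. FIRST_CLUSTER_LOCK_PV is a per-node raw device path, which is why PVLinks on the volume group do not protect the lock itself.)

    CLUSTER_NAME             prod_cluster
    FIRST_CLUSTER_LOCK_VG    /dev/vglock

    NODE_NAME                prod1
      NETWORK_INTERFACE      lan0
        HEARTBEAT_IP         10.0.0.1
      # The lock PV is a single device file; cmcld does not use PVLinks here.
      FIRST_CLUSTER_LOCK_PV  /dev/dsk/c2t0d3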
01-23-2002 08:09 AM
Re: FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
So you cannot see this disk via the alternate link.
You can configure a maximum of two cluster lock disks, but this is generally inadvisable for various reasons; it is normally only done in a campus cluster configuration, or when the single cluster lock disk is powered from the same source as one of the nodes.
A more important question is to discover and rectify why the system could not see the disk via the correct path.
HTH
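(A rough sketch of how to chase that down from the affected node; the device file c2t0d3 comes from this thread, while /dev/vglock is just a placeholder name for the lock volume group.)

    # Does the kernel still see the disk on its configured hardware path?
    ioscan -fnC disk

    # Is the raw device readable on the exact path the cluster lock uses?
    diskinfo /dev/rdsk/c2t0d3

    # Are the primary and alternate (PVLink) paths of the lock VG available?
    vgdisplay -v /dev/vglock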
01-23-2002 08:12 AM
Re: FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
This is just a quotation from the MC/SG manual:
A dual lock disk does not provide a redundant cluster lock. In fact, the dual lock is a compound lock. This means that two disks must be available at cluster formation time rather than the one that is needed for a single lock disk. Thus, the only recommended usage of the dual cluster lock is when the single cluster lock cannot be isolated at the time of a failure from exactly one half of the cluster nodes. If one of the dual lock disks fails, ServiceGuard will detect this when it carries out periodic checking, and it will write a message to the syslog file. After the loss of one of the lock disks, the failure of a cluster node could cause the cluster to go down.
01-23-2002 08:14 AM
Re: FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
A typical MC/ServiceGuard setup is built entirely from redundant components.
Your SA is precisely correct in his "negating" statement. We specify the lock disk by its device file, and it will be accessed only through that device file. It will not look for the alternate link in case of a link failure.
You can configure two lock disks. However, HP strongly recommends having a single lock disk wherever possible. If you are planning to configure two locks, you need to have them on two separate controllers.
-Sri
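(To illustrate the point: an alternate path can be added to the lock volume group with a PVLink, but the cluster daemon still opens the lock disk only through the one device file named in the configuration. The c4t0d3 path below is hypothetical.)

    # Add an alternate link (PVLink) for the lock VG's physical volume.
    # This protects normal LVM I/O to the volume group...
    vgextend /dev/vglock /dev/dsk/c4t0d3

    # ...but cmcld still accesses the cluster lock via the single device
    # file given as FIRST_CLUSTER_LOCK_PV, so the lock gains no alternate path.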
01-23-2002 08:33 AM
Re: FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
Have you thought about using RAID? That way you could still access the lock disk even if one disk failed.
Using two lock disks is only recommended in some special situations, for example a campus cluster, or nodes with only internal disks (a power cut on the node holding the lock disk would cause a TOC on the other node, because it could not access the lock disk). If you use two lock disks you can end up in a situation where the heartbeat between the nodes is lost but both nodes are still up; each node could then win a different lock disk, and both would keep running at the same time.
Best regards.
01-23-2002 08:36 AM
Re: FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
http://docs.hp.com/hpux/ha/index.html#ServiceGuard%20OPS%20Edition%20(MC/LockManager).
Did the cluster fail because of the tie-breaker?
01-23-2002 10:30 AM
Re: FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
Node1 is the primary. Node1 has access to the cluster lock disk via a host bus adapter (HBA) connected to the odd-numbered fibre adapter (FA) on the EMC Symmetrix. When that FA on the Symmetrix failed, node1 had excessive I/O errors even though all the disks have PVLinks via the even-numbered FAs. The lock disk, however, was behind the odd FA (it had PVLinks too, but they were of no use in this case). Node1 was then shut down and node2 was brought up, but node2 couldn't get access to the lock disk, and hence the cluster failed.
Would two lock disks, one on an even FA and another on an odd-numbered FA, solve this problem in the future?
TIA
jithu
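(For reference, a dual-lock configuration would look roughly like this in the cluster ASCII file, bearing in mind the manual's caveats quoted above. The VG names and the second device path are hypothetical; only c2t0d3 comes from this thread.)

    FIRST_CLUSTER_LOCK_VG     /dev/vglock1
    SECOND_CLUSTER_LOCK_VG    /dev/vglock2

    NODE_NAME                 prod1
      # First lock disk, reached via the odd-numbered FA
      FIRST_CLUSTER_LOCK_PV   /dev/dsk/c2t0d3
      # Second lock disk, reached via the even-numbered FA
      SECOND_CLUSTER_LOCK_PV  /dev/dsk/c4t1d0

The edited file would then be verified and applied with cmcheckconf -C and cmapplyconf -C.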
01-24-2002 12:52 AM
Re: FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
You also run the remote risk of encountering split-brain syndrome in this configuration, so you have to take that into account.
01-25-2002 07:05 AM
Re: FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
TIA
jithu
01-25-2002 07:31 AM
Re: FIRST_CLUSTER_LOCK_PV and issues with Lock device in MC/SG
This is why HP uses cluster lock disks in MC/SG: when all network connections between two nodes fail (either because the network is really down, or because one machine's power supply has failed), a race for the cluster lock disk establishes ownership of the cluster's packages and causes the losing node to TOC to avoid data corruption.
Why do two cluster lock disks increase the chance of split-brain syndrome? A situation could arise where each node wins the race for a different cluster lock, and both think they own the application.
If you really want to avoid the situations brought on by having only one cluster lock (like the problem you suffered), then you should look at implementing an arbitrator node instead of using cluster locks. This requires another physically separated node, on power supplies separate from the live nodes, but with connections to the networks used by the other nodes.
HTH
Duncan
I am an HPE Employee
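(A very rough sketch of the arbitrator approach: the third node is simply another member of the cluster, defined like any other node in the cluster ASCII file but running no packages. With three or more nodes, quorum can come from the node majority, so the lock-disk entries can be omitted. The node name and IP below are hypothetical.)

    NODE_NAME               arbiter1
      NETWORK_INTERFACE     lan0
        HEARTBEAT_IP        10.0.0.3
    # No FIRST_CLUSTER_LOCK_VG / FIRST_CLUSTER_LOCK_PV entries are needed
    # when the arbitrator provides the tie-breaking third vote.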
