09-12-2003 07:37 AM
cmcluster High Availability
Please send me information about these errors. I have two HP 9000 K570 servers in an MC/ServiceGuard cluster. Neither system went down, and neither was shut down or rebooted.
Thanks in advance for your help.
______________________________________________
Sep 11 13:47:08 mvi902 cmcld[2438]: Communication to node mvi903 has been interrupted
Sep 11 13:47:08 mvi902 cmcld[2438]: Node mvi903 may have died
Sep 11 13:47:08 mvi902 cmcld[2438]: Attempting to form a new cluster
Sep 11 13:47:13 mvi902 cmcld[2438]: 2 nodes have formed a new cluster, sequence#101
Sep 11 13:47:13 mvi902 cmcld[2438]: The new active cluster membership is: mvi903(id=2), mvi902(id=1)
Sep 11 13:47:15 mvi902 vmunix: mpc_bindlwp: Overriding conflicting mandatory binding!
Sep 11 13:47:15 mvi902 vmunix: mpc_bindlwp: Migrating process 491 from processor 1 to processor 0!
Sep 11 13:47:39 mvi902 vmunix: mpc_bindlwp: Migrating process 499 from processor 2 to processor 0!
Sep 11 13:47:39 mvi902 vmunix: mpc_bindlwp: Overriding conflicting mandatory binding!
Sep 11 13:47:40 mvi902 above message repeats 2 times
Sep 11 13:47:39 mvi902 vmunix: mpc_bindlwp: Migrating process 499 from processor 2 to processor 0!
Sep 11 13:47:49 mvi902 vmunix: mpc_bindlwp: Overriding conflicting mandatory binding!
Sep 11 13:47:49 mvi902 vmunix: mpc_bindlwp: Migrating process 567 from processor 3 to processor 0!
Sep 11 16:20:25 mvi902 cmcld[2438]: Communication to node mvi903 has been interrupted
Sep 11 16:20:25 mvi902 cmcld[2438]: Node mvi903 may have died
Sep 11 16:20:25 mvi902 cmcld[2438]: Attempting to form a new cluster
Sep 11 16:20:29 mvi902 cmcld[2438]: Obtaining Cluster Lock
Sep 11 16:20:30 mvi902 cmcld[2438]: Turning off safety time protection since the cluster
Sep 11 16:20:30 mvi902 cmcld[2438]: now consists of a single node. If ServiceGuard
Sep 11 16:20:30 mvi902 cmcld[2438]: fails, this node will not automatically halt
Sep 11 16:20:32 mvi902 cmcld[2438]: Attempting to adjust cluster membership
Sep 11 16:20:35 mvi902 cmcld[2438]: Enabling safety time protection
Sep 11 16:20:35 mvi902 cmcld[2438]: Clearing Cluster Lock
Sep 11 16:20:37 mvi902 cmcld[2438]: Timed out node mvi903.
Sep 11 16:20:37 mvi902 cmcld[2438]: Attempting to adjust cluster membership
Sep 11 16:20:41 mvi902 cmcld[2438]: Clearing Cluster Lock
Sep 11 16:20:46 mvi902 cmcld[2438]: 2 nodes have formed a new cluster, sequence #104
Sep 11 16:20:46 mvi902 cmcld[2438]: The new active cluster membership is: mvi902 (id=1), mvi903(id=2)
3 REPLIES
09-15-2003 11:44 PM
Re: cmcluster High Availability
Post this in the "System Administration" category of this forum; there you may get answers from people doing system administration.
09-17-2003 01:51 AM
Re: cmcluster High Availability
Hi,
It looks like the heartbeat is not stable between the cluster nodes. There are two likely reasons for this:
1. The node timeout is set too low. Check the NODE_TIMEOUT parameter in /etc/cluster/cmclconf.ascii; if it is 2000000 (2 seconds), increase it to 5-8 seconds.
2. The heartbeat connectivity between the nodes is unreliable; check it.
Regards,
Ruban
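[Editor's note: for reference, that change is normally made by editing the cluster ASCII file and re-applying it. A minimal sketch, assuming a cluster named "mvicluster" (the cluster name and file path below are placeholders, not taken from this thread):

# Regenerate the ASCII configuration from the running cluster
cmgetconf -c mvicluster /etc/cmcluster/cmclconf.ascii
# Edit the file: NODE_TIMEOUT is given in microseconds, so 8 seconds is
#   NODE_TIMEOUT   8000000
# (keep HEARTBEAT_INTERVAL at no more than half of NODE_TIMEOUT)
# Verify, then apply the modified configuration
cmcheckconf -C /etc/cmcluster/cmclconf.ascii
cmapplyconf -C /etc/cmcluster/cmclconf.ascii

On older ServiceGuard releases the cluster may need to be halted (cmhaltcl) before NODE_TIMEOUT can be changed, so plan a maintenance window.]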
09-19-2003 04:03 AM
Re: cmcluster High Availability
Alberto,
as stated in the previous post, if this happens frequently, and/or your network gets clogged occasionally, and/or you have only 10 Mbit interfaces, you should set the NODE_TIMEOUT in your cluster to something like 8-10 seconds (at least 5). 8 seconds corresponds to a value of "8000000".
It *may* help.
Second, the other message with mpc_bindlwp is something that, for HP-UX 11.0, should be resolved by the base patch PHKL_18543. So you might consider installing a couple of patches.
Regards,
Bernhard
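[Editor's note: to check whether that patch, or one superseding it, is already installed, something like the following should work. This is a sketch; on HP-UX 11.0 patches are listed as products, and swlist output details vary by release:

# Show the patch if it is installed
swlist -l product PHKL_18543
# Or search the full installed-product list for it
swlist -l product | grep PHKL_18543]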