03-02-2010 05:03 PM
cluster error
bugcheck code = 000005DC: CLUEXIT, node voluntarily exiting VMScluster.
Please advise what could be wrong.
03-02-2010 05:41 PM
Re: cluster error
Assuming you're able to boot both nodes without networking, this probably means there is no quorum disk and expected votes isn't appropriately configured. If this is the case:
When you boot the two nodes without network connectivity, and with votes that allow this configuration, each node forms its own instance of the cluster, so you have two instances of the same cluster running. When you connect the networking cable and the nodes see each other, one node opts to exit the cluster gracefully.
Short-term solution: plug in the networking cables and boot the second node.
Long-term solution: review VOTES and EXPECTED_VOTES, and consider what storage is in place. A quorum disk is generally used with two nodes in a cluster so that the cluster can continue with only one node available. With two clustered nodes, a crossover cable on a second network interface allows uninterrupted cluster communications when networking support wants to upgrade their switch firmware or test redundant power supplies...
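The quorum arithmetic behind the VOTES/EXPECTED_VOTES review can be sketched in Python. This is a conceptual sketch, not OpenVMS code; the names mirror the SYSGEN parameters mentioned above, and the quorum formula is the standard (EXPECTED_VOTES + 2) // 2 integer calculation:

```python
def quorum(expected_votes: int) -> int:
    """OpenVMS derives cluster quorum as (EXPECTED_VOTES + 2) // 2."""
    return (expected_votes + 2) // 2

# Two nodes with VOTES=1 each and no quorum disk:
# EXPECTED_VOTES = 2, so quorum = 2 and both nodes must be present.
assert quorum(2) == 2

# Add a quorum disk contributing QDSKVOTES=1: EXPECTED_VOTES = 3,
# quorum is still 2, so one node plus the quorum disk (1 + 1 = 2 votes)
# can keep the cluster running when the other node is down.
assert quorum(3) == 2
assert 1 + 1 >= quorum(3)
```

This is why the quorum disk matters in a two-node configuration: without it, losing either node drops the surviving node below quorum and the cluster hangs.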
03-02-2010 11:55 PM
Re: cluster error
I concur with Andy. The original post does not mention whether the storage is shared between the systems or local to each.
If the storage is shared, booting two non-communicating nodes will likely corrupt the data stored on the disks. DO NOT boot cluster members if their cluster communications are disconnected but their connection to shared storage is working. That is a recipe for severe problems.
- Bob Gezelter, http://www.rlgsc.com
03-03-2010 04:14 AM
Re: cluster error
You don't mention what hardware these systems are.
My guess (and it is a guess, since I have no idea of the environment, similar to the disk device name question you had) is that the two nodes are meant to be part of the same cluster, but wb3 and wb4 have booted independently and you're now trying to join them into one cluster from a fully booted state. This won't work.
VMSclusters rely on shared views of data structures across all of the nodes in the cluster. If a node goes away for a period of time and then seeks to rejoin, it will crash and rejoin the cluster from a known state (i.e. by rebooting) so that it can rebuild its view of the shared data structures.
Similarly, if you boot two nodes for the same VMScluster separately and then try to bring the two booted nodes together, one of them will crash with a CLUEXIT and will reboot to join the cluster from a known state of being rebooted.
In other words, if both nodes are booted and you plug the network cable into wb4 then I would expect one of them to crash. It's expected behaviour.
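The merge rule described above can be sketched conceptually in plain Python. This is purely illustrative of the behaviour, not of how OpenVMS implements it; the function and return values are invented for the sketch:

```python
def on_nodes_see_each_other(local_formed_cluster: bool,
                            remote_formed_cluster: bool) -> str:
    """Illustrative rule: if both sides have already formed their own
    instance of the same cluster, one of them must CLUEXIT and reboot
    so it can rejoin from a known state; a node that is still booting
    simply joins the existing cluster."""
    if local_formed_cluster and remote_formed_cluster:
        return "CLUEXIT"   # one node crashes, reboots, then rejoins
    return "JOIN"          # normal cluster join during boot

# Two independently booted nodes meeting on the wire -> one CLUEXITs.
assert on_nodes_see_each_other(True, True) == "CLUEXIT"
# A node booting into an existing cluster joins without incident.
assert on_nodes_see_each_other(False, True) == "JOIN"
```

So the CLUEXIT in the original post is the cluster doing exactly what it is designed to do when two already-formed instances meet.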
You need to ensure that the two nodes do not share storage while booted separately; that would corrupt the disks very quickly. You also need to ensure that, if the systems boot from the same disk, they boot from separate directory trees.
If you have had both nodes running with shared storage mounted on both, then you're likely to have to get a backup tape out and restore all of the data disks...
Steve