10-22-2004 01:50 AM
Re: remove a quorum disk when adding a 3rd node to cluster
Do you really want to remove the quorum disk? With three nodes you have to have two nodes up to maintain quorum.
That is NOT true.
Well... it IS true if two nodes crash on you at the same time...
But if you do a normal shutdown of one node first (and DO NOT forget the REMOVE_NODE option), then as soon as that node is down you can do the same to another node, and continue happily with a one-node cluster.
The same scheme only gets a little trickier if you have four (or more) nodes that you bring down to one. THEN, to get back, you will have to boot your first returning node conversationally and set a lower EXPECTED_VOTES (see above) to prevent it hanging until the next one boots and restores quorum.
Actually, to be exact, read this as VOTES, not nodes, in case not all nodes have equal votes...
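The vote arithmetic behind this can be sketched as follows. OpenVMS computes cluster quorum as floor((EXPECTED_VOTES + 2) / 2); the three-node scenario below is illustrative, assuming one vote per node and no quorum disk:

```python
def quorum(expected_votes: int) -> int:
    """OpenVMS cluster quorum: floor((EXPECTED_VOTES + 2) / 2)."""
    return (expected_votes + 2) // 2

# Three one-vote nodes, no quorum disk: quorum is 2,
# so two simultaneous crashes hang the survivor.
assert quorum(3) == 2

# Orderly SHUTDOWN with REMOVE_NODE lowers expected votes first:
assert quorum(2) == 2   # two voting nodes left: still quorate
assert quorum(1) == 1   # down to one node: it keeps running alone
```

The point of REMOVE_NODE is that each step lowers the expected-votes count *before* the node leaves, so the remaining members never drop below the (now smaller) quorum.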
hth
Cheers.
Have one on me.
Jan
10-22-2004 01:56 AM
Re: remove a quorum disk when adding a 3rd node to cluster
thanks
Paul
who real soon is going to have many drinks!
10-22-2004 02:44 AM
Re: remove a quorum disk when adding a 3rd node to cluster
that is definitely WITHOUT.
In my view (not necessarily everybody's), a quorum disk is just a tiny, stupid trick, regrettably necessary if you are unfortunate enough to really need a cluster but are painfully restricted to two, both potentially lonely, nodes...
I am gonna join a 100 KM traffic jam now, and when I have conquered that I will join you in spirit. I have some good beers cold.
Cheers.
Have one on me.
Jan
10-22-2004 02:48 AM
Re: remove a quorum disk when adding a 3rd node to cluster
Jan, I hope you're doing well... I really HATE traffic jams!
10-22-2004 04:09 AM
Re: remove a quorum disk when adding a 3rd node to cluster
The downside of this is that after a node leaves unexpectedly (e.g. power supply failure, cluster interconnect hardware failure, or Control-P/Halt), after the RECNXINTERVAL period has elapsed, it will take additional time (up to 4 times QDSKINTERVAL seconds) to re-validate the quorum disk's votes, and that can cause a delay in the cluster regaining quorum. With the old default value of 10 seconds for QDSKINTERVAL, that could take up to 40 seconds, which was a long time. With the new default value of 3 seconds for QDSKINTERVAL, that could now take up to 12 seconds.
And keep in mind that anytime you use the REMOVE_NODE option on SHUTDOWN to take the cluster below a majority of the potential votes, you have voluntarily created a situation where the quorum scheme cannot totally protect you against a partitioned cluster. For example, if you take the cluster down to a single node, and it continues to run, but its LAN adapter or whatever you're using for a cluster interconnect fails, then there's nothing to prevent the other 2 nodes from booting, forming a separate cluster (as 2 votes are enough to achieve quorum in this case), and trashing the shared SAN disks. (That is, there's nothing to prevent this happening other than your human intervention as a system manager to prevent someone from trying to boot the other 2 nodes at once).
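The partitioning risk described above falls straight out of the quorum formula (floor((EXPECTED_VOTES + 2) / 2)); the numbers below are a sketch of the three-node example, assuming one vote per node:

```python
def quorum(expected_votes: int) -> int:
    """OpenVMS cluster quorum: floor((EXPECTED_VOTES + 2) / 2)."""
    return (expected_votes + 2) // 2

# The lone survivor, after two REMOVE_NODE shutdowns, runs with
# a lowered expected-votes count of 1:
assert quorum(1) == 1   # its single vote keeps it quorate

# The other two nodes boot fresh with EXPECTED_VOTES=3 from their
# own parameter files:
assert quorum(3) == 2   # their combined 2 votes reach quorum too

# Both sides are quorate at the same time -> if the interconnect is
# down, nothing in the quorum scheme stops a partitioned cluster.
```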
10-22-2004 04:18 AM
Re: remove a quorum disk when adding a 3rd node to cluster
OK, let me step back one step and describe the environment...
We have 3 NIC cards in each machine: one we use as a private LAN to a vendor, the second is the main cluster interconnect, and the third is for the general LAN and a failover for the cluster communications... we actually have a COM file that changes the PEdriver paths so that we come up this way. We do have the shared SAN disks, plus local disks for page and swap.
So if that changes anyone's thinking, let me know...
thanks
Paul
10-22-2004 05:18 AM
Re: remove a quorum disk when adding a 3rd node to cluster
Enabling cluster communications on your 1st LAN (but lowering its priority under SCACP, so that while PEDRIVER will track its status and availability using periodic Hello packets, it doesn't actually get used unless and until both of the other 2 links fail) could give you 3X redundancy instead of 2X, reducing the risk even further. But you might consider that to be overkill, especially if you're tracking failures on the existing two paths using LAVC$FAILURE_ANALYSIS.
10-22-2004 05:40 AM
Re: remove a quorum disk when adding a 3rd node to cluster
I already knew that, where I am pessimistic about Murphy's Law (yes, he was an optimist!), I have to admit that you are paranoid about it. (You must have discussed this a lot with Tom Speake, I guess. He was equally paranoid on the issue.)
And yes, if your disk communication path cannot also function as an SCS path, like a shared SCSI bus, you ARE fundamentally right.
SAN interconnect is a class of its own. It was introduced to VMS NOT supporting SCS, but if my memory serves me well, it carries cluster traffic since V7.3-2 (or was it some ECO, or was it intended but not yet there? I cannot check that right now).
If, however, the path to the disks CAN also function as an interconnect (like the good old CI, DSSI, or a SAN that supports SCS...), then I cannot see a way to a partitioned cluster. Teach me if I am not yet pessimistic enough.
Uwe,
not too bad today, made it within 2 hours, that's over an hour better than yesterday.
The way to fight it really is to "lean back and enjoy the music".
Getting stressed will not win you 5 seconds, and it is very bad for your health!
Cheers.
Join me in a good beer.
Jan
10-22-2004 09:36 AM
Re: remove a quorum disk when adding a 3rd node to cluster
Support for Fibre Channel as a LAN (including cluster interconnect support) is presently slated for VMS version 8.3.
10-24-2004 08:24 PM
Re: remove a quorum disk when adding a 3rd node to cluster
Give each node 1 vote and keep the quorum disk with 2 votes. Set expected votes to 5.
Thus one node can start the cluster on its own (if it sees the quorum disk), no split clusters are possible, and the complete cluster can survive a quorum disk loss.
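This vote scheme can be checked arithmetically against the quorum formula (floor((EXPECTED_VOTES + 2) / 2)); a sketch, assuming the 3-node / 2-vote-quorum-disk setup just described:

```python
def quorum(expected_votes: int) -> int:
    """OpenVMS cluster quorum: floor((EXPECTED_VOTES + 2) / 2)."""
    return (expected_votes + 2) // 2

NODE_VOTES, QDISK_VOTES, EXPECTED = 1, 2, 5
q = quorum(EXPECTED)
assert q == 3

# One node that sees the quorum disk reaches quorum alone:
assert NODE_VOTES + QDISK_VOTES >= q
# All three nodes together survive losing the quorum disk:
assert 3 * NODE_VOTES >= q
# No split cluster: the quorum disk can only back one partition,
# and the side without it musters at most 2 votes:
assert 2 * NODE_VOTES < q
```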
It all depends on what you want.
Wim