Operating System - OpenVMS

Clustering and quorum disk

Russ Carraro
Regular Advisor

Clustering and quorum disk

Customer has removed two of the four OpenVMS nodes in their cluster. Currently expected_votes is 7, qdskvotes is 3 and votes is 1. They're changing expected_votes to 3 and qdskvotes to 1. They'd also like to change the quorum disk, which is connected via an HSD30, from a "disk" device to a mirrorset, so that OpenVMS still sees it as a non-shadowed device but they can replace the drive if it fails. As it now stands, what would happen if the quorum disk is dismounted? Will initializing a mirror erase the QUORUM.DAT information? Basically, what would happen if they dismount the quorum disk, log onto the controller and create the mirrorset?
Uwe Zessin
Honored Contributor

Re: Clustering and quorum disk

Unless the disk container was set to TRANSPORTABLE, the size will not change if you move from a single disk to a mirrorset.

I can't check right now, but I think you can turn the disk transparently into a mirrorset. From memory:
> mirror disk100 m1
> set m1 nopolicy
> set m1 membership=2
> set m1 replace=disk200
> set m1 policy=best_performance
Robert Gezelter
Honored Contributor

Re: Clustering and quorum disk


Personally, I would have preferred doing this transition BEFORE the two nodes were removed, however, things are as they are.

If I am reading things correctly, after changing the votes, EXPECTED_VOTES will be 3 and each of the three nodes (two systems, one quorum disk) will have a single vote. If that is done correctly, the failure of a single node will not cause a problem.
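For reference, OpenVMS derives the quorum value as (EXPECTED_VOTES + 2) / 2 with integer division. A minimal sketch of the arithmetic for the configuration above (Python, just to check the numbers, not VMS syntax):

```python
def quorum(expected_votes):
    # OpenVMS quorum formula (integer division)
    return (expected_votes + 2) // 2

# Two nodes at 1 vote each plus a 1-vote quorum disk, EXPECTED_VOTES = 3
q = quorum(3)
assert q == 2

# If one node fails, the surviving node plus the quorum disk still
# contribute 1 + 1 = 2 votes, so the cluster keeps quorum
assert 1 + 1 >= q
```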

However, extreme caution (or outside assistance) is recommended.

- Bob Gezelter, http://www.rlgsc.com
Jon Pinkley
Honored Contributor

Re: Clustering and quorum disk


I see that there have been two responses since I started this note, so there is some redundant info.

I've never used an HSD30, but it appears to be similar to an HSZ40, only with a DSSI host interface.

If that's the case, then there is no need to dismount the unit to convert it from a JBOD device to a mirrorset. To do this, use the HSOF command MIRROR disk_device mirrorset_name, which converts the disk into a single-member mirrorset. You can then add other disks to the mirrorset. This can all be done while the unit is in use by VMS. All contents of the disk are preserved, provided this was not a "transportable" disk (one without the controller metadata); transportable was not the default.

Example, assuming the quorum disk is unit 100, the underlying disk is DISK201, and you want to add DISK302 to the new mirrorset.

HSD> mirror disk201 qdisk ! creates QDISK mirrorset
HSD> set qdisk nopolicy ! this is the default
HSD> set qdisk membership=2
HSD> set qdisk replace=disk302 ! adds DISK302 to QDISK mirrorset
HSD> set qdisk policy=best_performance

Changing the votes of the quorum disk is going to require a cluster reboot (if I am remembering correctly). There is no need to change it right away; however, the loss of the quorum disk will cause the cluster to hang as things stand. You could change the votes of each member to 2 and reboot them one at a time. This will allow the two remaining members to form a cluster without the quorum disk, and allow one node plus the quorum disk to maintain quorum.
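The arithmetic behind that suggestion can be checked quickly (a sketch, assuming qdskvotes stays at 1; OpenVMS computes quorum as (EXPECTED_VOTES + 2) / 2 with integer division):

```python
def quorum(expected_votes):
    # OpenVMS quorum formula (integer division)
    return (expected_votes + 2) // 2

# VOTES = 2 per node, QDSKVOTES = 1, so EXPECTED_VOTES = 2 + 2 + 1 = 5
q = quorum(5)
assert q == 3

assert 2 + 2 >= q   # both nodes can form a cluster without the quorum disk
assert 2 + 1 >= q   # one node plus the quorum disk also holds quorum
```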

So this is what I would do.

1. mirror the quorum disk, and add second member.

2. modify modparams.dat to increase votes from 1 to 2 for each node.

3. Run Autogen
$ @sys$update:autogen savparams genparams
$! verify that parameters look ok
$ differences sys$system:setparams.dat /parallel /match=1
$! write the parameters for next reboot
$ @sys$update:autogen setparams setparams

The next time the systems reboot, they will get more say in cluster membership. But there is no urgent need to reboot, the quorum disk is now more "reliable" than it used to be.

Summary: 1+1+1 with expected_votes=3 is essentially the same as 2+2+3 with expected_votes=7. So I don't see a big advantage to changing expected_votes from 7 to 3, given the required cluster shutdown.
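That equivalence checks out arithmetically; a quick comparison of the two schemes (Python sketch of the quorum formula, (EXPECTED_VOTES + 2) / 2 with integer division):

```python
def quorum(expected_votes):
    # OpenVMS quorum formula (integer division)
    return (expected_votes + 2) // 2

# (node_votes, qdskvotes, expected_votes) for each scheme
for node, qdsk, exp in ((2, 3, 7), (1, 1, 3)):
    q = quorum(exp)
    # both schemes survive the loss of one node...
    assert node + qdsk >= q
    # ...and both survive the loss of the quorum disk
    assert node + node >= q
```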

Good luck,

it depends
Honored Contributor

Re: Clustering and quorum disk

I'd tell my customer that there's nothing of lasting value on a quorum disk that's specific to OpenVMS; that the quorum.dat file can be rebuilt, and that its contents need not be preserved.

Here, I'd set up my customer with the new disk configured into a RAIDset, possibly with one member for now. (If my customer had quorum presently at two and had both nodes running, I'd expect I could pull the quorum disk offline here, too, and relocate that into the RAIDset.) This assumes the new quorum disk has a new device name.

Next, I'd suggest that my customer reboot to reset the system parameters: votes, expected_votes, qdskvotes and the disk_quorum device name. I'm here assuming the quorum device name will change.

As part of this, I'd boot both nodes to reach quorum (2, assuming each node has one and the qdisk has one), then mount up the quorum disk. When the dust settles, the quorum.dat file should be recreated on the (new) quorum disk, and all will be right.

(I would not encourage my customer to yank the quorum disk out from underneath a running cluster. Not without trying that on a parallel configuration. I'd set up for and reboot, unless there's a particular requirement for performing this fully-online.)

Russ Carraro
Regular Advisor

Re: Clustering and quorum disk

Thanks for all the help.