Operating System - OpenVMS

Re: Changing the number of votes for the quorum disk

 
Carleen Nutter
Advisor

Changing the number of votes for the quorum disk

What is the proper way to change the number of votes the quorum disk contributes? Is it enough to change QDSKVOTES on each node (edit MODPARAMS and WRITE CURRENT) and reboot the cluster, or does something else need to happen?
John Gillings
Honored Contributor

Re: Changing the number of votes for the quorum disk

Carleen,

That's about the size of it. But I wouldn't use "WRITE CURRENT"; rather, edit MODPARAMS and use AUTOGEN to adjust the values.

Note that you should probably do a full cluster reboot, rather than a rolling reboot.

Something that isn't necessary, but that I'd recommend, is that all the SYSGEN parameters which need to be identical across the cluster be stored in an AGEN$INCLUDE_PARAMS file in your cluster common area. That means you only need to change them once, and you can be somewhat more confident that all your nodes agree. The only time values will be out of sync is if you've made a change and not run AUTOGEN on one of the nodes. This can be checked automatically by comparing the modification date on the common parameter file with that of SYS$SYSTEM:SETPARAMS.DAT.
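That freshness check could be sketched like this; a minimal Python illustration, where the function name and the idea of passing plain file paths are my own invention, not anything VMS-specific:

```python
import os

def params_out_of_sync(common_params: str, setparams: str) -> bool:
    """True if the cluster-common parameter file is newer than this
    node's SETPARAMS.DAT, i.e. AUTOGEN has not been run on this node
    since the common file last changed."""
    return os.path.getmtime(common_params) > os.path.getmtime(setparams)
```

On a real system the two paths would be the cluster-common CLUSTERPARAMS.DAT and SYS$SYSTEM:SETPARAMS.DAT, and the same comparison could be done in DCL with the F$FILE_ATTRIBUTES lexical function.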

So, for example, your MODPARAMS.DAT might have the lines:

! Cluster common parameters
AGEN$INCLUDE_PARAMS CLUSTER$COMMON:CLUSTERPARAMS.DAT
!

Obviously you need to have node specific stuff like SCSNODE in there as well, but anything that is the same on all nodes goes in CLUSTER$COMMON:CLUSTERPARAMS.DAT. For example (4 node cluster with quorum disk):

VAXCLUSTER=2
VOTES=1
DISK_QUORUM="$46$DIA31 "
QDSKVOTES=4
EXPECTED_VOTES=8
QDSKINTERVAL=10

NISCS_LOAD_PEA0=1
NISCS_PORT_SERV=0
MIN_NISCS_MAX_PKTSZ=4468
MSCP_LOAD=1
MSCP_SERVE_ALL=1
TMSCP_LOAD=1
TMSCP_SERVE_ALL=1
!
ALLOCLASS=46
!
MIN_SCSCONNCNT=40
!
PAGEFILE=0
SWAPFILE=0
DUMPFILE=0
dumpstyle=1
!
! security compliance
!
LGI_BRK_TMO=720
LGI_HID_TIM=86400
MAXSYSGROUP=7
MIN_MAXBUF=4096
!
TTY_DEFCHAR =%x180010B8 ! 24 lines+SCOPE+(NOWRAP)+LOWER+TTSYNC+HOSTSYNC+ESCAPE
TTY_DEFCHAR2=%x00023002 ! DISCONNECT+EDITING+INSERT+AUTOBAUD
A crucible of informative mistakes
Carleen Nutter
Advisor

Re: Changing the number of votes for the quorum disk

I have a 4 node cluster. Each node gets 1 vote. I have QDSKVOTES=3 and EXPECTED_VOTES=7. Sometimes I need only 1 node up, so with QDSKVOTES=3, I can do that.
The problem is, a SHOW CLUSTER command indicates that the quorum disk is only contributing 1 vote. I did pass this scheme by tech support a few months ago.

With all 4 nodes booted, SHOW CLUSTER says


CL_EXP = 7
CL_QUORUM=4
CL_VOTES=5
CL_QDV=1
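For what it's worth, a quick arithmetic check on those numbers (an illustrative sketch; the variable names are made up): CL_VOTES=5 is exactly what you would see if the quorum disk were contributing only 1 vote rather than the configured 3.

```python
# Illustrative check of the SHOW CLUSTER numbers (variable names made up)
nodes, votes_per_node = 4, 1      # four nodes, VOTES=1 each
cl_votes_observed = 5             # CL_VOTES reported by SHOW CLUSTER

# Votes the quorum disk is actually contributing:
qdisk_contribution = cl_votes_observed - nodes * votes_per_node

# What CL_VOTES should read if QDSKVOTES=3 were in effect:
expected_cl_votes = nodes * votes_per_node + 3

print(qdisk_contribution, expected_cl_votes)  # 1 7
```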




Martin P.J. Zinser
Honored Contributor

Re: Changing the number of votes for the quorum disk

Hello Carleen,

did you already try running with just one node up? If yes, it sounds more like a display problem; if not, there might be a real issue. Can you give us a few more details on your setup, like the VMS version you are using?

Greetings, Martin
Lokesh_2
Esteemed Contributor

Re: Changing the number of votes for the quorum disk

Hi,

AFAIK, the quorum disk votes (CL_QDV) in the cluster are calculated as the minimum of QDSKVOTES on any node of the cluster. You need to check QDSKVOTES on all the nodes of your cluster.

Are you using satellite nodes in your cluster ?

Thanks & regards,
Lokesh Jain
What would you do with your life if you knew you could not fail?
Mobeen_1
Esteemed Contributor

Re: Changing the number of votes for the quorum disk

Carleen,
You can change the EXPECTED_VOTES system parameter by two methods:

Method #1
1. USE CURRENT
2. SET
3. WRITE CURRENT
4. Modify MODPARAMS.DAT

Method #2
1. Modify MODPARAMS.DAT
2. Run AUTOGEN with the SETPARAMS phase

In Method #1, the values are changed in your current parameter database, and once you reboot your node the values will be in the SYSGEN database permanently.

In Method #2, the values will take effect upon a reboot of the node.

The practice I have been following is Method #1.

regards
Mobeen
Carleen Nutter
Advisor

Re: Changing the number of votes for the quorum disk

Some clarifications:
The VMS version is 7.2-2 with patches.

Each of the 4 nodes (via show current and show active):
Votes=1
Expected_votes=7
Qdskvotes=3

There are no satellite nodes.

Prior to about 10 days ago, this was a 3 node cluster with 1 satellite, and the quorum disk had only 1 vote.
Since this is a production cluster, I don't have the luxury of shutting nodes down to see if it's a display issue or a real issue.
I did reboot 1 node yesterday and noted that the CL_QUORUM value stayed at 4 (as it should have), but CL_VOTES went from 5 to 4 and CL_QDV stayed at 1. It could still be a display issue, but I am unsure and don't want to be surprised when/if I lose 2 nodes, or shut down 2 nodes, and have the rest of the cluster hang.

My thoughts are that it's a display problem, or that I missed a step when changing QDSKVOTES from 1 to 3. Should the QUORUM.DAT file on the quorum disk get updated in some manner? The modify date on that file did not change when I added the 4th node and changed QDSKVOTES.
Eberhard Wacker
Valued Contributor

Re: Changing the number of votes for the quorum disk

Hello Carleen,

after you finished setting up your final configuration: was the cluster down in TOTAL at least once? This is a really important item regarding this discussion!
The quorum disk votes (CL_QDV) will remain at 1 until this has been done (at least I think so, and I'm quite sure about it, but I'm not able to test it; I do not have a cluster of my own where I can do what I want).

A few hints regarding your configuration:

Quorum disk votes of 3 are only necessary to let 1 node keep running when all the others are down (e.g. shut down without the /REMOVE_NODE option, and/or crashed). Partitioning is avoided via the expected votes setting.
If you use quorum disk votes = 1 and expected votes = 5, the quorum is 3, i.e. 2 nodes can crash and your cluster is still running. You can then adjust your remaining running configuration with the DCL command SET CLUSTER/EXPECTED_VOTES. After this even the 3rd node can crash and the last node will continue to run.
With this configuration you have only a minor problem booting the very first node alone after the whole cluster was shut down. This can be resolved by doing a conversational boot and setting expected votes to 1, 2 or 3. The node will boot and form the cluster with a temporary cluster quorum of 2. The next node can boot normally; due to its setting of expected votes 5, the cluster quorum will now be set to 3 (and this is fulfilled by the now running participating quorum contributors).
If you use the officially recommended value of 3 for the quorum disk votes, then you can get into trouble when the quorum disk becomes defective. If in addition one of the nodes crashes, the cluster will hang (maybe this can be avoided with a SET CLUSTER/EXPECTED_VOTES=4 if you have time to do it, but I don't know if that will really work in such a case).

There are ways to keep a cluster running after a hang: via CTRL-P and quickly entering a few instructions at console level, or by using the features of AMDS / Availability Manager.
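The arithmetic behind these scenarios follows directly from the quorum formula. A small sketch (the function names are my own invention, illustrating the rule, not any VMS utility):

```python
def quorum(expected_votes: int) -> int:
    """OpenVMS quorum rule: (EXPECTED_VOTES + 2) / 2, rounded down."""
    return (expected_votes + 2) // 2

def cluster_survives(nodes_up: int, votes_per_node: int,
                     qdisk_votes: int, expected_votes: int) -> bool:
    """The cluster keeps running while the votes still present
    meet or exceed quorum."""
    present = nodes_up * votes_per_node + qdisk_votes
    return present >= quorum(expected_votes)

# Scenario with QDSKVOTES=1, EXPECTED_VOTES=5 -> quorum 3,
# so two of the four nodes can crash and the cluster keeps running:
assert quorum(5) == 3
assert cluster_survives(2, 1, 1, 5)
assert not cluster_survives(1, 1, 1, 5)

# The intended setup with QDSKVOTES=3, EXPECTED_VOTES=7 -> quorum 4,
# and a single node plus the quorum disk (1 + 3 = 4) can stay up:
assert quorum(7) == 4
assert cluster_survives(1, 1, 3, 7)
```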

At last:
It could be that there is a VMS software bug. We do not use V7.2-2, so I cannot prove any statement regarding this software version. But with V7.2-1H1 we did have a cluster quorum adjustment problem when shutting down a node with the /REMOVE_NODE option!!! We never got an official solution for it; none of the released patches solved the problem. It seemed that we were the only customer in the whole wide world who had this problem. Unbelievable to me, but so it seemed.
Our workaround was the manual execution of $ SET CLUSTER/EXPECTED_VOTES (as described above) after having shut down two nodes of this 10 node cluster.
Now the positive aspect (for us): this problem did NOT reoccur after upgrading to VMS V7.3-1!!!
Mobeen_1
Esteemed Contributor

Re: Changing the number of votes for the quorum disk

Carleen,
Please check this out; it should give you enough information.

1. VOTES

2. EXPECTED VOTES

The following definitions/formulas need to be kept in mind

1. When nodes in the OpenVMS Cluster boot, the connection manager uses the largest value for EXPECTED_VOTES of all systems present to derive an estimated quorum value according to the following formula:
Estimated quorum = (EXPECTED_VOTES + 2)/2 | Rounded down

2. During a state transition, the connection manager dynamically computes the cluster quorum value to be the maximum of the following:
- The current cluster quorum value
- The value calculated from the following formula, where EXPECTED_VOTES is the largest value specified by any node in the cluster:
QUORUM = (EXPECTED_VOTES + 2)/2 | Rounded down
- The value calculated from the following formula, where VOTES is the total votes held by all cluster members:
QUORUM = (VOTES + 2)/2 | Rounded down
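The state-transition rule above can be expressed directly. A minimal sketch (the function name is hypothetical):

```python
def dynamic_quorum(current_quorum: int,
                   largest_expected_votes: int,
                   total_votes: int) -> int:
    """Cluster quorum recomputed at a state transition: the maximum of
    the current quorum, (EXPECTED_VOTES + 2)//2, and (VOTES + 2)//2."""
    return max(current_quorum,
               (largest_expected_votes + 2) // 2,
               (total_votes + 2) // 2)

# With EXPECTED_VOTES=7 the quorum is 4, matching the CL_QUORUM=4 in the
# SHOW CLUSTER output posted earlier, even though CL_VOTES is only 5.
assert dynamic_quorum(0, 7, 7) == 4
assert dynamic_quorum(4, 7, 5) == 4
```

Note that because the current quorum is one of the inputs to the maximum, quorum never drops on its own; it only comes down via SET CLUSTER/EXPECTED_VOTES or a full cluster reboot.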

The following link will give you adequate reading on VMS cluster configs

http://broadcast.ipv7.net:81/openvms-manual/72final/4477/4477pro_002.html

Let me know if you need any specific information

regards
Mobeen
Henk Ouwersloot
Advisor

Re: Changing the number of votes for the quorum disk

Hello Carleen,

I did a short test on my cluster. The setup you are using is OK (if set on every node in your cluster):

VOTES = 1
EXPECTED_VOTES = 7
QDSKVOTES = 3

Please check the following:

1 - SHOW CLUSTER/CONT
2 - ADD FORMED

If the DATE/TIME in the field "FORMED" is before the time you changed the parameter QDSKVOTES, then you need to reboot your entire cluster.

This MUST be a cluster reboot and not node by node! This should solve your problem.

Kind Regards,
Henk