<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Clustering and quorum disk in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117919#M90481</link>
    <description>I'd tell my customer that there's nothing of lasting value on a quorum disk that's specific to OpenVMS; the quorum.dat file can be rebuilt, and its contents need not be preserved.&lt;BR /&gt;&lt;BR /&gt;Here, I'd set up my customer with the new disk configured into a RAIDset, possibly with one member for now.  (If my customer presently had quorum at two and had both nodes running, I'd expect I could pull the quorum disk offline here, too, and relocate it into the RAIDset.)  This assumes the new quorum disk has a new device name.&lt;BR /&gt;&lt;BR /&gt;Next, I'd suggest that my customer reboot to reset the votes, expected_votes, and qdskvotes system parameters and the disk_quorum device name value.  I'm assuming here that the quorum device name will change.&lt;BR /&gt;&lt;BR /&gt;As part of this, I'd boot both nodes to reach quorum (2, assuming each node has one vote and the qdisk has one), then mount up the quorum disk.  When the dust settles, the quorum.dat file should be recreated on the (new) quorum disk, and all will be right.&lt;BR /&gt;&lt;BR /&gt;(I would not encourage my customer to yank the quorum disk out from underneath a running cluster; not without first trying that on a parallel configuration.  I'd set up for a reboot, unless there's a particular requirement for performing this fully online.)&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Mon, 07 Jul 2008 19:10:29 GMT</pubDate>
    <dc:creator>Hoff</dc:creator>
    <dc:date>2008-07-07T19:10:29Z</dc:date>
    <item>
      <title>Clustering and quorum disk</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117915#M90477</link>
      <description>Customer has removed two of the four OpenVMS nodes in their cluster.  Currently expected_votes is 7, qdskvotes is 3, and votes is 1.  They're changing expected_votes to 3 and qdskvotes to 1.  They'd like to change the quorum disk, which is connected via an HSD30, from a "disk" device to a "mirror set" so that OpenVMS still sees it as a non-shadowed device but they can replace the drive if it fails.  As it now stands, what would happen if the quorum disk is dismounted?  Will initializing a mirror erase the quorum.dat information?  Basically, what would happen if they dismount the quorum disk, log onto the controller, and create the mirror set?</description>
      <pubDate>Mon, 07 Jul 2008 16:46:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117915#M90477</guid>
      <dc:creator>Russ Carraro</dc:creator>
      <dc:date>2008-07-07T16:46:46Z</dc:date>
    </item>
    <item>
      <title>Re: Clustering and quorum disk</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117916#M90478</link>
      <description>Unless the disk container was set to TRANSPORTABLE, the size will not change if you move from a single disk to a mirrorset.&lt;BR /&gt;&lt;BR /&gt;I can't check right now, but I think you can turn the disk transparently into a mirrorset. From memory:&lt;BR /&gt;&amp;gt; mirror disk100 m1&lt;BR /&gt;&amp;gt; set m1 nopolicy&lt;BR /&gt;&amp;gt; set m1 membership=2&lt;BR /&gt;&amp;gt; set m1 replace=disk200&lt;BR /&gt;&amp;gt; set m1 policy=best_performance</description>
      <pubDate>Mon, 07 Jul 2008 18:35:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117916#M90478</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2008-07-07T18:35:42Z</dc:date>
    </item>
    <item>
      <title>Re: Clustering and quorum disk</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117917#M90479</link>
      <description>Russ,&lt;BR /&gt;&lt;BR /&gt;Personally, I would have preferred doing this transition BEFORE the two nodes were removed, however, things are as they are.&lt;BR /&gt;&lt;BR /&gt;If I am reading things correctly, after changing the votes, EXPECTED_VOTES will be 3 and each of the three nodes (two systems, one quorum disk) will have a single vote. If that is done correctly, the failure of a single node will not cause a problem.&lt;BR /&gt;&lt;BR /&gt;However, extreme caution (or outside assistance) is recommended.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Mon, 07 Jul 2008 18:59:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117917#M90479</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2008-07-07T18:59:43Z</dc:date>
    </item>
    <item>
      <title>Re: Clustering and quorum disk</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117918#M90480</link>
      <description>Russ,&lt;BR /&gt;&lt;BR /&gt;I see that there have been two responses since I started this note, so there is some redundant info.&lt;BR /&gt;&lt;BR /&gt;I've never used an HSD30, but it appears to be similar to an HSZ40, just with a DSSI host interface.&lt;BR /&gt;&lt;BR /&gt;If that's the case, then there is no need to dismount the unit to convert from a JBOD device to a mirrorset.  To do this, use the HSOF command MIRROR disk_device mirrorset_name, which converts the disk to a single-member mirrorset.  You can then add other disks to the mirrorset.  This can all be done while the unit is being used by VMS.  All contents of the disk are preserved, assuming this was not a "transportable" disk without the metadata; transportable disks were not the default.&lt;BR /&gt;&lt;BR /&gt;Example, assuming the quorum disk is unit 100 backed by DISK201, and you want to add DISK302 to the new mirrorset:&lt;BR /&gt;&lt;BR /&gt;HSD&amp;gt; mirror disk201 qdisk ! creates QDISK mirrorset&lt;BR /&gt;HSD&amp;gt; set qdisk nopolicy ! this is the default&lt;BR /&gt;HSD&amp;gt; set qdisk membership=2&lt;BR /&gt;HSD&amp;gt; set qdisk replace=disk302 ! adds DISK302 to QDISK mirrorset&lt;BR /&gt;HSD&amp;gt; set qdisk policy=&amp;lt;specify automatic replacement policy&amp;gt;&lt;BR /&gt;&lt;BR /&gt;Changing the votes of the quorum disk is going to require a cluster reboot (if I am remembering correctly).  There is no need to change right away; however, as things stand, the loss of the quorum disk will cause the cluster to hang.  You could instead change the votes of each member to 2 and reboot them one at a time.  This will allow the two remaining members to form a cluster without the quorum disk, and allow one node plus the quorum disk to maintain quorum.&lt;BR /&gt;&lt;BR /&gt;So this is what I would do:&lt;BR /&gt;&lt;BR /&gt;1. Mirror the quorum disk, and add the second member.&lt;BR /&gt;&lt;BR /&gt;2. Modify modparams.dat to increase votes from 1 to 2 for each node.&lt;BR /&gt;&lt;BR /&gt;3. Run AUTOGEN:&lt;BR /&gt;$ @sys$update:autogen savparams genparams&lt;BR /&gt;$! verify that parameters look ok&lt;BR /&gt;$ diff sys$system:setparams.dat /par /mat=1&lt;BR /&gt;$! write the parameters for the next reboot&lt;BR /&gt;$ @sys$update:autogen setparams setparams&lt;BR /&gt;&lt;BR /&gt;The next time the systems reboot, they will get more say in cluster membership.  But there is no urgent need to reboot; the quorum disk is now more "reliable" than it used to be.&lt;BR /&gt;&lt;BR /&gt;Summary: 1+1+1 with expected_votes=3 is essentially the same as 2+2+3 with expected_votes=7, so I don't see a big advantage to changing expected_votes from 7 to 3, given the required cluster shutdown.&lt;BR /&gt;&lt;BR /&gt;Good luck,&lt;BR /&gt;&lt;BR /&gt;Jon</description>
      <pubDate>Mon, 07 Jul 2008 19:08:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117918#M90480</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2008-07-07T19:08:43Z</dc:date>
    </item>
    <item>
      <title>Re: Clustering and quorum disk</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117919#M90481</link>
      <description>I'd tell my customer that there's nothing of lasting value on a quorum disk that's specific to OpenVMS; the quorum.dat file can be rebuilt, and its contents need not be preserved.&lt;BR /&gt;&lt;BR /&gt;Here, I'd set up my customer with the new disk configured into a RAIDset, possibly with one member for now.  (If my customer presently had quorum at two and had both nodes running, I'd expect I could pull the quorum disk offline here, too, and relocate it into the RAIDset.)  This assumes the new quorum disk has a new device name.&lt;BR /&gt;&lt;BR /&gt;Next, I'd suggest that my customer reboot to reset the votes, expected_votes, and qdskvotes system parameters and the disk_quorum device name value.  I'm assuming here that the quorum device name will change.&lt;BR /&gt;&lt;BR /&gt;As part of this, I'd boot both nodes to reach quorum (2, assuming each node has one vote and the qdisk has one), then mount up the quorum disk.  When the dust settles, the quorum.dat file should be recreated on the (new) quorum disk, and all will be right.&lt;BR /&gt;&lt;BR /&gt;(I would not encourage my customer to yank the quorum disk out from underneath a running cluster; not without first trying that on a parallel configuration.  I'd set up for a reboot, unless there's a particular requirement for performing this fully online.)&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 07 Jul 2008 19:10:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117919#M90481</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-07-07T19:10:29Z</dc:date>
    </item>
    <item>
      <title>Re: Clustering and quorum disk</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117920#M90482</link>
      <description>Thanks for all the help.</description>
      <pubDate>Tue, 15 Jul 2008 12:37:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/clustering-and-quorum-disk/m-p/5117920#M90482</guid>
      <dc:creator>Russ Carraro</dc:creator>
      <dc:date>2008-07-15T12:37:54Z</dc:date>
    </item>
  </channel>
</rss>