<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: quorum disk question in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/quorum-disk-question/m-p/4287603#M91358</link>
    <description>In a healthy cluster, it really doesn't matter that much.  It becomes an issue if one of the nodes is down for maintenance, the quorum wasn't correctly defined, AND the SCS link between the two somehow becomes disabled.&lt;BR /&gt;&lt;BR /&gt;For my two-node cluster, I have defined the system disk as the quorum disk; that became legal sometime in V7.x of OpenVMS.  As long as both nodes are up, you can form a quorum with two (active CPU) votes.  But if one is down for maintenance, without a quorum disk it would be at least theoretically possible to "split" the cluster (a.k.a. a partitioned cluster) such that cross-cluster locking would be impaired.&lt;BR /&gt;&lt;BR /&gt;I think there has to be a failure of the SCS link, whatever form it takes, but I tried to fool around with this once.  If you have a valid quorum disk and the connection between the nodes isn't right, the SECOND node that attempts to join your cluster cannot get to the quorum disk, because the controller in that case just keeps it locked onto the first cluster member.  I'm not sure I fully understand quite HOW it knows to do that, but based on my experiments it seems to.&lt;BR /&gt;&lt;BR /&gt;In such a circumstance there is a possibility of destructive interference, because intra-cluster locking will not be honored.  This is dangerous.&lt;BR /&gt;</description>
    <pubDate>Wed, 15 Oct 2008 16:09:55 GMT</pubDate>
    <dc:creator>Richard W Hunt</dc:creator>
    <dc:date>2008-10-15T16:09:55Z</dc:date>
    <item>
      <title>quorum disk question</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/quorum-disk-question/m-p/4287602#M91357</link>
      <description>&lt;BR /&gt;Does defining a quorum disk have any bearing on a two-node cluster?&lt;BR /&gt;&lt;BR /&gt;I'm testing for a SAN change, and the two test boxes seem to have no problem with a quorum disk that doesn't exist.&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Oct 2008 15:40:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/quorum-disk-question/m-p/4287602#M91357</guid>
      <dc:creator>Gregg Parmentier</dc:creator>
      <dc:date>2008-10-15T15:40:27Z</dc:date>
    </item>
    <item>
      <title>Re: quorum disk question</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/quorum-disk-question/m-p/4287603#M91358</link>
      <description>In a healthy cluster, it really doesn't matter that much.  It becomes an issue if one of the nodes is down for maintenance, the quorum wasn't correctly defined, AND the SCS link between the two somehow becomes disabled.&lt;BR /&gt;&lt;BR /&gt;For my two-node cluster, I have defined the system disk as the quorum disk; that became legal sometime in V7.x of OpenVMS.  As long as both nodes are up, you can form a quorum with two (active CPU) votes.  But if one is down for maintenance, without a quorum disk it would be at least theoretically possible to "split" the cluster (a.k.a. a partitioned cluster) such that cross-cluster locking would be impaired.&lt;BR /&gt;&lt;BR /&gt;I think there has to be a failure of the SCS link, whatever form it takes, but I tried to fool around with this once.  If you have a valid quorum disk and the connection between the nodes isn't right, the SECOND node that attempts to join your cluster cannot get to the quorum disk, because the controller in that case just keeps it locked onto the first cluster member.  I'm not sure I fully understand quite HOW it knows to do that, but based on my experiments it seems to.&lt;BR /&gt;&lt;BR /&gt;In such a circumstance there is a possibility of destructive interference, because intra-cluster locking will not be honored.  This is dangerous.&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Oct 2008 16:09:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/quorum-disk-question/m-p/4287603#M91358</guid>
      <dc:creator>Richard W Hunt</dc:creator>
      <dc:date>2008-10-15T16:09:55Z</dc:date>
    </item>
    <item>
      <title>Re: quorum disk question</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/quorum-disk-question/m-p/4287604#M91359</link>
      <description>If you're asking this question, I'd encourage reading the following introduction to the quorum scheme and to the VOTES, EXPECTED_VOTES, and related clustering knobs:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://64.223.189.234/node/153" target="_blank"&gt;http://64.223.189.234/node/153&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Short answer: If each node has one vote here (as is typical), you'll need a quorum disk (or a third voting node) to keep the cluster running when one of the two nodes is shut down.  Here's the two-node write-up:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://64.223.189.234/node/569" target="_blank"&gt;http://64.223.189.234/node/569&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;I would discourage "creative" settings of VOTES and EXPECTED_VOTES; the disk data corruptions that can arise are generally irreparable if (when?) those "creative" settings go awry -- short of rolling in your BACKUPs, that is.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Oct 2008 16:12:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/quorum-disk-question/m-p/4287604#M91359</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-10-15T16:12:03Z</dc:date>
    </item>
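    <!-- Hoff's pointers above rest on the quorum arithmetic that the OpenVMS documentation describes: quorum = (EXPECTED_VOTES + 2) / 2, using integer division. A minimal sketch of why a quorum disk keeps a two-node cluster alive when one node goes down (the helper function is ours for illustration, not an OpenVMS API):

    ```python
    def quorum(expected_votes: int) -> int:
        """OpenVMS derives the cluster quorum as (EXPECTED_VOTES + 2) // 2."""
        return (expected_votes + 2) // 2

    # Two nodes, one vote each, no quorum disk:
    # EXPECTED_VOTES = 2, so quorum = 2. If either node goes down,
    # the survivor holds 1 vote, below quorum, and the cluster hangs.
    print(quorum(2))  # 2

    # Add a quorum disk contributing one vote (QDSKVOTES = 1):
    # EXPECTED_VOTES = 3, quorum is still 2, and the surviving node
    # plus the quorum disk together hold 2 votes, so the cluster keeps running.
    print(quorum(3))  # 2
    ```

    With only two node votes, the quorum equals the full vote count, so losing either node hangs the cluster; adding one quorum-disk vote raises EXPECTED_VOTES to 3 while the quorum stays at 2. -->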
    <item>
      <title>Re: quorum disk question</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/quorum-disk-question/m-p/4287605#M91360</link>
      <description>Gregg,&lt;BR /&gt;&lt;BR /&gt;Concur. In a two-node cluster with symmetric VOTES, it is easy to end up with a cluster hang.&lt;BR /&gt;&lt;BR /&gt;The quorum disk is the obvious solution to that.&lt;BR /&gt;&lt;BR /&gt;As Hoff said, care is needed.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Wed, 15 Oct 2008 16:41:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/quorum-disk-question/m-p/4287605#M91360</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2008-10-15T16:41:24Z</dc:date>
    </item>
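    <!-- For concreteness, the advice in this thread might translate into MODPARAMS.DAT entries along these lines. This is a hedged sketch only: the quorum-disk device name is hypothetical, and AUTOGEN should be run after editing so SYSGEN picks up the changes. VOTES, EXPECTED_VOTES, DISK_QUORUM, and QDSKVOTES are the standard OpenVMS system parameters involved.

    ```
    ! MODPARAMS.DAT fragment (sketch) -- run AUTOGEN after editing
    VOTES = 1                  ! votes contributed by this node
    EXPECTED_VOTES = 3         ! 2 node votes + 1 quorum-disk vote
    DISK_QUORUM = "$1$DGA100"  ! hypothetical quorum-disk device name
    QDSKVOTES = 1              ! votes contributed by the quorum disk
    ```
    -->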
  </channel>
</rss>

