<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Migrating from Quorum Disk to Quorum Node in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974711#M75823</link>
    <description>"The systems each have three GB network adapters (on different PCI busses). Two of those are connected to two Cisco 6509 switches. The third is connected to a Cisco 3550-12T that is not connected to the network (essentially acting as a Star Coupler). No SCACP channel prioritization is being done, VMS distributes the SCS traffic on the channels as it sees fit. SCS traffic is seen on all three channels, although more on the third channel that is dedicated to SCS."&lt;BR /&gt;&lt;BR /&gt;Hello, we HAD our config set up this way until recently. We had several problems due to some flappig on our main CISCO switches...the connection did not really go away in both directions, so the SCS traffic did not fail over to the other GIG links through other switches. Several times the links recovered and cluster transition ABORTED..several times a node crashed due to the lost connection. HP support suggested that we set our SCS private network to a higher priority to force traffic to that link. We did that (along with Networking fixing their issue on the Cisco switches). That has solved our problem, so far.&lt;BR /&gt;&lt;BR /&gt;Probably a rare case does not happen often, but it did happen to us. If you need additional info/proof I can probably get that from our other Sys Admin that delt with HP directly on this one.&lt;BR /&gt;&lt;BR /&gt;Good day,&lt;BR /&gt;&lt;BR /&gt;Bill&lt;BR /&gt;</description>
    <pubDate>Fri, 21 Apr 2006 11:05:10 GMT</pubDate>
    <dc:creator>William Brown_2</dc:creator>
    <dc:date>2006-04-21T11:05:10Z</dc:date>
    <item>
      <title>Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974700#M75812</link>
      <description>We have a three-node AlphaServer ES45 OpenVMS cluster running OpenVMS Alpha 7.3-2. Because we want the cluster to survive with only one system, we have a quorum disk with 2 votes, and expected_votes is set to 5. We are going to replace the quorum disk with a quorum node: a DS10 booting off its own internal disk, running OpenVMS Alpha 8.2.&lt;BR /&gt;&lt;BR /&gt;The steps I was planning on taking for the transition from quorum disk to quorum node are the following:&lt;BR /&gt;&lt;BR /&gt;1. Change the modparams files for the three ES45 systems to indicate DISK_QUORUM = "" and QDSKVOTES = 0, leaving EXPECTED_VOTES at 5. Run AUTOGEN to set the parameters.&lt;BR /&gt;2. Copy CLUSTER_AUTHORIZE.DAT from the ES45 system disk to SYS$COMMON:[SYSEXE] on the DS10.&lt;BR /&gt;3. Change EXPECTED_VOTES on the DS10 to 5 and VOTES to 2.&lt;BR /&gt;4. Shut down all systems.&lt;BR /&gt;5. Boot the DS10, and expect it to hang waiting for one additional vote.&lt;BR /&gt;6. Boot one of the ES45 systems; there should then be a working cluster.&lt;BR /&gt;7. Boot the second and third ES45.&lt;BR /&gt;&lt;BR /&gt;Am I missing anything in this plan?&lt;BR /&gt;What problems or issues might I expect to encounter?</description>
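      <!-- A minimal sketch of the parameter changes in steps 1 and 3, in the
           usual MODPARAMS.DAT form; values are taken from the plan above, but
           verify file locations against your own configuration.

           In SYS$SYSTEM:MODPARAMS.DAT on each ES45:
               DISK_QUORUM = ""         ! drop the quorum disk
               QDSKVOTES = 0
               EXPECTED_VOTES = 5       ! unchanged
           In MODPARAMS.DAT on the DS10:
               VOTES = 2
               EXPECTED_VOTES = 5
           Then, on each system (the reboot itself is steps 4 through 7):
               $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK
      -->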
      <pubDate>Thu, 20 Apr 2006 11:51:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974700#M75812</guid>
      <dc:creator>Jim Geier_1</dc:creator>
      <dc:date>2006-04-20T11:51:35Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974701#M75813</link>
      <description>How are these systems connected?&lt;BR /&gt;&lt;BR /&gt;&amp;gt; [...] we want the cluster to survive with&lt;BR /&gt;&amp;gt; only one system [...]&lt;BR /&gt;&lt;BR /&gt;So, in the new scheme, where's the quorum if&lt;BR /&gt;the DS10 and one ES45 go down?&lt;BR /&gt;&lt;BR /&gt;What makes the DS10 better (more reliable?)&lt;BR /&gt;than the quorum disk?&lt;BR /&gt;&lt;BR /&gt;I don't see why you'd do it, but your&lt;BR /&gt;procedure looks plausible.</description>
      <pubDate>Thu, 20 Apr 2006 12:02:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974701#M75813</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2006-04-20T12:02:46Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974702#M75814</link>
      <description>&lt;BR /&gt;"Using an internal disk on the DS-10"  does this mean a single scsi disk?  For a quorum node, you may want to consider some form of redundancy on the system disk disk, either with shadowing or with hardware based raid.&lt;BR /&gt;&lt;BR /&gt;Your plan looks good.  Another, less immediate option, is to have Availablity Manager/AMDS running on the clustered nodes.  You can force quorum to be recalculated on the fly.  The downside of course being that this requires manual intervention, the cluster won't continue.  If you outages on multiple nodes, it can be a useful tool.&lt;BR /&gt;&lt;BR /&gt;Andy&lt;BR /&gt;  &lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 20 Apr 2006 12:09:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974702#M75814</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2006-04-20T12:09:58Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974703#M75815</link>
      <description>&amp;gt; So, in the new scheme, where's the quorum if the DS10 and one ES45 go down?&lt;BR /&gt;&lt;BR /&gt;Its gone. Works by design ;-)&lt;BR /&gt;Learn about AV/AMDS/IPC to recover the cluster.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; What makes the DS10 better (more reliable?) than the quorum disk?&lt;BR /&gt;&lt;BR /&gt;A quorum disk requires multiple watchers that are directly connected to it. The last time I fiddled with it, it did not work nicely on a shared parallel SCSI bus.&lt;BR /&gt;Too many I/Os to the disk cause failures, too.</description>
      <pubDate>Thu, 20 Apr 2006 12:11:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974703#M75815</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2006-04-20T12:11:08Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974704#M75816</link>
      <description>Jim,&lt;BR /&gt;&lt;BR /&gt;think over Steven's remark twice (or maybe 3 or 4 times).&lt;BR /&gt;I can only think of ONE reason: your "production" systems are running some app(s) so flaky that THEY regularly crash your systems (which in itself would be a reason for redesign, but I also know situations where that deep desire is not an option).&lt;BR /&gt;&lt;BR /&gt;Other than that, an odd number of nodes with equal votes is the most stable config (a nice mathematical exercise to prove that!).&lt;BR /&gt;&lt;BR /&gt;A quorum node is really only a gain if you have two active nodes (it has SOME advantage over a quorum disk), or, most specifically, if you have 2 active SITES (provided the quorum node is at a third site).&lt;BR /&gt;If your active sites have more than one node, there are good reasons to have equal total votes PER SITE, spread evenly within each site. (Just use high-enough values per node to reach that condition.)&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me (maybe in May in Nashua?)&lt;BR /&gt;&lt;BR /&gt;jpe&lt;BR /&gt;</description>
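      <!-- The arithmetic behind Jan's stability point, using the standard quorum
           formula quorum = (EXPECTED_VOTES + 2) / 2, rounded down:

               Planned scheme: three ES45s at 1 vote each plus the DS10 at 2,
               EXPECTED_VOTES = 5, so quorum = (5 + 2) / 2 = 3.
                   one ES45 + DS10 up       = 3 votes : cluster continues
                   two ES45s up, DS10 down  = 2 votes : cluster hangs
               Odd-node alternative: three nodes at 1 vote each,
               EXPECTED_VOTES = 3, so quorum = (3 + 2) / 2 = 2.
                   any two nodes up         = 2 votes : cluster continues
      -->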
      <pubDate>Thu, 20 Apr 2006 12:23:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974704#M75816</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-04-20T12:23:38Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974705#M75817</link>
      <description>&amp;gt; Its gone. Works by design ;-)&lt;BR /&gt;&lt;BR /&gt;That's "it's", of course.  I'm just trying&lt;BR /&gt;to see how the new scheme satisfies the&lt;BR /&gt;stated requirement.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Learn about AV/AMDS/IPC to recover the cluster&lt;BR /&gt;&lt;BR /&gt;I've done this.  ("D/I 14 C" is stuck in my&lt;BR /&gt;head for some reason.)  As above, I fail to&lt;BR /&gt;see how the new scheme satisfies the stated&lt;BR /&gt;requirement.&lt;BR /&gt;&lt;BR /&gt;If manual intervention is allowed, who needs&lt;BR /&gt;more than "the three ES45 systems"&lt;BR /&gt;(without even the quorum disk)?&lt;BR /&gt;</description>
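      <!-- For the record, the console sequence Steven is recalling, in its VAX
           form as documented in the cluster manuals (from memory; verify before
           use, and note that recalculating quorum is risky if the cluster might
           be partitioned):

               Ctrl/P                   ! halt the processor
               >>> D/I 14 C             ! deposit C into the software interrupt
                                        ! request register, requesting IPL C,
                                        ! which invokes the IPC handler
               >>> C                    ! continue; the IPC> prompt appears
               IPC> Q                   ! recalculate quorum from current votes
               IPC> Ctrl/Z              ! exit IPC and resume
      -->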
      <pubDate>Thu, 20 Apr 2006 12:23:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974705#M75817</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2006-04-20T12:23:43Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974706#M75818</link>
      <description>The systems each have three Gigabit Ethernet adapters (on different PCI busses). Two of those are connected to two Cisco 6509 switches. The third is connected to a Cisco 3550-12T that is not connected to the network (essentially acting as a Star Coupler). No SCACP channel prioritization is being done; VMS distributes the SCS traffic across the channels as it sees fit. SCS traffic is seen on all three channels, although more on the third channel, which is dedicated to SCS.&lt;BR /&gt;&lt;BR /&gt;Quorum disks are fine as long as they do not fail. I know it is not supposed to happen in theory, but in my experience (and I have worked with clusters since the field test of VAX/VMS V4 in 1984) the typical scenario when a quorum disk fails is that the cluster becomes hung and cannot be recovered without a complete cluster reboot. Not always, but far too often to overlook that possibility as a very likely, even expected, outcome.&lt;BR /&gt;&lt;BR /&gt;I don't expect that the DS10 will have a better MTBF than the quorum disk, but replacing the quorum disk with the DS10 will yield better performance, and I suspect a better recovery scenario when the DS10 fails than what has been experienced when a quorum disk fails.&lt;BR /&gt;&lt;BR /&gt;Regarding the question about what happens when the DS10 AND an ES45 fail: in our current configuration, what happens when the quorum disk AND an ES45 fail at the same time? I don't see a real difference between those two scenarios.</description>
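      <!-- A way to observe the channel distribution described above; SCACP ships
           with OpenVMS 7.3-2, though exact output varies by version (syntax from
           memory; check HELP inside the utility).

               $ MCR SCACP
               SCACP> SHOW LAN_DEVICE   ! per-adapter state and management priority
               SCACP> SHOW CHANNEL      ! channels to each remote node, per device
               SCACP> EXIT
      -->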
      <pubDate>Thu, 20 Apr 2006 13:08:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974706#M75818</guid>
      <dc:creator>Jim Geier_1</dc:creator>
      <dc:date>2006-04-20T13:08:40Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974707#M75819</link>
      <description>&amp;gt; replacing the quorum disk with the DS10 will yield better performance&lt;BR /&gt;&lt;BR /&gt;Agreed; when I played with this, cluster state transitions went _much_ faster, even with a reduced quorum disk polling interval.</description>
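      <!-- The polling interval Uwe mentions is the QDSKINTERVAL system parameter
           (seconds between quorum disk polls); a MODPARAMS.DAT line, with an
           illustrative value only:

               QDSKINTERVAL = 3         ! poll the quorum disk every 3 seconds
      -->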
      <pubDate>Thu, 20 Apr 2006 13:13:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974707#M75819</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2006-04-20T13:13:23Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974708#M75820</link>
      <description>Jim,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt; Because we want the cluster to survive ..&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;... if that implies you want your cluster to also survive this operation, you just have to change the order of your actions somewhat:&lt;BR /&gt;1., 2., 3. as planned.&lt;BR /&gt;Extra action: dismount the Qdsk clusterwide.&lt;BR /&gt;5. (without the hang!)&lt;BR /&gt;4, 6, 7 hybrid: reboot the ES45s, one at a time.&lt;BR /&gt;Result: cluster still running, Qdsk replaced by DS10.&lt;BR /&gt;At no time is there any danger of a split cluster.&lt;BR /&gt;Only between the extra action and step 5 are you running on the verge of quorum (an unexpected node leaving causes a cluster hang, cancelled by the DS10 joining).&lt;BR /&gt;&lt;BR /&gt;--- just a thought experiment by someone who HAS replaced hardware while keeping the cluster available ---&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me (maybe in May in Nashua?)&lt;BR /&gt;&lt;BR /&gt;jpe&lt;BR /&gt;</description>
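      <!-- A DCL sketch of the reordered, no-outage transition above, assuming
           steps 1 through 3 are already done on all nodes; the quorum disk
           device name is an example.

           $ ! From any member, remove the quorum disk everywhere:
           $ DISMOUNT/CLUSTER $1$DGA100:
           $ ! The cluster is now exactly at quorum, so boot the DS10 promptly;
           $ ! its 2 votes restore the margin. Then, on each ES45 in turn:
           $ @SYS$SYSTEM:SHUTDOWN       ! answer the prompts, choosing a reboot
      -->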
      <pubDate>Thu, 20 Apr 2006 13:16:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974708#M75820</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-04-20T13:16:21Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974709#M75821</link>
      <description>Regarding the application, we are running GE/IDX applications using InterSystems Caché database software. The applications are very stable and perform well. Caché is VERY stable and performs extremely well compared to DSM.&lt;BR /&gt;&lt;BR /&gt;We typically have scheduled outages for various things every 2-3 months, and unscheduled individual node outages are running about 1 per quarter since we moved the systems to a new data center a year ago. Prior to that, we had system hardware failures about 2 or 3 times per year. Most failures are memory problems, but 4 or 5 per year does not really make a strong trend.</description>
      <pubDate>Thu, 20 Apr 2006 13:17:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974709#M75821</guid>
      <dc:creator>Jim Geier_1</dc:creator>
      <dc:date>2006-04-20T13:17:39Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974710#M75822</link>
      <description>Good suggestion, Jan.  We do have a scheduled downtime upcoming, so I have a rare time when I can reboot the entire cluster from "cold metal."</description>
      <pubDate>Thu, 20 Apr 2006 15:11:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974710#M75822</guid>
      <dc:creator>Jim Geier_1</dc:creator>
      <dc:date>2006-04-20T15:11:26Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974711#M75823</link>
      <description>"The systems each have three GB network adapters (on different PCI busses). Two of those are connected to two Cisco 6509 switches. The third is connected to a Cisco 3550-12T that is not connected to the network (essentially acting as a Star Coupler). No SCACP channel prioritization is being done, VMS distributes the SCS traffic on the channels as it sees fit. SCS traffic is seen on all three channels, although more on the third channel that is dedicated to SCS."&lt;BR /&gt;&lt;BR /&gt;Hello, we HAD our config set up this way until recently. We had several problems due to some flappig on our main CISCO switches...the connection did not really go away in both directions, so the SCS traffic did not fail over to the other GIG links through other switches. Several times the links recovered and cluster transition ABORTED..several times a node crashed due to the lost connection. HP support suggested that we set our SCS private network to a higher priority to force traffic to that link. We did that (along with Networking fixing their issue on the Cisco switches). That has solved our problem, so far.&lt;BR /&gt;&lt;BR /&gt;Probably a rare case does not happen often, but it did happen to us. If you need additional info/proof I can probably get that from our other Sys Admin that delt with HP directly on this one.&lt;BR /&gt;&lt;BR /&gt;Good day,&lt;BR /&gt;&lt;BR /&gt;Bill&lt;BR /&gt;</description>
      <pubDate>Fri, 21 Apr 2006 11:05:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974711#M75823</guid>
      <dc:creator>William Brown_2</dc:creator>
      <dc:date>2006-04-21T11:05:10Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974712#M75824</link>
      <description>One change I would make: I would raise the priority on the dedicated LAN. As long as that link is up, the cluster will use it. I've often seen clusters use the less desirable path.&lt;BR /&gt;&lt;BR /&gt;Do remember it will test all possible paths, so that is also a performance consideration.&lt;BR /&gt;&lt;BR /&gt;I wonder what the reason is that you are moving away from a quorum disk? The mean time between failures with new technology is very high, so I don't really see an advantage. The time to move away from quorum disks is when you have many systems on a SAN, and one node can become the cluster.&lt;BR /&gt;</description>
      <pubDate>Sat, 22 Apr 2006 15:24:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974712#M75824</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2006-04-22T15:24:59Z</dc:date>
    </item>
    <item>
      <title>Re: Migrating from Quorum Disk to Quorum Node</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974713#M75825</link>
      <description>The quorum node has been implemented without problems.</description>
      <pubDate>Mon, 24 Apr 2006 10:31:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/migrating-from-quorum-disk-to-quorum-node/m-p/4974713#M75825</guid>
      <dc:creator>Jim Geier_1</dc:creator>
      <dc:date>2006-04-24T10:31:32Z</dc:date>
    </item>
  </channel>
</rss>

