<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745005#M660286</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Hmmmm, yes, the cluster lock and the quorum server are the two mechanisms that enable "cluster reformation" when one of the nodes fails. cmcld always checks for 100% node attendance, and if heartbeat messages are not received within NODE_TIMEOUT, communication between the nodes is considered lost, which splits the system into two "separate subclusters" (in the case of a 2-node cluster). To my knowledge, this is the most important reason for a "cluster lock" or a "quorum server": it arbitrates the "cluster reformation". The only difference is that the quorum server sits outside the Serviceguard environment of the cluster we are concerned with. "Tie breaker" is one of the terms used for this concept, though a few HP-UX administrators don't like that term. :)&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Ismail Azad</description>
    <pubDate>Sun, 30 Jan 2011 16:37:58 GMT</pubDate>
    <dc:creator>Ismail Azad</dc:creator>
    <dc:date>2011-01-30T16:37:58Z</dc:date>
    <item>
      <title>Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4744996#M660275</link>
      <description>I have 2 nodes in my cluster right now. Another host is the quorum server. The quorum server was taken down to resolve a separate issue on it (it's still down). The cluster and nodes are up and running!&lt;BR /&gt;Only node 1 has a package running. I want to add and configure a new package on the second node.&lt;BR /&gt;With the cluster up, when I run cmcheckconf on the new package, will it complain about the quorum server being down? Will it allow me to proceed and 'apply' my package?</description>
      <pubDate>Thu, 27 Jan 2011 18:35:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4744996#M660275</guid>
      <dc:creator>Tom Haddad</dc:creator>
      <dc:date>2011-01-27T18:35:33Z</dc:date>
    </item>
    <item>
      <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4744997#M660277</link>
      <description>Well, if you are only adding a package, it should work. It is when you go to modify the cluster itself that there may be an issue.&lt;BR /&gt;If all else fails, why not quickly install the QS software on another server, set it up, add the original QS IP address as an alias using ifconfig lanX:1 &lt;QS IP address&gt;, and verify the cluster can now see it as the QS?</description>
      <pubDate>Thu, 27 Jan 2011 18:45:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4744997#M660277</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2011-01-27T18:45:59Z</dc:date>
    </item>
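The stand-in quorum server suggested above can be sketched roughly as follows. This is a hedged sketch, not from the thread: the depot path, product name, interface (lan1), IP address, and netmask are all placeholders to substitute with your own values.

```shell
# 1. Install the Quorum Server software on the stand-in host
#    (depot path and product name are placeholders).
swinstall -s /tmp/qs.depot QuorumServer

# 2. Bring up the original QS IP address as a secondary (alias)
#    address, so cluster nodes can still reach the address they
#    were configured with (interface/IP/netmask are placeholders).
ifconfig lan1:1 10.0.0.50 netmask 255.255.255.0 up

# 3. From a cluster node, confirm the cluster can see the quorum
#    server again.
cmviewcl -v
```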
    <item>
      <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4744998#M660279</link>
      <description>Yes, you can do this. You can make any online changes without the quorum server running.&lt;BR /&gt;The only time you will have a problem is if the cluster goes 'down'; then it will require the quorum server in order to come back up. Or, as Melvyn suggests, you could set up another device as a quorum server.&lt;BR /&gt;&lt;BR /&gt;Rgrds,&lt;BR /&gt;Rita</description>
      <pubDate>Thu, 27 Jan 2011 19:47:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4744998#M660279</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2011-01-27T19:47:43Z</dc:date>
    </item>
    <item>
      <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4744999#M660280</link>
      <description>If you have LVM volume groups being used by packages, why don't you configure a lock disk, or a lock LUN if you have an unused LUN?&lt;BR /&gt;&lt;BR /&gt;A lock disk costs nothing if you already have LVM volume groups.&lt;BR /&gt;&lt;BR /&gt;A 2-node cluster is risky if your quorum server is down for a long time.</description>
      <pubDate>Fri, 28 Jan 2011 00:04:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4744999#M660280</guid>
      <dc:creator>Emil Velez</dc:creator>
      <dc:date>2011-01-28T00:04:51Z</dc:date>
    </item>
    <item>
      <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745000#M660281</link>
      <description>Rita, a quorum server is not required to start up a cluster. The cluster lock is needed only when arbitration is required after a failure; it is never required when starting things up.</description>
      <pubDate>Fri, 28 Jan 2011 09:06:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745000#M660281</guid>
      <dc:creator>John Bigg</dc:creator>
      <dc:date>2011-01-28T09:06:06Z</dc:date>
    </item>
    <item>
      <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745001#M660282</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;The quorum server is what provides arbitration when bringing a cluster up, whether as a result of first turning it on or of bringing it back up after a failure. At least, that is how I read the following from the QS version 4 document, page 9:&lt;BR /&gt;&lt;BR /&gt;"If the Quorum Server is not available or reachable, it will not adversely affect any clusters using it, unless a cluster needs to reform and requires the Quorum Server's arbitration to do so. As of Serviceguard A.11.19, you can change from one quorum server to another, or to or from another quorum method, while the cluster is running."&lt;BR /&gt;&lt;BR /&gt;Since Tom is at version 11.19, he could make the change to a different QS, as was mentioned, and he can make changes to his running cluster. But if his cluster goes down, then he needs the QS to get back up.&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;Rita&lt;BR /&gt;</description>
      <pubDate>Fri, 28 Jan 2011 12:37:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745001#M660282</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2011-01-28T12:37:32Z</dc:date>
    </item>
    <item>
      <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745002#M660283</link>
      <description>OK:&lt;BR /&gt;1. So I can shut down (not a failure) one of the nodes and the cluster will remain up? (It should!)&lt;BR /&gt;&lt;BR /&gt;2. I read the 11.19 guide but didn't see where you can change to another quorum server online.&lt;BR /&gt;&lt;BR /&gt;3. We are looking to implement the quorum server on a Linux host. I can halt everything and reconfigure (cmquerycl -q quorumserver ...) using the new quorum server name.&lt;BR /&gt;</description>
      <pubDate>Fri, 28 Jan 2011 14:02:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745002#M660283</guid>
      <dc:creator>Tom Haddad</dc:creator>
      <dc:date>2011-01-28T14:02:51Z</dc:date>
    </item>
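The reconfiguration step mentioned above can be sketched as follows. This is a hedged sketch under the thread's premise that Serviceguard A.11.19 allows changing the quorum method; the quorum-server host, node names, and output file path are placeholders, not values from the thread.

```shell
# Regenerate the cluster configuration pointing at the new quorum
# server (host and node names are placeholders).
cmquerycl -q new-qs-host -n node1 -n node2 -C /etc/cmcluster/cluster.ascii

# Validate the edited configuration before applying it.
cmcheckconf -C /etc/cmcluster/cluster.ascii

# Apply the new configuration to the cluster.
cmapplyconf -C /etc/cmcluster/cluster.ascii
```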
    <item>
      <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745003#M660284</link>
      <description>You can put it on Linux, or even on a simple workstation. Check whether QS is already loaded on something close by.&lt;BR /&gt;&lt;BR /&gt;Rgrds,&lt;BR /&gt;Rita</description>
      <pubDate>Fri, 28 Jan 2011 14:07:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745003#M660284</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2011-01-28T14:07:55Z</dc:date>
    </item>
    <item>
      <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745004#M660285</link>
      <description>Rita,&lt;BR /&gt;"The Quorum server is what provides arbitration when bringing a cluster up, be it as a result of first turning it on or bringing it back up after a failure. At least that is how I read the following from the QS ver 4 document page 9:"&lt;BR /&gt;&lt;BR /&gt;When a cluster is first started with cmruncl, NO arbitration "device" (be it a cluster lock disk, lock LUN, or quorum server) is required.&lt;BR /&gt;The ONLY time an arbitration device is required is when a FAILURE is experienced and this results in there being EXACTLY 50% of the nodes left trying to re-form as a cluster.&lt;BR /&gt;&lt;BR /&gt;This is a good document to refer to:&lt;BR /&gt;&lt;A href="http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c02056095/c02056095.pdf" target="_blank"&gt;http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c02056095/c02056095.pdf&lt;/A&gt;</description>
      <pubDate>Sat, 29 Jan 2011 13:55:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745004#M660285</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2011-01-29T13:55:51Z</dc:date>
    </item>
    <item>
      <title>Re: Serviceguard 11.19 on 2 Itanium 11iv3 nodes - quorum</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745005#M660286</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Hmmmm, yes, the cluster lock and the quorum server are the two mechanisms that enable "cluster reformation" when one of the nodes fails. cmcld always checks for 100% node attendance, and if heartbeat messages are not received within NODE_TIMEOUT, communication between the nodes is considered lost, which splits the system into two "separate subclusters" (in the case of a 2-node cluster). To my knowledge, this is the most important reason for a "cluster lock" or a "quorum server": it arbitrates the "cluster reformation". The only difference is that the quorum server sits outside the Serviceguard environment of the cluster we are concerned with. "Tie breaker" is one of the terms used for this concept, though a few HP-UX administrators don't like that term. :)&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Ismail Azad</description>
      <pubDate>Sun, 30 Jan 2011 16:37:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/serviceguard-11-19-on-2-itanium-11iv3-nodes-quorum/m-p/4745005#M660286</guid>
      <dc:creator>Ismail Azad</dc:creator>
      <dc:date>2011-01-30T16:37:58Z</dc:date>
    </item>
  </channel>
</rss>

