<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic service fail over will behave w/o a quorum disk in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/service-fail-over-will-behave-w-o-a-quorum-disk/m-p/4422818#M36642</link>
    <description>&lt;BR /&gt;Red Hat Enterprise Linux Server release 5.1&lt;BR /&gt;Red Hat Cluster Suite 5.1&lt;BR /&gt;&lt;BR /&gt;I have a two-node cluster.&lt;BR /&gt;  Member Name                        ID   Status&lt;BR /&gt;  ------ ----                        ---- ------&lt;BR /&gt;  afsdl1p                               1 Online, rgmanager&lt;BR /&gt;  afsdl2p                               2 Online, Local, rgmanager&lt;BR /&gt;&lt;BR /&gt;  Service Name         Owner (Last)                   State&lt;BR /&gt;  ------- ----         ----- ------                   -----&lt;BR /&gt;  service:afspkg1p     afsdl1p                        started&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;We had a quorum disk, but were forced to remove it from the cluster because we kept getting "Cluster is not quorate.  Refusing connection" and could not bring up the service.&lt;BR /&gt;&lt;BR /&gt;My concern is how cluster reformation and automatic service failover will behave without a quorum disk.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Tue, 19 May 2009 16:13:44 GMT</pubDate>
    <dc:creator>skt_skt</dc:creator>
    <dc:date>2009-05-19T16:13:44Z</dc:date>
    <item>
      <title>service fail over will behave w/o a quorum disk</title>
      <link>https://community.hpe.com/t5/operating-system-linux/service-fail-over-will-behave-w-o-a-quorum-disk/m-p/4422818#M36642</link>
      <description>&lt;BR /&gt;Red Hat Enterprise Linux Server release 5.1&lt;BR /&gt;Red Hat Cluster Suite 5.1&lt;BR /&gt;&lt;BR /&gt;I have a two-node cluster.&lt;BR /&gt;  Member Name                        ID   Status&lt;BR /&gt;  ------ ----                        ---- ------&lt;BR /&gt;  afsdl1p                               1 Online, rgmanager&lt;BR /&gt;  afsdl2p                               2 Online, Local, rgmanager&lt;BR /&gt;&lt;BR /&gt;  Service Name         Owner (Last)                   State&lt;BR /&gt;  ------- ----         ----- ------                   -----&lt;BR /&gt;  service:afspkg1p     afsdl1p                        started&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;We had a quorum disk, but were forced to remove it from the cluster because we kept getting "Cluster is not quorate.  Refusing connection" and could not bring up the service.&lt;BR /&gt;&lt;BR /&gt;My concern is how cluster reformation and automatic service failover will behave without a quorum disk.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 19 May 2009 16:13:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/service-fail-over-will-behave-w-o-a-quorum-disk/m-p/4422818#M36642</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2009-05-19T16:13:44Z</dc:date>
    </item>
    <item>
      <title>Re: service fail over will behave w/o a quorum disk</title>
      <link>https://community.hpe.com/t5/operating-system-linux/service-fail-over-will-behave-w-o-a-quorum-disk/m-p/4422819#M36643</link>
      <description>Without a quorum disk, the configured fence devices must succeed in order to fail over the service.&lt;BR /&gt;&lt;BR /&gt;In a "split brain" situation, if neither server is able to fence the other, the service will not fail over.</description>
      <pubDate>Tue, 19 May 2009 19:19:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/service-fail-over-will-behave-w-o-a-quorum-disk/m-p/4422819#M36643</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2009-05-19T19:19:14Z</dc:date>
    </item>
  </channel>
</rss>

