<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic lock disk vs. quorum server in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520822#M700660</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Just wondering: in a 2-node cluster, what are the advantages of a lock disk over a quorum server, and vice versa?&lt;BR /&gt;I've been reading up on both and don't see a clear advantage of one over the other.&lt;BR /&gt;Below is a brief description of the hardware that I can use for this cluster.&lt;BR /&gt;&lt;BR /&gt;Please focus only on this comparison, as I can see clearly how a quorum server can be helpful if you have multiple clusters.&lt;BR /&gt;Also, this is only regarding a 2-node cluster (campus cluster) over 2 sites; all disks are on EMC Symmetrix and the systems are 2 Superdome nPars.&lt;BR /&gt;&lt;BR /&gt;Thanks to all in advance.&lt;BR /&gt;Emiel</description>
    <pubDate>Fri, 08 Apr 2005 07:27:55 GMT</pubDate>
    <dc:creator>Emiel van Grinsven_1</dc:creator>
    <dc:date>2005-04-08T07:27:55Z</dc:date>
    <item>
      <title>lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520822#M700660</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Just wondering: in a 2-node cluster, what are the advantages of a lock disk over a quorum server, and vice versa?&lt;BR /&gt;I've been reading up on both and don't see a clear advantage of one over the other.&lt;BR /&gt;Below is a brief description of the hardware that I can use for this cluster.&lt;BR /&gt;&lt;BR /&gt;Please focus only on this comparison, as I can see clearly how a quorum server can be helpful if you have multiple clusters.&lt;BR /&gt;Also, this is only regarding a 2-node cluster (campus cluster) over 2 sites; all disks are on EMC Symmetrix and the systems are 2 Superdome nPars.&lt;BR /&gt;&lt;BR /&gt;Thanks to all in advance.&lt;BR /&gt;Emiel</description>
      <pubDate>Fri, 08 Apr 2005 07:27:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520822#M700660</guid>
      <dc:creator>Emiel van Grinsven_1</dc:creator>
      <dc:date>2005-04-08T07:27:55Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520823#M700661</link>
      <description>In discussing the cluster arbitration methods, it is important to understand&lt;BR /&gt;that arbitration to form a new cluster only comes into play when exactly half&lt;BR /&gt;of the nodes in the cluster cannot reach the other half over the heartbeat&lt;BR /&gt;network(s). This is sometimes called the 50% rule. Arbitration is not&lt;BR /&gt;involved when a heartbeat outage occurs between 25%/75% or 33%/66% of the&lt;BR /&gt;servers. In such a case, the minority side of nodes will reboot themselves.&lt;BR /&gt;&lt;BR /&gt;When heartbeat is lost to one or more servers, the cluster must reform in&lt;BR /&gt;order to ensure that potentially failed packages are adopted by legitimately&lt;BR /&gt;operating servers. The arbitration process ensures that one and only one&lt;BR /&gt;cluster will assume responsibility for orphaned packages. The nodes not&lt;BR /&gt;achieving cluster reformation will reboot themselves to preserve data&lt;BR /&gt;integrity.&lt;BR /&gt;&lt;BR /&gt;Regarding configuring a cluster with arbitration:
ServiceGuard&lt;BR /&gt;supports any of these methods, of course, but here are the pros/cons of the&lt;BR /&gt;various cluster arbitration methods:&lt;BR /&gt;&lt;BR /&gt;Quorum Server&lt;BR /&gt;-------------&lt;BR /&gt;PROS:  Can support up to 50 2-node clusters.&lt;BR /&gt;       Failover time is shortened because quorum server access is faster than&lt;BR /&gt;       lock disk access.&lt;BR /&gt;&lt;BR /&gt;CONS:  Requires that a server outside of the cluster be loaded with the quorum&lt;BR /&gt;       server (QS) software (maintenance required).&lt;BR /&gt;&lt;BR /&gt;       Cluster node communication with the quorum server is subject to network&lt;BR /&gt;       failure.&lt;BR /&gt;&lt;BR /&gt;       The QS's .rhosts must be updated to allow access to cluster nodes (a possible&lt;BR /&gt;       security issue).&lt;BR /&gt;&lt;BR /&gt;Single cluster lock disk&lt;BR /&gt;------------------------&lt;BR /&gt;PROS:  Simple to configure - allow cmquerycl to select one.&lt;BR /&gt;&lt;BR /&gt;CONS:  Must have a shared VG between all nodes in the cluster.&lt;BR /&gt;&lt;BR /&gt;       The single cluster lock disk is a single point of failure - but it is&lt;BR /&gt;       only a vulnerability if it is unavailable when the 50% rule comes into&lt;BR /&gt;       play.&lt;BR /&gt;&lt;BR /&gt;       Lock disk access (disk I/O) is slower than quorum server access (network-based).&lt;BR /&gt;&lt;BR /&gt;Dual cluster lock disk&lt;BR /&gt;----------------------&lt;BR /&gt;PROS:  Recommended in a scenario where 50% of the nodes are in a different&lt;BR /&gt;       location than the other 50% of the nodes - providing for site failure.&lt;BR /&gt;&lt;BR /&gt;CONS:  Could cause split-brain clusters if only the heartbeat network fails&lt;BR /&gt;       and not one of the 2 sites.
(DATA CORRUPTION POSSIBLE)&lt;BR /&gt;&lt;BR /&gt;No arbitration&lt;BR /&gt;--------------&lt;BR /&gt;PROS:  Easy to configure (?)&lt;BR /&gt;&lt;BR /&gt;CONS:  All nodes reboot when a 50/50 split occurs in the heartbeat net.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;A single lock disk is still the preferred choice because it doesn't involve the split-brain possibility of a dual lock, requires no network access (unless it's on a SAN), and is configured by ServiceGuard without additional software loads such as Quorum Server. However, the other methods serve their purpose where they make more sense.&lt;BR /&gt;&lt;BR /&gt;NOTES:&lt;BR /&gt;PLEASE refer to the Managing MC/ServiceGuard manual for details about each arbitration method.&lt;BR /&gt;</description>
      <pubDate>Fri, 08 Apr 2005 07:31:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520823#M700661</guid>
      <dc:creator>Stephen Doud</dc:creator>
      <dc:date>2005-04-08T07:31:40Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520824#M700662</link>
      <description>&lt;BR /&gt;Emiel,&lt;BR /&gt;&lt;BR /&gt;You can put your quorum server anywhere on the network. You can put it at the clients' site, in their network, if you like.&lt;BR /&gt;&lt;BR /&gt;This way, the package will always run on the right node - the one that your clients can access.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;David de Beer.</description>
      <pubDate>Fri, 08 Apr 2005 07:34:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520824#M700662</guid>
      <dc:creator>David de Beer</dc:creator>
      <dc:date>2005-04-08T07:34:21Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520825#M700663</link>
      <description>I would say a lock disk. Why?&lt;BR /&gt;Ease of management.&lt;BR /&gt;No extra work of maintaining a quorum server.&lt;BR /&gt;&lt;BR /&gt;As you understood, if there is more than one cluster, I would go for a quorum server.</description>
      <pubDate>Fri, 08 Apr 2005 07:35:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520825#M700663</guid>
      <dc:creator>RAC_1</dc:creator>
      <dc:date>2005-04-08T07:35:56Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520826#M700664</link>
      <description>Sometimes the speed of the ITRC amazes me ;-)&lt;BR /&gt;Thank you all for your fast answers!&lt;BR /&gt;&lt;BR /&gt;First I have to point out that if a lock disk is chosen, it will be a dual lock disk, for obvious reasons.&lt;BR /&gt;&lt;BR /&gt;OK, yes, I agree.&lt;BR /&gt;A quorum server does have an advantage that focuses on the future, if more clusters come (and they will).&lt;BR /&gt;&lt;BR /&gt;The network simply is not an issue.&lt;BR /&gt;&lt;BR /&gt;I'm leaning towards using a lock disk because of the ease of management. I'm familiar with the statistically worse uptime of cluster systems, and I see a quorum server as one more system that can become a SPOF at the worst possible time.&lt;BR /&gt;&lt;BR /&gt;Are there basic differences in effect?&lt;BR /&gt;Let me rephrase that: can we think of scenarios where one would be a huge advantage over the other?&lt;BR /&gt;&lt;BR /&gt;Let me point out again that all the hardware is redundant. Everything is behind UPS. The lock disks are on the SAN. Multiple heartbeat LANs will be used.</description>
      <pubDate>Fri, 08 Apr 2005 07:54:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520826#M700664</guid>
      <dc:creator>Emiel van Grinsven_1</dc:creator>
      <dc:date>2005-04-08T07:54:24Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520827#M700665</link>
      <description>Well, my comment here is that if you go the dual cluster lock route, you could end up in a split-brain scenario. If you have a QS somewhere else on your LAN, then if half of the nodes lose contact with the other half, i.e. a 50% split, one side may not be able to see any networking and hence would not be able to reach the QS, so you would not get split-brain.&lt;BR /&gt;Also, if you decide to change your disk configuration, you could be forced to halt the cluster to re-apply a new cluster lock disk.&lt;BR /&gt;&lt;BR /&gt;The QS can be made HA by having a separate little 2-node cluster running the QS as a package.&lt;BR /&gt;There are a large number of factors to be considered here. As mentioned, reading the manual may help you to make an informed choice.&lt;BR /&gt;</description>
      <pubDate>Fri, 08 Apr 2005 08:42:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520827#M700665</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2005-04-08T08:42:52Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520828#M700666</link>
      <description>Frankly, Stephen has explained all the pro/con points.&lt;BR /&gt;&lt;BR /&gt;So, on small clusters, it's six of one and half a dozen of the other. If you're looking at going above 4 servers, then QS.&lt;BR /&gt;&lt;BR /&gt;I'll be honest: I have kept my lock disk all this time because I didn't want to bother putting up the QS. Well, now I'm in a situation where I will have more than 4 servers (for a while, anyway) in one cluster, and I have to put in the QS.&lt;BR /&gt;Duh... this is so simple to set up, why did I bother to wait?&lt;BR /&gt;&lt;BR /&gt;For bad scenarios, you've already read up on the split-brain issue, so there you go.&lt;BR /&gt;&lt;BR /&gt;Just my 2 cents,&lt;BR /&gt;rcw&lt;BR /&gt;</description>
      <pubDate>Fri, 08 Apr 2005 08:58:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520828#M700666</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2005-04-08T08:58:13Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520829#M700667</link>
      <description>I'd let every cluster have a cluster lock disk and let all clusters share a quorum server, so that the cluster nodes will be able to come up during either a network or a SAN connectivity loss.&lt;BR /&gt;(You might not get a running cluster package, but the cluster is formed and ready to go as soon as the other issues are resolved, saving uptime.)&lt;BR /&gt;Any old D- or A-class will do fine as a quorum server, and I think there should be one in every household ;)</description>
      <pubDate>Fri, 08 Apr 2005 09:44:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520829#M700667</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-04-08T09:44:25Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520830#M700668</link>
      <description>Thanks to all for the answers and thoughts.&lt;BR /&gt;&lt;BR /&gt;This time the application is not that critical, and we only use an HA environment because when a site really has a disaster, you want to have as much as possible automated, so that you can concentrate on something other than starting software.&lt;BR /&gt;&lt;BR /&gt;I still haven't decided yet (on my advice, that is; I don't pretend to make any decisions ;-) ), but considering that a split-brain is absolutely not an option, I would say that&lt;BR /&gt;a quorum server at a 3rd location is the way to go - still with SG configured in such a way that cluster activity is not automatically started after a reboot. There's always somebody on standby, so that should cover it in case something happens while the QS is unavailable at the same time.&lt;BR /&gt;&lt;BR /&gt;I would like to thank everybody for their thoughts and experiences. Sometimes just talking about issues makes them much clearer.</description>
      <pubDate>Fri, 08 Apr 2005 10:18:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520830#M700668</guid>
      <dc:creator>Emiel van Grinsven_1</dc:creator>
      <dc:date>2005-04-08T10:18:33Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520831#M700669</link>
      <description>The quorum server software is really very easy to install. I have actually had my quorum server running on 4 different systems. I have moved it when I knew that the system the quorum server was on was going to have maintenance. I also moved it to a newer, more reliable system that we installed. I did all this without taking the cluster down. When I first installed the quorum server, I added another IP address and host name to the server. So I "float" the IP address and host name to the "new" system when I move it.&lt;BR /&gt;&lt;BR /&gt;Also, my quorum server is running on HP-UX 11.11 and the cluster is 11.0.&lt;BR /&gt;&lt;BR /&gt;Marlou</description>
      <pubDate>Tue, 12 Apr 2005 15:07:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520831#M700669</guid>
      <dc:creator>Marlou Everson</dc:creator>
      <dc:date>2005-04-12T15:07:17Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk vs. quorum server</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520832#M700670</link>
      <description>Hi, here are my two cents...&lt;BR /&gt;&lt;BR /&gt;We have a 2-node cluster with 2 Hitachi cabinets in 2 different buildings. We have chosen the "single disk in the 1st Hitachi bay" option. I personally do not like this option very much, because if the 1st site gets burnt, the second server would be technically able to do all the work, but it would have no access to the lock disk, and I'd have to manually restart the cluster and force the remaining node to form the cluster without a lock disk. And all that at 3 a.m. on a Sunday, which is, typically, the time when these things happen.&lt;BR /&gt;But all of this is much better than getting data inconsistency... I prefer to spend a while on a Sunday rather than a "bigger while" recovering data from... hey! we started backing up the new cluster, didn't we?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 10 May 2005 08:19:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lock-disk-vs-quorum-server/m-p/3520832#M700670</guid>
      <dc:creator>Roberto Martinez_6</dc:creator>
      <dc:date>2005-05-10T08:19:23Z</dc:date>
    </item>
  </channel>
</rss>