<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: connecting ilo2 in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561961#M39307</link>
    <description>Yes you do. In a two-node cluster without a quorum disk, there is a risk of a fencing race:&lt;BR /&gt;&lt;BR /&gt;If all heartbeat connections between the nodes are broken for some reason, both nodes will think: "I'm running, therefore I am fine. The other node just stopped sending heartbeats, so it may or may not have failed, but is certainly unreachable. I must fence it and take over its services." &lt;BR /&gt;&lt;BR /&gt;Each node will try to fence the other one. This is an unstable situation where luck is a factor - and therefore it is not at all desirable in a high-availability cluster.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The quorum disk will provide the "third opinion", breaking the tie. The quorum disk daemons in the nodes will sort out the situation, and will give their vote(s) to only one or the other half of the split cluster. &lt;BR /&gt;&lt;BR /&gt;In addition, the quorum disk daemon can perform additional tests with external targets to decide which half of the cluster seems more functional. When properly configured, the quorum disk system will ensure that the half of the cluster that has been isolated by the failure will get voted out (and fenced, just to be sure).&lt;BR /&gt;&lt;BR /&gt;In a three-node cluster, a quorum disk configuration is not essential. But a three-node cluster becomes a two-node cluster every time you shut down one of the nodes for maintenance (pre-scheduled or otherwise). If one of the two remaining nodes fails just then, a quorum disk would still be useful in making sure the cluster behaves in a predictable fashion.&lt;BR /&gt;&lt;BR /&gt;In a cluster with four or more nodes, the probability of exactly half the cluster losing all connectivity with the other half should be very small, if you've designed your cluster network connections right. You can still use a quorum disk if you feel your configuration requires it, but that would be a special case.&lt;BR /&gt;&lt;BR /&gt;MK</description>
    <pubDate>Tue, 12 Jan 2010 00:05:36 GMT</pubDate>
    <dc:creator>Matti_Kurkela</dc:creator>
    <dc:date>2010-01-12T00:05:36Z</dc:date>
    <item>
      <title>connecting ilo2</title>
      <link>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561958#M39304</link>
      <description>Hi, &lt;BR /&gt;&lt;BR /&gt;I want to configure RHEL clustering. I am using RHEL 5.3, HP servers, and HP storage. I need to know: how do I connect iLO2 between the nodes?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 11 Jan 2010 13:00:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561958#M39304</guid>
      <dc:creator>mammadshah</dc:creator>
      <dc:date>2010-01-11T13:00:04Z</dc:date>
    </item>
    <item>
      <title>Re: connecting ilo2</title>
      <link>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561959#M39305</link>
      <description>Welcome to ITRC Forums!&lt;BR /&gt;&lt;BR /&gt;The iLO2 interface of each node should be reachable from the regular network interfaces of all the other nodes. It is also useful if the sysadmin can use the iLO2 connections for remote management purposes. &lt;BR /&gt;&lt;BR /&gt;On the other hand, you should design your network so that a single fault can never break *all* the network connections between the cluster nodes. &lt;BR /&gt;&lt;BR /&gt;In a two-node cluster it might be possible to use crossover cables to connect iLO of node A to a regular NIC of node B and vice versa, and dedicate one regular NIC on each node for fencing use only. However, this configuration makes it very difficult to add new nodes to the cluster, so it isn't recommended.&lt;BR /&gt;&lt;BR /&gt;How many nodes does your cluster have, and what can you tell about your network configuration?&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Mon, 11 Jan 2010 14:38:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561959#M39305</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-01-11T14:38:42Z</dc:date>
    </item>
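    <item>
      <title>Example: iLO2 fence device in cluster.conf</title>
      <description>As a rough sketch of what the iLO2 fencing setup above looks like in RHEL 5 cluster.conf, assuming a hypothetical node name, iLO2 IP address, and credentials (replace all of them with your own):&lt;BR /&gt;&lt;BR /&gt;
```xml
<fencedevices>
  <fencedevice agent="fence_ilo" name="ilo-node1" hostname="192.168.10.11" login="fenceuser" passwd="secret"/>
</fencedevices>
<clusternode name="node1" nodeid="1">
  <fence>
    <method name="1">
      <device name="ilo-node1"/>
    </method>
  </fence>
</clusternode>
```
&lt;BR /&gt;Each node gets its own fencedevice entry pointing at its iLO2 IP; the other nodes use that entry to power the node off when fencing is required, which is why the iLO2 interfaces must be reachable from the regular NICs of all other nodes.</description>
    </item>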
    <item>
      <title>Re: connecting ilo2</title>
      <link>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561960#M39306</link>
      <description>I want to install Oracle on a two-node cluster. Do I need a quorum disk?&lt;BR /&gt;And what if I need to add a new HP server to the cluster?&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;</description>
      <pubDate>Mon, 11 Jan 2010 16:05:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561960#M39306</guid>
      <dc:creator>mammadshah</dc:creator>
      <dc:date>2010-01-11T16:05:10Z</dc:date>
    </item>
    <item>
      <title>Re: connecting ilo2</title>
      <link>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561961#M39307</link>
      <description>Yes you do. In a two-node cluster without a quorum disk, there is a risk of a fencing race:&lt;BR /&gt;&lt;BR /&gt;If all heartbeat connections between the nodes are broken for some reason, both nodes will think: "I'm running, therefore I am fine. The other node just stopped sending heartbeats, so it may or may not have failed, but is certainly unreachable. I must fence it and take over its services." &lt;BR /&gt;&lt;BR /&gt;Each node will try to fence the other one. This is an unstable situation where luck is a factor - and therefore it is not at all desirable in a high-availability cluster.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The quorum disk will provide the "third opinion", breaking the tie. The quorum disk daemons in the nodes will sort out the situation, and will give their vote(s) to only one or the other half of the split cluster. &lt;BR /&gt;&lt;BR /&gt;In addition, the quorum disk daemon can perform additional tests with external targets to decide which half of the cluster seems more functional. When properly configured, the quorum disk system will ensure that the half of the cluster that has been isolated by the failure will get voted out (and fenced, just to be sure).&lt;BR /&gt;&lt;BR /&gt;In a three-node cluster, a quorum disk configuration is not essential. But a three-node cluster becomes a two-node cluster every time you shut down one of the nodes for maintenance (pre-scheduled or otherwise). If one of the two remaining nodes fails just then, a quorum disk would still be useful in making sure the cluster behaves in a predictable fashion.&lt;BR /&gt;&lt;BR /&gt;In a cluster with four or more nodes, the probability of exactly half the cluster losing all connectivity with the other half should be very small, if you've designed your cluster network connections right. You can still use a quorum disk if you feel your configuration requires it, but that would be a special case.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Tue, 12 Jan 2010 00:05:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561961#M39307</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-01-12T00:05:36Z</dc:date>
    </item>
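    <item>
      <title>Example: qdiskd configuration with a heuristic</title>
      <description>The "additional tests with external targets" mentioned above are configured as qdiskd heuristics in cluster.conf. A minimal sketch, assuming a hypothetical label and a gateway IP used as the ping target (adjust both, and the timing values, for your environment):&lt;BR /&gt;&lt;BR /&gt;
```xml
<quorumd interval="1" tko="10" votes="1" label="rhel5_qdisk">
  <heuristic program="ping -c1 -w1 192.168.10.1" score="1" interval="2" tko="3"/>
</quorumd>
```
&lt;BR /&gt;A node that cannot ping the gateway loses the heuristic score and with it the quorum disk vote, so the isolated half of a split two-node cluster gets voted out instead of winning the fencing race by luck.</description>
    </item>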
    <item>
      <title>Re: connecting ilo2</title>
      <link>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561962#M39308</link>
      <description>Great! &lt;BR /&gt;How do I create the quorum disk? If I create a 500 MB partition on the SAN, will it work as a quorum disk?&lt;BR /&gt;</description>
      <pubDate>Tue, 12 Jan 2010 07:00:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561962#M39308</guid>
      <dc:creator>mammadshah</dc:creator>
      <dc:date>2010-01-12T07:00:44Z</dc:date>
    </item>
    <item>
      <title>Re: connecting ilo2</title>
      <link>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561963#M39309</link>
      <description>The quorum disk must be accessible by all the nodes, so a SAN storage LUN is exactly what is needed. It does not need to be big: 10 MB is the minimum required size. 500 MB is more than enough.&lt;BR /&gt;&lt;BR /&gt;Please see:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/s1-qdisk-considerations-CA.html" target="_blank"&gt;http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/s1-qdisk-considerations-CA.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;man qdiskd&lt;BR /&gt;man 5 qdisk&lt;BR /&gt;man mkqdisk&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Tue, 12 Jan 2010 07:37:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561963#M39309</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-01-12T07:37:28Z</dc:date>
    </item>
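    <item>
      <title>Example: initializing the quorum disk with mkqdisk</title>
      <description>Putting the mkqdisk references above into commands, a sketch assuming a hypothetical multipath device name for the shared LUN (use your actual device, and note the first command overwrites it):&lt;BR /&gt;&lt;BR /&gt;
```shell
# Initialize the shared SAN LUN as a quorum disk (destructive to that device!)
mkqdisk -c /dev/mapper/qdisk-lun -l rhel5_qdisk

# List the quorum disks visible from this node to verify the label
mkqdisk -L
```
&lt;BR /&gt;The label passed with -l is what the quorumd stanza in cluster.conf refers to, so every node finds the same disk regardless of its local device name.</description>
    </item>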
    <item>
      <title>Re: connecting ilo2</title>
      <link>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561964#M39310</link>
      <description>Great help!&lt;BR /&gt;&lt;BR /&gt;I have 4 nodes and a SAN, and I want to set up 2 clusters; each pair of nodes will serve one service.&lt;BR /&gt;&lt;BR /&gt;Do I need to create 2 quorum disks on the SAN, one for each cluster?&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Tue, 12 Jan 2010 08:27:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561964#M39310</guid>
      <dc:creator>mammadshah</dc:creator>
      <dc:date>2010-01-12T08:27:09Z</dc:date>
    </item>
    <item>
      <title>Re: connecting ilo2</title>
      <link>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561965#M39311</link>
      <description>No, a single quorum disk is enough for all the clusters.</description>
      <pubDate>Tue, 12 Jan 2010 17:21:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/connecting-ilo2/m-p/4561965#M39311</guid>
      <dc:creator>Viktor Balogh</dc:creator>
      <dc:date>2010-01-12T17:21:30Z</dc:date>
    </item>
  </channel>
</rss>

