<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: RHEL Cluster Suite in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600861#M40115</link>
    <description>Thanks Matti... I actually found that RHEL docu on Fencing.&lt;BR /&gt;</description>
    <pubDate>Mon, 15 Mar 2010 13:45:08 GMT</pubDate>
    <dc:creator>Alzhy</dc:creator>
    <dc:date>2010-03-15T13:45:08Z</dc:date>
    <item>
      <title>RHEL Cluster Suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600859#M40113</link>
      <description>In a bid to get GFS (to compare with Oracle's OCFS2) -- I noticed that to have GFS on a RHEL system, Cluster Suite is a prerequisite.&lt;BR /&gt;&lt;BR /&gt;I've installed all the software groups and am about ready to configure via "Conga".&lt;BR /&gt;&lt;BR /&gt;Questions:&lt;BR /&gt;&lt;BR /&gt;* My LUCI node (Conga Mgmt) - can it be one of the nodes?&lt;BR /&gt;* What Fence Device should I configure - is it absolutely necessary?&lt;BR /&gt;* Do I need a "private" (heartbeat) network?&lt;BR /&gt;&lt;BR /&gt;I am still reading through the docs and would like to get some tips/feedback from those who've already implemented this.&lt;BR /&gt;&lt;BR /&gt;TIA!</description>
      <pubDate>Mon, 15 Mar 2010 12:35:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600859#M40113</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2010-03-15T12:35:37Z</dc:date>
    </item>
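The post above asks about getting GFS running once Cluster Suite is in place. A minimal sketch of creating and mounting a GFS filesystem on RHEL 5 (the cluster name, volume path, and mount point below are placeholders; the label passed to -t must match the cluster name defined in cluster.conf):

```shell
# Create a GFS filesystem using the cluster-wide DLM lock manager.
# "mycluster" and the logical volume path are hypothetical; -j allocates
# one journal per node that will mount the filesystem (2 nodes here).
gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 2 /dev/vg_shared/lv_gfs1

# Mount it on each cluster node (cman and clvmd must already be running).
mount -t gfs /dev/vg_shared/lv_gfs1 /mnt/gfs1
```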
    <item>
      <title>Re: RHEL Cluster Suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600860#M40114</link>
      <description>* My LUCI node (Conga Mgmt) - can it be one of the nodes?&lt;BR /&gt;&lt;BR /&gt;It *can* be, although if that node is down for maintenance or because of hardware failure, you won't be able to access the Conga management interface.&lt;BR /&gt;&lt;BR /&gt;If you want to do this, consider installing LUCI on all nodes - if your primary admin node fails, it will be reasonably easy to establish LUCI control using some other node.&lt;BR /&gt;&lt;BR /&gt;* What Fence Device should I configure - is it absolutely necessary&lt;BR /&gt;&lt;BR /&gt;Yes, it is necessary. It is the ultimate guarantee against data corruption in case one node loses other connectivity with the rest of the cluster. If you have fault-tolerant heartbeat connections, your cluster may never need to actually *use* fencing - but when building a cluster, you should be planning for the worst.&lt;BR /&gt;&lt;BR /&gt;Your choice of Fence Devices depends on what hardware you're using to run your cluster. If you use HP Proliant hardware, iLO power-switch fencing is the obvious choice. If you have supported SAN switches, you might use SAN fencing. If you have three or more nodes and your storage system supports SCSI persistent reservations, you might use that as your fencing method. If your nodes are Virtual Machines (Xen, VMware or whatever), you can use VM fencing.&lt;BR /&gt;&lt;BR /&gt;If one node in a GFS cluster becomes unreachable, all nodes will stop accessing GFS filesystems until it's known for sure that the unreachable node won't perform any writes. This is necessary, because an unreachable node would be unaware of any GFS locks held by any other node, and vice versa. The locks are essential for GFS filesystem integrity.&lt;BR /&gt;&lt;BR /&gt;If the cluster cannot get positive confirmation that the unreachable node has been fenced out, your GFS filesystem will remain frozen, and your cluster will be no more fault tolerant than a single host. 
&lt;BR /&gt;&lt;BR /&gt;* Do I need a "private" (heartbeat) network?&lt;BR /&gt;&lt;BR /&gt;Red Hat Cluster Suite uses IP multicast for its heartbeat traffic. There is no requirement that the heartbeats should be isolated to a private network, although you can do it if you wish.&lt;BR /&gt;&lt;BR /&gt;It's more important for the heartbeat network to be fault-tolerant than to be isolated from any other traffic.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Mon, 15 Mar 2010 13:06:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600860#M40114</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-03-15T13:06:49Z</dc:date>
    </item>
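Since the reply above recommends iLO power-switch fencing on ProLiant hardware, a sketch of verifying a fence agent by hand before relying on it (the iLO address and credentials are hypothetical; a successful "status" reply confirms the cluster can reach and drive the device):

```shell
# Test the iLO fence agent manually from another cluster node.
# -a = iLO address, -l/-p = iLO login credentials, -o = action.
fence_ilo -a node1-ilo.example.com -l fenceuser -p fencepass -o status

# Once configured in cluster.conf, the same check can be run through
# the cluster's own fencing path:
fence_node node1.example.com
```

Testing fencing deliberately, before a real failure, is the only way to know the "ultimate guarantee" described above will actually fire.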
    <item>
      <title>Re: RHEL Cluster Suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600861#M40115</link>
      <description>Thanks Matti... I actually found that RHEL docu on Fencing.&lt;BR /&gt;</description>
      <pubDate>Mon, 15 Mar 2010 13:45:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600861#M40115</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2010-03-15T13:45:08Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL Cluster Suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600862#M40116</link>
      <description>Matti..&lt;BR /&gt;Managed to set up my 2-node cluster... Also have LUCI installed on both, etc... One node is where I set up the cluster. When I try to access LUCI on the other node, I don't see my cluster at all. Anything I am missing?&lt;BR /&gt;</description>
      <pubDate>Mon, 15 Mar 2010 15:30:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600862#M40116</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2010-03-15T15:30:00Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL Cluster Suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600863#M40117</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;* My LUCI node (Conga Mgmt) - can it be one of the nodes?&lt;BR /&gt;Yes.&lt;BR /&gt;&lt;BR /&gt;* What Fence Device should I configure - is it absolutely necessary?&lt;BR /&gt;Yes, absolutely necessary. HP servers with iLO work well. APC power switches are a little harder on the hardware but less prone to iLO problems.&lt;BR /&gt;* Do I need a "private" (heartbeat) network?&lt;BR /&gt;No, but it is helpful.&lt;BR /&gt;&lt;BR /&gt;SEP&lt;BR /&gt;102 from Linux Mount Olympus with a penguin on top.</description>
      <pubDate>Mon, 15 Mar 2010 15:49:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600863#M40117</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2010-03-15T15:49:39Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL Cluster Suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600864#M40118</link>
      <description>The LUCI on the node you used to set up your cluster won't automatically replicate its configuration to the other node.&lt;BR /&gt;&lt;BR /&gt;When you need to use LUCI on the "other" node, you will have to use its "Add existing cluster" function to take over the cluster management.&lt;BR /&gt;&lt;BR /&gt;Hmm... perhaps it would be possible to find the necessary certificates and/or other files on the primary LUCI setup, and copy them to the other node. I haven't tried this. &lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Mon, 15 Mar 2010 16:03:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600864#M40118</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-03-15T16:03:08Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL Cluster Suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600865#M40119</link>
      <description>I cannot comment on GFS + Cluster Suite specifically, but for clustering packages, the private network is not always just for heartbeat. In some cases (I *think* GFS/CS is one of them), there is a DLM (Distributed Lock Manager) process that arbitrates disk I/O to keep nodes from stepping on each other. In some cases the cluster also uses the designated "private" network for lock traffic, so it should be reserved for low-latency operations.</description>
      <pubDate>Mon, 15 Mar 2010 22:40:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel-cluster-suite/m-p/4600865#M40119</guid>
      <dc:creator>macosta</dc:creator>
      <dc:date>2010-03-15T22:40:18Z</dc:date>
    </item>
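Following up on the heartbeat and lock-traffic discussion in this thread, a sketch of checking which network and multicast address the cluster is actually using (run on any member node; output fields vary by release):

```shell
# Show cluster name, quorum state, and the multicast address in use.
cman_tool status

# List the member nodes and their join state; the node names shown are
# the ones cluster.conf binds to, which determines the interface used
# for heartbeat and DLM lock traffic.
cman_tool nodes
```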
  </channel>
</rss>

