<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: red hat cluster failback in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702736#M42462</link>
    <description>Since your domain is ordered, what priority have you given to the nodes?</description>
    <pubDate>Thu, 21 Oct 2010 12:37:28 GMT</pubDate>
    <dc:creator>AnthonySN</dc:creator>
    <dc:date>2010-10-21T12:37:28Z</dc:date>
    <item>
      <title>red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702734#M42460</link>
      <description>hi, &lt;BR /&gt;we have installed a 2-node Red Hat cluster on HP blades. &lt;BR /&gt;Linux mango2 2.6.18-53.el5 #1 SMP Wed Oct 10 16:34:19 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux&lt;BR /&gt;The cluster is up and working, but we want to disable failback.&lt;BR /&gt;I.e. if server1, which runs service1 and the associated FS, goes down,&lt;BR /&gt;node server2 should take over the service and the mount point (this is tested OK).&lt;BR /&gt;But when server1 comes back, the service and mount point should not go back to server1.&lt;BR /&gt;We already have this in the cluster.conf file:&lt;BR /&gt;&lt;failoverdomain name="rh1torh2" ordered="1" nofailback="1" restricted="1"&gt;&lt;BR /&gt;But the failback is still happening.&lt;BR /&gt;Regards,</description>
      <pubDate>Thu, 21 Oct 2010 09:55:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702734#M42460</guid>
      <dc:creator>DeafFrog</dc:creator>
      <dc:date>2010-10-21T09:55:44Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702735#M42461</link>
      <description>Go to Modify Failover Domain.&lt;BR /&gt;&lt;BR /&gt;To enable or disable failback in a failover domain, click the checkbox next to Do not fail back services in this domain. With Do not fail back services in this domain checked, if a service fails over from a preferred node, the service does not fail back to the original node once it has recovered.&lt;BR /&gt;</description>
      <pubDate>Thu, 21 Oct 2010 11:53:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702735#M42461</guid>
      <dc:creator>AnthonySN</dc:creator>
      <dc:date>2010-10-21T11:53:52Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702736#M42462</link>
      <description>Since your domain is ordered, what priority have you given to the nodes?</description>
      <pubDate>Thu, 21 Oct 2010 12:37:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702736#M42462</guid>
      <dc:creator>AnthonySN</dc:creator>
      <dc:date>2010-10-21T12:37:28Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702737#M42463</link>
      <description>hi SASJ, &lt;BR /&gt;&lt;BR /&gt;Does priority mean votes? Here's the cluster.conf file:&lt;BR /&gt;&lt;BR /&gt;
&lt;cluster config_version="11" name="comcluster"&gt;&lt;BR /&gt;
        &lt;fence_daemon post_fail_delay="0" post_join_delay="3"/&gt;&lt;BR /&gt;
        &lt;clusternodes&gt;&lt;BR /&gt;
                &lt;clusternode name="mango1" nodeid="1" votes="1"&gt;&lt;BR /&gt;
                        &lt;fence&gt;&lt;BR /&gt;
                                &lt;method name="1"&gt;&lt;BR /&gt;
                                        &lt;device name="ilo-rhel1"/&gt;&lt;BR /&gt;
                                &lt;/method&gt;&lt;BR /&gt;
                        &lt;/fence&gt;&lt;BR /&gt;
                &lt;/clusternode&gt;&lt;BR /&gt;
                &lt;clusternode name="mango2" nodeid="2" votes="1"&gt;&lt;BR /&gt;
                        &lt;fence&gt;&lt;BR /&gt;
                                &lt;method name="1"&gt;&lt;BR /&gt;
                                        &lt;device name="ilo-rhel2"/&gt;&lt;BR /&gt;
                                &lt;/method&gt;&lt;BR /&gt;
                        &lt;/fence&gt;&lt;BR /&gt;
                &lt;/clusternode&gt;&lt;BR /&gt;
        &lt;/clusternodes&gt;&lt;BR /&gt;
        &lt;cman expected_votes="1" two_node="1"/&gt;&lt;BR /&gt;
        &lt;fencedevices&gt;&lt;BR /&gt;
                &lt;fencedevice agent="fence_ilo" hostname="ilo-rhel1" login="iloadmin" name="ilo-rhel1" passwd="imtac123"/&gt;&lt;BR /&gt;
                &lt;fencedevice agent="fence_ilo" hostname="ilo-rhel2" login="iloadmin" name="ilo-rhel2" passwd="imtac123"/&gt;&lt;BR /&gt;
        &lt;/fencedevices&gt;&lt;BR /&gt;
        &lt;rm&gt;&lt;BR /&gt;
                &lt;failoverdomains&gt;&lt;BR /&gt;
                        &lt;failoverdomain name="rh1torh2" ordered="1" nofailback="1" restricted="1"&gt;&lt;BR /&gt;
                                &lt;failoverdomainnode name="mango1" priority="1"/&gt;&lt;BR /&gt;
                                &lt;failoverdomainnode name="mango2" priority="2"/&gt;&lt;BR /&gt;
                        &lt;/failoverdomain&gt;&lt;BR /&gt;
                &lt;/failoverdomains&gt;&lt;BR /&gt;
                &lt;resources&gt;&lt;BR /&gt;
                        &lt;ip address="10.0.10.1" monitor_link="1"/&gt;&lt;BR /&gt;
                        &lt;fs device="/dev/mapper/comvg-comvglv1" force_fsck="1" force_unmount="1" fsid="43265" fstype="ext2" mountpoint="/u01" name="comfs" options="" self_fence="0"/&gt;&lt;BR /&gt;
                        &lt;script file="/u01/scripts/comm1.server" name="comscript"/&gt;&lt;BR /&gt;
                &lt;/resources&gt;&lt;BR /&gt;
                &lt;service autostart="1" domain="rh1torh2" exclusive="1" name="service1" recovery="disable"&gt;&lt;BR /&gt;
                        &lt;ip ref="10.0.10.1"/&gt;&lt;BR /&gt;
                        &lt;fs ref="comfs"/&gt;&lt;BR /&gt;
                        &lt;script ref="comscript"/&gt;&lt;BR /&gt;
                &lt;/service&gt;&lt;BR /&gt;
        &lt;/rm&gt;&lt;BR /&gt;
&lt;/cluster&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Regards,</description>
      <pubDate>Thu, 21 Oct 2010 13:03:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702737#M42463</guid>
      <dc:creator>DeafFrog</dc:creator>
      <dc:date>2010-10-21T13:03:55Z</dc:date>
    </item>
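<!-- The cluster.conf posted above already carries nofailback="1" on the failover domain; a quick grep confirms the flag is really present in the file the cluster is running. A minimal sketch, run here against a scratch copy of the failoverdomain stanza (on a live node you would point grep at /etc/cluster/cluster.conf instead; the /tmp path is only an example):

```shell
# Recreate the failoverdomain stanza from the post in a scratch file.
cat > /tmp/fd-check.xml <<'EOF'
<failoverdomain name="rh1torh2" ordered="1" nofailback="1" restricted="1">
        <failoverdomainnode name="mango1" priority="1"/>
        <failoverdomainnode name="mango2" priority="2"/>
</failoverdomain>
EOF
# Print the nofailback setting; "1" means failback is disabled.
grep -o 'nofailback="[01]"' /tmp/fd-check.xml
```
-->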
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702738#M42464</link>
      <description>Hmm... did you hand-edit the cluster.conf file? Maybe try again using Conga instead?&lt;BR /&gt;&lt;BR /&gt;I must admit I have never attempted the CLI or modified the conf file for any cluster changes I make. I always use Conga - much easier, and it propagates changes and puts them into effect seamlessly.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 21 Oct 2010 17:40:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702738#M42464</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2010-10-21T17:40:08Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702739#M42465</link>
      <description>I agree with Alzhy: you should *never* edit the actual /etc/cluster/cluster.conf while the cluster is running.&lt;BR /&gt;&lt;BR /&gt;Instead, if you want to edit the file, you should do this:&lt;BR /&gt;&lt;BR /&gt;cp /etc/cluster/cluster.conf /tmp/cluster.conf.new&lt;BR /&gt;&lt;BR /&gt;(then edit /tmp/cluster.conf.new with your preferred editor)&lt;BR /&gt;&lt;BR /&gt;NOTE: remember to increment the config_version number at the top of the file. For example, if it originally says 'config_version="11"', change it to 'config_version="12"'.&lt;BR /&gt;&lt;BR /&gt;Once you've made your changes to /tmp/cluster.conf.new, run this command to push the new configuration to all cluster members in a synchronized fashion:&lt;BR /&gt;&lt;BR /&gt;ccs_tool update /tmp/cluster.conf.new&lt;BR /&gt;&lt;BR /&gt;I've used this procedure and it works well. It was even recommended by a Red Hat Cluster and Storage course instructor.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Thu, 21 Oct 2010 20:51:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702739#M42465</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-10-21T20:51:35Z</dc:date>
    </item>
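<!-- Matti's edit-and-propagate procedure above can be sketched as shell commands. This sketch works on a scratch file so it runs anywhere; on a real RHEL 5 cluster the copy would come from /etc/cluster/cluster.conf and the final step would be: ccs_tool update /tmp/cluster.conf.new

```shell
# Make a scratch stand-in for the copied cluster.conf (illustrative content).
cat > /tmp/cluster.conf.new <<'EOF'
<cluster config_version="11" name="comcluster">
</cluster>
EOF
# After editing, bump config_version (here 11 to 12) so the cluster
# accepts the new file as a newer revision when it is propagated.
sed -i 's/config_version="11"/config_version="12"/' /tmp/cluster.conf.new
grep -o 'config_version="12"' /tmp/cluster.conf.new
```
-->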
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702740#M42466</link>
      <description>Dear Alzhy and Matti, &lt;BR /&gt;Thanks for the valuable inputs. I have heard of Conga, but if &lt;BR /&gt;I install the Conga RPM, presumably there would be some GUI &lt;BR /&gt;option of Conga that will disable failback. Is that correct?&lt;BR /&gt;&lt;BR /&gt;Someone told me that Red Hat does not have a "disable failback"&lt;BR /&gt;option as HP SG has; an alternate way would be to disable the rgmanager&lt;BR /&gt;service on the primary node, so that after failover, when the service tries to&lt;BR /&gt;fail back, it doesn't have a node to go to. Please correct my understanding.&lt;BR /&gt;</description>
      <pubDate>Fri, 22 Oct 2010 03:47:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702740#M42466</guid>
      <dc:creator>DeafFrog</dc:creator>
      <dc:date>2010-10-22T03:47:26Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702741#M42467</link>
      <description>Yes, I hand-edited the cluster.conf file. Thanks, Matti, for the input.&lt;BR /&gt;&lt;BR /&gt;Regards,</description>
      <pubDate>Fri, 22 Oct 2010 03:49:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702741#M42467</guid>
      <dc:creator>DeafFrog</dc:creator>
      <dc:date>2010-10-22T03:49:38Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702742#M42468</link>
      <description>Is it working now?&lt;BR /&gt;You can also use the system-config-cluster GUI.&lt;BR /&gt;Try changing ordered to 0 and check the failover and failback.</description>
      <pubDate>Fri, 22 Oct 2010 05:05:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702742#M42468</guid>
      <dc:creator>AnthonySN</dc:creator>
      <dc:date>2010-10-22T05:05:33Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702743#M42469</link>
      <description>&amp;gt; presumably there would be some GUI&lt;BR /&gt;option of Conga that will disable failback. Is that correct?&lt;BR /&gt;&lt;BR /&gt;Yes. See the attached screenshot. (Server &amp;amp; failover domain names blurred out for anonymity.)&lt;BR /&gt;&lt;BR /&gt;This is the view of Conga modifying the configuration of a failover domain. The relevant checkbox is named "Do not fail back services in this domain".&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Someone told me that Red Hat does not have a "disable failback"&lt;BR /&gt;option as HP SG has; an alternate way would be to disable the rgmanager&lt;BR /&gt;service on the primary node [...]&lt;BR /&gt;&lt;BR /&gt;This may or may not have been true in the past (i.e. with the Red Hat Cluster versions for RHEL 2.1, RHEL 3 and maybe RHEL 4), but it certainly isn't true for RHEL 5. From your kernel version, I see you have RHEL 5. Your "someone" has given you old or plainly incorrect information.&lt;BR /&gt;&lt;BR /&gt;There definitely is a "disable failback" option in RHEL 5's cluster, so there is no reason for hacky workarounds like that.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Fri, 22 Oct 2010 09:48:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702743#M42469</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-10-22T09:48:48Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702744#M42470</link>
      <description>Thanks, Matti and Anthony.&lt;BR /&gt;I will install Conga and update the thread (and points). Just as a matter of curiosity... Anthony, are you from Imtac?&lt;BR /&gt;&lt;BR /&gt;Regards,</description>
      <pubDate>Mon, 25 Oct 2010 15:28:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702744#M42470</guid>
      <dc:creator>DeafFrog</dc:creator>
      <dc:date>2010-10-25T15:28:24Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702745#M42471</link>
      <description>I have a Red Hat cluster with two nodes, and I need to implement node failover and failback. When Server1 goes offline, the services and shared resources should fail over to Server2. When Server1 comes up, the services and resources which failed over to Server2 should fail back to Server1 again.&lt;BR /&gt;&lt;BR /&gt;Kindly suggest the exact implementation steps, if anyone knows them.</description>
      <pubDate>Wed, 11 May 2011 07:57:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702745#M42471</guid>
      <dc:creator>Ann123</dc:creator>
      <dc:date>2011-05-11T07:57:55Z</dc:date>
    </item>
    <item>
      <title>Re: red hat cluster failback</title>
      <link>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702746#M42472</link>
      <description>hi Ann, &lt;BR /&gt;&lt;BR /&gt;The solution is to use luci, as posted by Matti above. Luci is the GUI Red Hat Linux cluster configuration tool. Install the correct luci RPM for your distro, then:&lt;BR /&gt;service httpd start&lt;BR /&gt;service luci start&lt;BR /&gt;&lt;BR /&gt;then access the URL at &lt;YOUR_SERVER_IP&gt;:8084.&lt;BR /&gt;&lt;BR /&gt;The rest is GUI and self-explanatory.&lt;BR /&gt;&lt;BR /&gt;Regards,</description>
      <pubDate>Wed, 11 May 2011 11:30:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/red-hat-cluster-failback/m-p/4702746#M42472</guid>
      <dc:creator>DeafFrog</dc:creator>
      <dc:date>2011-05-11T11:30:37Z</dc:date>
    </item>
  </channel>
</rss>

