<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic RH update 2 Clustering configuration in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714803#M21444</link>
    <description>We are in the testing phase of RH update 2 clustering with gfs.&lt;BR /&gt;&lt;BR /&gt;We have set up a cluster with two GFS filesystems on shared storage and are encountering some unusual behavior.&lt;BR /&gt;&lt;BR /&gt;With dlm locking configured, the cluster comes up with the configuration file below. The problem is that the cluster does not fail over as configured. We bring down node 1, and node 2 just sits there instead of coming online and mounting the filesystems.&lt;BR /&gt;&lt;BR /&gt;We have the standard documents from the RH site and are working on the issue. This post is not complete, but I will add to it as I gather details.&lt;BR /&gt;&lt;BR /&gt;Configuration file:&lt;BR /&gt;-----------&lt;BR /&gt;&lt;BR /&gt;&lt;CLUSTER config_version="7" name="ES4"&gt;&lt;BR /&gt;        &lt;FENCE_DAEMON clean_start="0" post_fail_delay="0" post_join_delay="3"&gt;&lt;/FENCE_DAEMON&gt;&lt;BR /&gt;        &lt;CLUSTERNODES&gt;&lt;BR /&gt;                &lt;CLUSTERNODE name="nz1" votes="1"&gt;&lt;BR /&gt;                        &lt;FENCE&gt;&lt;/FENCE&gt;&lt;BR /&gt;                &lt;/CLUSTERNODE&gt;&lt;BR /&gt;                &lt;CLUSTERNODE name="nz2" votes="1"&gt;&lt;BR /&gt;                        &lt;FENCE&gt;&lt;/FENCE&gt;&lt;BR /&gt;                &lt;/CLUSTERNODE&gt;&lt;BR /&gt;        &lt;/CLUSTERNODES&gt;&lt;BR /&gt;        &lt;CMAN expected_votes="1" two_node="1"&gt;&lt;/CMAN&gt;&lt;BR /&gt;        &lt;FENCEDEVICES&gt;&lt;BR /&gt;                &lt;FENCEDEVICE agent="fence_brocade" ipaddr="10.36.4.179" login="root" name="fc" passwd="password"&gt;&lt;/FENCEDEVICE&gt;&lt;BR /&gt;        &lt;/FENCEDEVICES&gt;&lt;BR /&gt;        &lt;RM&gt;&lt;BR /&gt;                &lt;FAILOVERDOMAINS&gt;&lt;BR /&gt;                        &lt;FAILOVERDOMAIN name="es" ordered="0" restricted="0"&gt;&lt;BR /&gt;                                &lt;FAILOVERDOMAINNODE name="nz1" priority="1"&gt;&lt;/FAILOVERDOMAINNODE&gt;&lt;BR /&gt;                                &lt;FAILOVERDOMAINNODE name="nz2" priority="1"&gt;&lt;/FAILOVERDOMAINNODE&gt;&lt;BR /&gt;                        &lt;/FAILOVERDOMAIN&gt;&lt;BR /&gt;                &lt;/FAILOVERDOMAINS&gt;&lt;BR /&gt;                &lt;RESOURCES&gt;&lt;BR /&gt;                        &lt;CLUSTERFS device="/dev/vgdb/lvol0" force_unmount="1" fstype="gfs" mountpoint="/oradata/SSR" name="ssr" options="rw"&gt;&lt;/CLUSTERFS&gt;&lt;BR /&gt;                        &lt;CLUSTERFS device="/dev/vgdb/lvol2" force_unmount="1" fstype="gfs" mountpoint="/oradata/EMMG" name="emmg" options="rw"&gt;&lt;/CLUSTERFS&gt;&lt;BR /&gt;                &lt;/RESOURCES&gt;&lt;BR /&gt;                &lt;SERVICE autostart="1" domain="es" name="test"&gt;&lt;BR /&gt;                        &lt;CLUSTERFS ref="ssr"&gt;&lt;/CLUSTERFS&gt;&lt;BR /&gt;                        &lt;CLUSTERFS ref="emmg"&gt;&lt;/CLUSTERFS&gt;&lt;BR /&gt;                &lt;/SERVICE&gt;&lt;BR /&gt;        &lt;/RM&gt;&lt;BR /&gt;&lt;/CLUSTER&gt;&lt;BR /&gt; ----------------&lt;BR /&gt;&lt;BR /&gt;Both nodes have this file.&lt;BR /&gt;&lt;BR /&gt;In the morning (IST), I'll provide details from the syslog.&lt;BR /&gt;&lt;BR /&gt;What I want to know is the following:&lt;BR /&gt;1) Does anyone have RH 4 update 2 clustering with GFS in production? If so, can you share a conf file?&lt;BR /&gt;2) If you do not believe the software is production quality, let me know.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
    <pubDate>Sun, 22 Jan 2006 14:50:52 GMT</pubDate>
    <dc:creator>Steven E. Protter</dc:creator>
    <dc:date>2006-01-22T14:50:52Z</dc:date>
    <item>
      <title>RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714803#M21444</link>
      <description>We are in the testing phase of RH update 2 clustering with gfs.&lt;BR /&gt;&lt;BR /&gt;We have set up a cluster with two GFS filesystems on shared storage and are encountering some unusual behavior.&lt;BR /&gt;&lt;BR /&gt;With dlm locking configured, the cluster comes up with the configuration file below. The problem is that the cluster does not fail over as configured. We bring down node 1, and node 2 just sits there instead of coming online and mounting the filesystems.&lt;BR /&gt;&lt;BR /&gt;We have the standard documents from the RH site and are working on the issue. This post is not complete, but I will add to it as I gather details.&lt;BR /&gt;&lt;BR /&gt;Configuration file:&lt;BR /&gt;-----------&lt;BR /&gt;&lt;BR /&gt;&lt;CLUSTER config_version="7" name="ES4"&gt;&lt;BR /&gt;        &lt;FENCE_DAEMON clean_start="0" post_fail_delay="0" post_join_delay="3"&gt;&lt;/FENCE_DAEMON&gt;&lt;BR /&gt;        &lt;CLUSTERNODES&gt;&lt;BR /&gt;                &lt;CLUSTERNODE name="nz1" votes="1"&gt;&lt;BR /&gt;                        &lt;FENCE&gt;&lt;/FENCE&gt;&lt;BR /&gt;                &lt;/CLUSTERNODE&gt;&lt;BR /&gt;                &lt;CLUSTERNODE name="nz2" votes="1"&gt;&lt;BR /&gt;                        &lt;FENCE&gt;&lt;/FENCE&gt;&lt;BR /&gt;                &lt;/CLUSTERNODE&gt;&lt;BR /&gt;        &lt;/CLUSTERNODES&gt;&lt;BR /&gt;        &lt;CMAN expected_votes="1" two_node="1"&gt;&lt;/CMAN&gt;&lt;BR /&gt;        &lt;FENCEDEVICES&gt;&lt;BR /&gt;                &lt;FENCEDEVICE agent="fence_brocade" ipaddr="10.36.4.179" login="root" name="fc" passwd="password"&gt;&lt;/FENCEDEVICE&gt;&lt;BR /&gt;        &lt;/FENCEDEVICES&gt;&lt;BR /&gt;        &lt;RM&gt;&lt;BR /&gt;                &lt;FAILOVERDOMAINS&gt;&lt;BR /&gt;                        &lt;FAILOVERDOMAIN name="es" ordered="0" restricted="0"&gt;&lt;BR /&gt;                                &lt;FAILOVERDOMAINNODE name="nz1" priority="1"&gt;&lt;/FAILOVERDOMAINNODE&gt;&lt;BR /&gt;                                &lt;FAILOVERDOMAINNODE name="nz2" priority="1"&gt;&lt;/FAILOVERDOMAINNODE&gt;&lt;BR /&gt;                        &lt;/FAILOVERDOMAIN&gt;&lt;BR /&gt;                &lt;/FAILOVERDOMAINS&gt;&lt;BR /&gt;                &lt;RESOURCES&gt;&lt;BR /&gt;                        &lt;CLUSTERFS device="/dev/vgdb/lvol0" force_unmount="1" fstype="gfs" mountpoint="/oradata/SSR" name="ssr" options="rw"&gt;&lt;/CLUSTERFS&gt;&lt;BR /&gt;                        &lt;CLUSTERFS device="/dev/vgdb/lvol2" force_unmount="1" fstype="gfs" mountpoint="/oradata/EMMG" name="emmg" options="rw"&gt;&lt;/CLUSTERFS&gt;&lt;BR /&gt;                &lt;/RESOURCES&gt;&lt;BR /&gt;                &lt;SERVICE autostart="1" domain="es" name="test"&gt;&lt;BR /&gt;                        &lt;CLUSTERFS ref="ssr"&gt;&lt;/CLUSTERFS&gt;&lt;BR /&gt;                        &lt;CLUSTERFS ref="emmg"&gt;&lt;/CLUSTERFS&gt;&lt;BR /&gt;                &lt;/SERVICE&gt;&lt;BR /&gt;        &lt;/RM&gt;&lt;BR /&gt;&lt;/CLUSTER&gt;&lt;BR /&gt; ----------------&lt;BR /&gt;&lt;BR /&gt;Both nodes have this file.&lt;BR /&gt;&lt;BR /&gt;In the morning (IST), I'll provide details from the syslog.&lt;BR /&gt;&lt;BR /&gt;What I want to know is the following:&lt;BR /&gt;1) Does anyone have RH 4 update 2 clustering with GFS in production? If so, can you share a conf file?&lt;BR /&gt;2) If you do not believe the software is production quality, let me know.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Sun, 22 Jan 2006 14:50:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714803#M21444</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-01-22T14:50:52Z</dc:date>
    </item>
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714804#M21445</link>
      <description>Hi SEP, are you using Red Hat GFS? All nodes are supposed to be able to access the file system simultaneously; you don't have to unmount and mount it.&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Jan 2006 01:44:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714804#M21445</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2006-01-23T01:44:41Z</dc:date>
    </item>
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714805#M21446</link>
      <description>You still need a gfs mount for access though, right? Otherwise how does access begin?&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 23 Jan 2006 04:01:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714805#M21446</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-01-23T04:01:54Z</dc:date>
    </item>
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714806#M21447</link>
      <description>You should run:&lt;BR /&gt;&lt;BR /&gt;mount -t gfs BlockDevice MountPoint -o option&lt;BR /&gt;&lt;BR /&gt;For example:&lt;BR /&gt;&lt;BR /&gt;mount -t gfs /dev/pool/pool0 /gfs1&lt;BR /&gt;&lt;BR /&gt;For initial access.</description>
      <pubDate>Mon, 23 Jan 2006 05:37:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714806#M21447</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2006-01-23T05:37:37Z</dc:date>
    </item>
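The mount syntax in the reply above can be applied to the two filesystems in the posted cluster.conf. A minimal sketch, taking the device paths and mount points from that config; the helper only prints the commands so they can be reviewed before running them on a cluster node:

```shell
#!/bin/sh
# Print the gfs mount commands for the two filesystems defined in the
# posted cluster.conf (device paths and mount points taken from the
# CLUSTERFS resources there). Pipe the output to sh on a node to run.
gfs_mount_cmds() {
    # device:mountpoint pairs from the cluster.conf above
    for fs in /dev/vgdb/lvol0:/oradata/SSR /dev/vgdb/lvol2:/oradata/EMMG; do
        dev="${fs%%:*}"
        mnt="${fs##*:}"
        echo "mount -t gfs ${dev} ${mnt} -o rw"
    done
}

gfs_mount_cmds
```

Printing first and piping to sh is just a safety habit; on a live node you would run the mount commands directly.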
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714807#M21448</link>
      <description>update:&lt;BR /&gt;&lt;BR /&gt;The cluster is now functional. It is very important NOT to have two NIC cards coming up on the same network unless they are bonded.&lt;BR /&gt;&lt;BR /&gt;My original issue was caused by this, and cman would not start on both nodes. The NIC involved is very flaky and, unknown to me, had magically revived itself and gotten an IP address from DHCP.&lt;BR /&gt;&lt;BR /&gt;This scenario is very, very, very bad.&lt;BR /&gt;&lt;BR /&gt;Now we're having fencing troubles.&lt;BR /&gt;&lt;BR /&gt;When the fence to a brocade switch locks a port, it stays locked unless someone manually intervenes to release the lock.&lt;BR /&gt;&lt;BR /&gt;We have been told a script can be written to reset fencing locks on brocade switch ports. There is a bunny in it for someone who submits a working script.&lt;BR /&gt;&lt;BR /&gt;Also:&lt;BR /&gt;&lt;BR /&gt;We'd like to know if anyone is using APC power switches as fence devices. If so, we'd like to know the model number that is in use and see configuration files if possible.&lt;BR /&gt;&lt;BR /&gt;Also II:&lt;BR /&gt;HP server iLO is supported.&lt;BR /&gt;&lt;BR /&gt;IBM servers use the Remote Supervisor Adapter, which appears not to be supported at this time by RH clustering. Anyone using it as a fence device anyway? If so, how? Scripts and config files earn bunnies. Anyone have a solid doc from RH on whether this type of fencing device is in use, and how?&lt;BR /&gt;&lt;BR /&gt;Our conclusion at this time is that dlm locking is production quality and gulm locking is not. Opinions?&lt;BR /&gt;&lt;BR /&gt;Lots and lots of points available here.&lt;BR /&gt;&lt;BR /&gt;Inquiring minds want to know.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Wed, 01 Feb 2006 04:44:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714807#M21448</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-02-01T04:44:20Z</dc:date>
    </item>
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714808#M21449</link>
      <description>Brocade switches run a Linux OS. SSH is enabled, so you can try something like:&lt;BR /&gt;&lt;BR /&gt;ssh admin@sanswitch portenable portnumber&lt;BR /&gt;&lt;BR /&gt;The only problem will be the password specification. You may be able to generate a public key pair without a passphrase if you log on to the sanswitch as root.</description>
      <pubDate>Wed, 01 Feb 2006 13:44:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714808#M21449</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2006-02-01T13:44:58Z</dc:date>
    </item>
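The portenable suggestion above can be wrapped into the unfence script requested earlier in the thread. This is only a sketch under that post's assumptions (passwordless public-key SSH as admin, and a Fabric OS "portenable" command); the switch name and port number are placeholders. A DRY_RUN guard, on by default, prints the command instead of executing it:

```shell
#!/bin/sh
# Sketch of a Brocade port-unfence helper, per the suggestion above.
# Assumes key-based SSH access as admin; switch/port are examples.

# Build the command that re-enables a fenced port on the switch.
unfence_cmd() {
    switch="$1"
    port="$2"
    echo "ssh admin@${switch} portenable ${port}"
}

# With DRY_RUN=1 (the default here) just show the command;
# set DRY_RUN=0 to actually run it against the switch.
unfence() {
    cmd="$(unfence_cmd "$1" "$2")"
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$cmd"
    else
        eval "$cmd"
    fi
}

unfence sanswitch 4
```

Keeping the command construction separate from execution makes the script easy to test before pointing it at a production switch.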
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714809#M21450</link>
      <description>Seems we're running into RH functionality.&lt;BR /&gt;&lt;BR /&gt;If node1 detects a problem on node2, it fences node2 off the storage. This makes it inoperable but does not force a reboot.&lt;BR /&gt;&lt;BR /&gt;Potentially node2 can still be online and holding tight to a floating IP address that node1 needs to handle failover properly.&lt;BR /&gt;&lt;BR /&gt;The answer seems to be a custom script like the one suggested in the prior post, or using iLO fencing, which would reboot node2 immediately, causing failover to node1.&lt;BR /&gt;&lt;BR /&gt;The problem is that IBM's equivalent of iLO, and for that matter Dell's, is not supported by RH.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Wed, 01 Feb 2006 18:43:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714809#M21450</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-02-01T18:43:57Z</dc:date>
    </item>
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714810#M21451</link>
      <description>Interesting problem.&lt;BR /&gt;&lt;BR /&gt;The cluster was built on machine:&lt;BR /&gt;linux1&lt;BR /&gt;A second node was added called:&lt;BR /&gt;linux2&lt;BR /&gt;&lt;BR /&gt;No matter where the packages are running, if a normal shutdown -ry is run on either node, they fail over properly if needed. Service remains online.&lt;BR /&gt;&lt;BR /&gt;If I power-switch linux2, all packages fail over to linux1 in a reasonable time.&lt;BR /&gt;&lt;BR /&gt;If I power-switch linux1, the cluster freezes up.&lt;BR /&gt;&lt;BR /&gt;clustat produces no results.&lt;BR /&gt;&lt;BR /&gt;Fencing is manual and is posted above. The problem is created by the manual fence. It fences off linux1, but linux2 cannot function.&lt;BR /&gt;&lt;BR /&gt;There is supposed to be a command that can be run to force the cluster to continue running when it's frozen like this.&lt;BR /&gt;&lt;BR /&gt;I have two bunny-eligible questions:&lt;BR /&gt;&lt;BR /&gt;What command can I use to force linux2 to take over the cluster? Or what change can I make to make this automatic?&lt;BR /&gt;&lt;BR /&gt;I'm attaching a more current cluster.conf file.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 06 Feb 2006 05:05:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714810#M21451</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-02-06T05:05:09Z</dc:date>
    </item>
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714811#M21452</link>
      <description>Duh,&lt;BR /&gt;&lt;BR /&gt;Command is fenc_ack_manual -n &lt;NODENAME&gt;&lt;BR /&gt;&lt;BR /&gt;Why the heck doesn't the cluster configuration do this itself?&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 06 Feb 2006 05:09:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714811#M21452</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-02-06T05:09:51Z</dc:date>
    </item>
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714812#M21453</link>
      <description>Type much?&lt;BR /&gt; fence_ack_manual -n &lt;NODENAME&gt;&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 06 Feb 2006 05:16:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714812#M21453</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-02-06T05:16:12Z</dc:date>
    </item>
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714813#M21454</link>
      <description>fence_ack_manual -O -n &lt;NODENAME&gt;&lt;BR /&gt;&lt;BR /&gt;The -O flag bypasses the manual acknowledgement.&lt;BR /&gt;&lt;BR /&gt;Don't use this in a cluster that includes shared storage.&lt;BR /&gt;&lt;BR /&gt;SEP&lt;BR /&gt;</description>
      <pubDate>Mon, 06 Feb 2006 05:38:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714813#M21454</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-02-06T05:38:12Z</dc:date>
    </item>
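To keep the fence_ack_manual variants from the last few posts straight, here is a sketch that just assembles the command line for each form; the node name is a placeholder, and the printed command would be run on the surviving node:

```shell
#!/bin/sh
# Build the fence_ack_manual command discussed in the posts above.
# The plain form prompts for confirmation; the -O form bypasses the
# prompt and, as noted above, should be avoided with shared storage.
fence_ack_cmd() {
    node="$1"
    bypass="${2:-}"
    if [ "$bypass" = "-O" ]; then
        echo "fence_ack_manual -O -n ${node}"
    else
        echo "fence_ack_manual -n ${node}"
    fi
}

fence_ack_cmd linux1
fence_ack_cmd linux1 -O
```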
    <item>
      <title>Re: RH update 2 Clustering configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714814#M21455</link>
      <description>Has anyone ever run samba in one of these clusters?&lt;BR /&gt;&lt;BR /&gt;How do you handle the net join issues?&lt;BR /&gt;&lt;BR /&gt;I hope I'm not talking to myself. Even a comment will garner you folks a point or two.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Tue, 07 Feb 2006 08:37:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rh-update-2-clustering-configuration/m-p/3714814#M21455</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-02-07T08:37:49Z</dc:date>
    </item>
  </channel>
</rss>

