<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: GFS configuration in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/gfs-configuration/m-p/4703906#M42507</link>
    <description>The files ha.cf, haresources and authkeys are associated with the Linux cluster solution called "Heartbeat", not with the Red Hat Cluster Suite. (Without the capitalization, "heartbeat" is also a generic cluster concept.)&lt;BR /&gt;&lt;BR /&gt;Unfortunately, it looks like the Heartbeat cluster infrastructure does not provide the cluster-wide locking services required by GFS or GFS2; it seems to allow for failover-type clusters only.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; is it possible to use gfs without lock_dlm?&lt;BR /&gt;&lt;BR /&gt;No, unless you build your own cluster-wide lock management solution and integrate it with GFS. That would require some very careful programming, followed by a lot of testing.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; how can i unlock it?&lt;BR /&gt;&lt;BR /&gt;You aren't asking the right question :(&lt;BR /&gt;&lt;BR /&gt;Lock_dlm is not there for "unlocking the GFS filesystem", but for _locking_ individual files and certain critical parts of the GFS filesystem whenever one of the cluster members is making changes to them. Without the protection of the locks, another cluster member might try to change the same thing in a different way at the same time, causing filesystem corruption.&lt;BR /&gt;&lt;BR /&gt;Lock_dlm (or some other cluster lock protocol) is an *essential* part of any GFS implementation: if you try to use GFS in a cluster without an appropriate cluster lock protocol, the GFS filesystem will get corrupted, just like a regular filesystem that is accessed by two or more hosts simultaneously.&lt;BR /&gt;&lt;BR /&gt;MK</description>
    <pubDate>Sun, 24 Oct 2010 13:08:01 GMT</pubDate>
    <dc:creator>Matti_Kurkela</dc:creator>
    <dc:date>2010-10-24T13:08:01Z</dc:date>
    <item>
      <title>GFS configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/gfs-configuration/m-p/4703903#M42504</link>
      <description>Hi all,&lt;BR /&gt;&lt;BR /&gt;I get an error message when trying to mount a GFS2 filesystem:&lt;BR /&gt;&lt;BR /&gt;[root@toto /]# mount.gfs2 /dev/dm-10 home01/&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: can't connect to gfs_controld: Connection refused&lt;BR /&gt;mount.gfs2: gfs_controld not running&lt;BR /&gt;mount.gfs2: error mounting lockproto lock_dlm&lt;BR /&gt;&lt;BR /&gt;I need your assistance, please.</description>
      <pubDate>Sun, 24 Oct 2010 08:28:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/gfs-configuration/m-p/4703903#M42504</guid>
      <dc:creator>ats1</dc:creator>
      <dc:date>2010-10-24T08:28:14Z</dc:date>
    </item>
    <item>
      <title>Re: GFS configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/gfs-configuration/m-p/4703904#M42505</link>
      <description>Perhaps your cluster infrastructure is not running?&lt;BR /&gt;&lt;BR /&gt;Your GFS2 filesystem is configured to use the "lock_dlm" lock protocol. This is required if you want to access the GFS2 filesystem on two or more hosts at the same time. If you want to use GFS2 on a single node only, you could select the dummy "lock_nolock" lock protocol instead.&lt;BR /&gt;&lt;BR /&gt;To successfully use "lock_dlm", the kernel's DLM subsystem must be up and running, which requires a basic cluster configuration: a cluster with no services defined yet, but with heartbeat and fencing configured and running.&lt;BR /&gt;&lt;BR /&gt;Normally the gfs_controld daemon starts along with the other cluster daemons when you run "service cman start". It looks like your cman start-up has not been successful; you should fix that first.&lt;BR /&gt;&lt;BR /&gt;Please show the output of "service cman status".&lt;BR /&gt;&lt;BR /&gt;The outputs of "cman_tool status", "cman_tool nodes" and "cman_tool services" might be useful too.&lt;BR /&gt;&lt;BR /&gt;Here is a technical description of what needs to happen at GFS/GFS2 filesystem mount time:&lt;BR /&gt;&lt;A href="http://people.redhat.com/teigland/rhel5-cluster-infrastructure.txt" target="_blank"&gt;http://people.redhat.com/teigland/rhel5-cluster-infrastructure.txt&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Sun, 24 Oct 2010 10:37:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/gfs-configuration/m-p/4703904#M42505</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-10-24T10:37:29Z</dc:date>
    </item>
    <item>
      <title>Re: GFS configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/gfs-configuration/m-p/4703905#M42506</link>
      <description>I used ha.cf, haresources and authkeys to configure my cluster on RedHat 5; I am not using the Cluster Suite.&lt;BR /&gt;Please help me: is it possible to use gfs without lock_dlm? How can I unlock it?</description>
      <pubDate>Sun, 24 Oct 2010 10:44:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/gfs-configuration/m-p/4703905#M42506</guid>
      <dc:creator>ats1</dc:creator>
      <dc:date>2010-10-24T10:44:37Z</dc:date>
    </item>
    <item>
      <title>Re: GFS configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/gfs-configuration/m-p/4703906#M42507</link>
      <description>The files ha.cf, haresources and authkeys are associated with the Linux cluster solution called "Heartbeat", not with the Red Hat Cluster Suite. (Without the capitalization, "heartbeat" is also a generic cluster concept.)&lt;BR /&gt;&lt;BR /&gt;Unfortunately, it looks like the Heartbeat cluster infrastructure does not provide the cluster-wide locking services required by GFS or GFS2; it seems to allow for failover-type clusters only.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; is it possible to use gfs without lock_dlm?&lt;BR /&gt;&lt;BR /&gt;No, unless you build your own cluster-wide lock management solution and integrate it with GFS. That would require some very careful programming, followed by a lot of testing.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; how can i unlock it?&lt;BR /&gt;&lt;BR /&gt;You aren't asking the right question :(&lt;BR /&gt;&lt;BR /&gt;Lock_dlm is not there for "unlocking the GFS filesystem", but for _locking_ individual files and certain critical parts of the GFS filesystem whenever one of the cluster members is making changes to them. Without the protection of the locks, another cluster member might try to change the same thing in a different way at the same time, causing filesystem corruption.&lt;BR /&gt;&lt;BR /&gt;Lock_dlm (or some other cluster lock protocol) is an *essential* part of any GFS implementation: if you try to use GFS in a cluster without an appropriate cluster lock protocol, the GFS filesystem will get corrupted, just like a regular filesystem that is accessed by two or more hosts simultaneously.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Sun, 24 Oct 2010 13:08:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/gfs-configuration/m-p/4703906#M42507</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-10-24T13:08:01Z</dc:date>
    </item>
  </channel>
</rss>