<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>LVM Commands Hang on SLES 12 SP 1 HAE Cluster in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974015#M55031</link>
    <description>&lt;P&gt;All LVM commands hang, with output similar to this:&lt;/P&gt;&lt;P&gt;# vgscan -vvvv&lt;BR /&gt;#lvmcmdline.c:1466 DEGRADED MODE. Incomplete RAID LVs will be processed.&lt;BR /&gt;#libdm-config.c:997 Setting activation/monitoring to 1&lt;BR /&gt;#lvmcmdline.c:1472 Processing: vgscan -vvvv&lt;BR /&gt;#lvmcmdline.c:1473 system ID:&lt;BR /&gt;#lvmcmdline.c:1476 O_DIRECT will be used&lt;BR /&gt;#libdm-config.c:933 Setting global/locking_type to 3&lt;BR /&gt;#libdm-config.c:997 Setting global/wait_for_locks to 1&lt;BR /&gt;#locking/locking.c:155 Cluster locking selected.&lt;/P&gt;&lt;P&gt;I noticed that "locking_type = 3" is set in /etc/lvm/lvm.conf. I read elsewhere on this forum that changing it to 1 can resolve the problem; however, I believe that locking type is required for HAE (corosync) clustering, and I don't want to bring down the cluster.&lt;/P&gt;&lt;P&gt;Help!&lt;/P&gt;</description>
    <pubDate>Wed, 16 Aug 2017 18:01:21 GMT</pubDate>
    <dc:creator>cjhsa</dc:creator>
    <dc:date>2017-08-16T18:01:21Z</dc:date>
    <item>
      <title>LVM Commands Hang on SLES 12 SP 1 HAE Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974015#M55031</link>
      <description>&lt;P&gt;All LVM commands hang, with output similar to this:&lt;/P&gt;&lt;P&gt;# vgscan -vvvv&lt;BR /&gt;#lvmcmdline.c:1466 DEGRADED MODE. Incomplete RAID LVs will be processed.&lt;BR /&gt;#libdm-config.c:997 Setting activation/monitoring to 1&lt;BR /&gt;#lvmcmdline.c:1472 Processing: vgscan -vvvv&lt;BR /&gt;#lvmcmdline.c:1473 system ID:&lt;BR /&gt;#lvmcmdline.c:1476 O_DIRECT will be used&lt;BR /&gt;#libdm-config.c:933 Setting global/locking_type to 3&lt;BR /&gt;#libdm-config.c:997 Setting global/wait_for_locks to 1&lt;BR /&gt;#locking/locking.c:155 Cluster locking selected.&lt;/P&gt;&lt;P&gt;I noticed that "locking_type = 3" is set in /etc/lvm/lvm.conf. I read elsewhere on this forum that changing it to 1 can resolve the problem; however, I believe that locking type is required for HAE (corosync) clustering, and I don't want to bring down the cluster.&lt;/P&gt;&lt;P&gt;Help!&lt;/P&gt;</description>
      <pubDate>Wed, 16 Aug 2017 18:01:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974015#M55031</guid>
      <dc:creator>cjhsa</dc:creator>
      <dc:date>2017-08-16T18:01:21Z</dc:date>
    </item>
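For reference, the setting in question lives in /etc/lvm/lvm.conf. A minimal fragment, assuming stock lvm2 defaults and taking the values from the verbose vgscan output above:

```
# /etc/lvm/lvm.conf (fragment)
global {
    # 1 = local, file-based locking; 3 = clustered locking through clvmd.
    # HAE/clvmd setups need 3; a hang here usually means the cluster
    # locking daemon is unreachable, not that 3 is the wrong value.
    locking_type = 3
    # Block and wait for locks rather than failing immediately.
    wait_for_locks = 1
}
```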
    <item>
      <title>Re: LVM Commands Hang on SLES 12 SP 1 HAE Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974030#M55032</link>
      <description>&lt;P&gt;Also, in the dmesg output I see this, over and over:&lt;/P&gt;&lt;P&gt;floppy: error -5 while reading block 0&lt;/P&gt;</description>
      <pubDate>Wed, 16 Aug 2017 18:33:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974030#M55032</guid>
      <dc:creator>cjhsa</dc:creator>
      <dc:date>2017-08-16T18:33:28Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Commands Hang on SLES 12 SP 1 HAE Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974094#M55033</link>
      <description>&lt;P&gt;This indicates that your system is unable to obtain cluster locks from all of the cluster member nodes. Check the cluster status on every member node and make sure the clvmd service (the name may differ on SUSE 12) and related services are up. You may also need to look into the cluster logs to see what is happening; this is just a hint.&lt;/P&gt;</description>
      <pubDate>Thu, 17 Aug 2017 08:18:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974094#M55033</guid>
      <dc:creator>simplylinuxfaq</dc:creator>
      <dc:date>2017-08-17T08:18:09Z</dc:date>
    </item>
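The per-node status checks suggested above can be swept in one pass. A minimal sketch, assuming a SLES 12 HAE node; the exact commands and whether clvmd runs standalone or as a Pacemaker resource are site-specific assumptions, so adjust the list to your cluster:

```shell
#!/bin/bash
# Run the basic health checks for clustered LVM on this node.
# Each entry is attempted; anything missing or failing is reported
# instead of aborting the sweep.
checks=("crm_mon -1" "crm resource status" "ps -C clvmd -o pid,stat,cmd")

failed=0
for cmd in "${checks[@]}"; do
  echo "== $cmd =="
  $cmd 2>/dev/null || {
    echo "   (not available or failed on this host)"
    failed=$((failed + 1))
  }
done
echo "checks run: ${#checks[@]}, failed or unavailable: $failed"
```

Repeat the sweep on every member node; a node where `ps` shows clvmd in state `D` (uninterruptible sleep) is a strong candidate for the hang.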
    <item>
      <title>Re: LVM Commands Hang on SLES 12 SP 1 HAE Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974135#M55034</link>
      <description>&lt;P&gt;They LOOK fine. clvmd is running on both. And:&lt;/P&gt;&lt;P&gt;hostname1 # crm_mon -1&lt;BR /&gt;Last updated: Thu Aug 17 12:25:08 2017 Last change: Mon May 22 14:27:58 2017 by hacluster via crmd on hostname1&lt;BR /&gt;Stack: corosync&lt;BR /&gt;Current DC: hostname1 (version 1.1.13-17.2-6f22ad7) - partition with quorum&lt;BR /&gt;2 nodes and 40 resources configured&lt;/P&gt;&lt;P&gt;Online: [ hostname1 hostname2 ]&lt;/P&gt;&lt;P&gt;.... (packages look fine)&lt;/P&gt;&lt;P&gt;hostname2 # crm_mon -1&lt;BR /&gt;Last updated: Thu Aug 17 12:28:47 2017 Last change: Mon May 22 14:27:58 2017 by hacluster via crmd on hostname1&lt;BR /&gt;Stack: corosync&lt;BR /&gt;Current DC: hostname1 (version 1.1.13-17.2-6f22ad7) - partition with quorum&lt;BR /&gt;2 nodes and 40 resources configured&lt;/P&gt;&lt;P&gt;Online: [ hostname1 hostname2 ]&lt;/P&gt;</description>
      <pubDate>Thu, 17 Aug 2017 14:49:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974135#M55034</guid>
      <dc:creator>cjhsa</dc:creator>
      <dc:date>2017-08-17T14:49:31Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Commands Hang on SLES 12 SP 1 HAE Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974328#M55037</link>
      <description>&lt;P&gt;It is possible that the clvmd process exists but is hanging for some reason.&lt;/P&gt;&lt;P&gt;You might try running "clvmd -R": it tells all the clvmd processes within the cluster to reread the LVM configuration file and reload their device cache. That should be safe to run at any time. If the command produces any errors, they may indicate which cluster node has clvmd in a non-sane state.&lt;/P&gt;&lt;P&gt;On that cluster node, you might then try "clvmd -S" to tell clvmd to exit and re-execute itself, keeping any locks it held before the restart. But you might have to shut down any clustered applications on that node, and perhaps reboot the node, to clear the hang and to ensure that all the cluster software components are again running in a correct state.&lt;/P&gt;&lt;P&gt;You might also consider checking the patch notes for any available clvmd updates; this resembles a clvmd bug we used to have on Red Hat, which was fixed quite a while ago.&lt;/P&gt;</description>
      <pubDate>Sun, 20 Aug 2017 16:51:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-commands-hang-on-sles-12-sp-1-hae-cluster/m-p/6974328#M55037</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2017-08-20T16:51:47Z</dc:date>
    </item>
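The refresh-then-restart sequence described above can be sketched as follows. `-R` and `-S` are documented clvmd flags, but whether they succeed depends on the cluster's state, so treat this as a guide rather than a script to run blindly on a production node:

```shell
#!/bin/bash
# Attempt the safe, cluster-wide refresh first; only escalate to a
# daemon restart on the node that the refresh identifies as wedged.
refresh_ok=no

# Step 1: ask every clvmd in the cluster to reread lvm.conf and
# reload its device cache. Safe at any time; errors from this
# command help point at the node whose clvmd is hung.
if clvmd -R 2>/dev/null; then
  refresh_ok=yes
fi

if [ "$refresh_ok" = yes ]; then
  echo "clvmd -R succeeded cluster-wide; recheck with: vgscan"
else
  echo "clvmd -R failed or clvmd is not installed on this host"
  # Step 2, on the hung node only: 'clvmd -S' restarts the daemon
  # in place while preserving the exclusive locks it holds. If the
  # hang persists, fall back to stopping clustered applications on
  # that node and rebooting it.
fi
```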
  </channel>
</rss>