<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: lock disk problem in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/6695733#M56933</link>
    <description>&lt;P&gt;The CLUSTER_LOCK_LUN length check in cmcheckconf is coded incorrectly: it cannot accept standard device-mapper names, which are too long.&lt;BR /&gt;&lt;BR /&gt;My lock LUN is /dev/mapper/3600&lt;EM&gt;blah-blah&lt;/EM&gt;0000_part1, so I put a rename at the top of the 'start' case in the /etc/init.d/cmcluster script, like this:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;/sbin/dmsetup rename /dev/mapper/3600&lt;EM&gt;blah-blah&lt;/EM&gt;0000 CLUDISK 2&amp;gt;/dev/null&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;/sbin/dmsetup rename /dev/mapper/3600&lt;EM&gt;blah-blah&lt;/EM&gt;0000_part1 CLUDISK_part1 2&amp;gt;/dev/null&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;NOTE: the underscore in "_part1" is required, and the stderr redirect is there because the new name may already exist.&lt;/P&gt;&lt;P&gt;Then I used the new name in the cluster configuration file:&lt;BR /&gt;&lt;STRONG&gt;CLUSTER_LOCK_LUN /dev/mapper/CLUDISK_part1&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;This is much easier than tweaking udev rules, which is what I looked at first. You really must do the above so your disk is serviced via multipathing; otherwise your cluster is hostage to a single SAN path to the lock disk.&lt;/P&gt;&lt;P&gt;KEYWORDS: Serviceguard for Linux SAPeSG Service Guard SuSE&lt;/P&gt;&lt;P&gt;Value specified for CLUSTER_LOCK_LUN at line is too long. Its length should not exceed characters&lt;/P&gt;</description>
    <pubDate>Fri, 16 Jan 2015 19:11:07 GMT</pubDate>
    <dc:creator>Stephen_126</dc:creator>
    <dc:date>2015-01-16T19:11:07Z</dc:date>
    <item>
      <title>lock disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414506#M56928</link>
      <description>Hi - I have a 2-node Linux (SLES10 SP2) Serviceguard cluster. The storage we are using is an EVA, and I have configured the lock disk on it. What happens is: when I reboot node1, node2 also reboots because it cannot get the lock disk. Node1 survives if I reboot node2. The lock LUN is accessible from both nodes...&lt;BR /&gt;How can I find out what is going on? Thank you.&lt;BR /&gt;&lt;BR /&gt;Here is the log file:&lt;BR /&gt;May  6 15:25:01 opera2 cmcld[25936]: Obtaining Cluster Lock&lt;BR /&gt;May  6 15:25:01 opera2 cmdisklockd[25959]: Obtaining cluster lock device /dev/sdi1&lt;BR /&gt;May  6 15:25:01 opera2 cmdisklockd[25959]: Unable to obtain the lock!&lt;BR /&gt;May  6 15:25:01 opera2 cmcld[25936]: Attempting to form a new cluster&lt;BR /&gt;May  6 15:25:01 opera2 cmcld[25936]: Beginning standard election&lt;BR /&gt;May  6 15:25:07 opera2 cmcld[25936]: Obtaining Cluster Lock&lt;BR /&gt;May  6 15:25:07 opera2 cmdisklockd[25959]: Obtaining cluster lock device /dev/sdi1&lt;BR /&gt;May  6 15:25:07 opera2 cmdisklockd[25959]: Unable to obtain the lock!&lt;BR /&gt;May  6 15:25:07 opera2 cmcld[25936]: Attempting to form a new cluster&lt;BR /&gt;May  6 15:25:07 opera2 cmcld[25936]: Beginning standard election&lt;BR /&gt;May  6 15:25:12 opera2 cmcld[25936]: Obtaining Cluster Lock&lt;BR /&gt;May  6 15:25:12 opera2 cmdisklockd[25959]: Obtaining cluster lock device /dev/sdi1&lt;BR /&gt;May  6 15:25:12 opera2 cmdisklockd[25959]: Unable to obtain the lock!&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 06 May 2009 18:04:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414506#M56928</guid>
      <dc:creator>edi_4</dc:creator>
      <dc:date>2009-05-06T18:04:04Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414507#M56929</link>
      <description>In Linux, the mapping between the actual storage LUNs and /dev/sd* devices is not at all guaranteed to stay the same before &amp;amp; after a reboot. You should specify your lock device using a device path that is guaranteed to be persistent across reboots - anything else is asking for trouble.&lt;BR /&gt;&lt;BR /&gt;Is /dev/sdi1 *really* your lock disk device *now*? It certainly was back when you originally set up ServiceGuard, but that does not say anything about the current situation.&lt;BR /&gt;&lt;BR /&gt;Please run "fdisk -l /dev/sdi" to verify that the LUN actually contains the lock partition. If it doesn't, you'll need to change your cluster ASCII file to point to the lock LUN using some persistent device name, and re-apply the cluster configuration.&lt;BR /&gt;&lt;BR /&gt;I'm not very familiar with SLES, but Google tells me SLES10 has dm-multipath just like RHEL 4 and newer. Apparently the name of the necessary package is "multipath-tools". Make sure it is installed.&lt;BR /&gt;&lt;BR /&gt;Then run "multipathd -v2" to initialize the multipath system, then "multipath -l" to see the mapping between the multipath device names and the regular /dev/sd* devices. Make sure "multipathd" gets started automatically at boot and start it now if necessary: it is responsible for updating the multipath mappings automatically.&lt;BR /&gt;&lt;BR /&gt;The multipath device name for your lock disk will be something like "/dev/mapper/&lt;SOMETHING&gt;p1", where &lt;SOMETHING&gt; will either be the WWID of the multipathed disk (=a string of hex digits) or a "mpathX"-style "friendly name". The "p1" at the end is the partition identifier. SLES seems to default to WWIDs, while RedHat uses friendly names. 
If you don't like the default naming style, change it in /etc/multipath.conf.&lt;BR /&gt;&lt;BR /&gt;If you don't want to use multipathing for some reason, check the /dev/disk/by-* directories: in Linux distributions with modern udev, these will offer various ways to identify your disk devices in a persistent manner.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Wed, 06 May 2009 19:51:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414507#M56929</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2009-05-06T19:51:13Z</dc:date>
    </item>
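    Matti's note about changing the default naming style in /etc/multipath.conf can be sketched as an alias stanza. This is a hedged illustration only: the WWID is the one that appears later in this thread, and the alias name "cludisk" is an invented example.

    ```
    # /etc/multipath.conf fragment (sketch) - give the multipathed lock LUN
    # a short, persistent alias instead of its WWID-based default name.
    multipaths {
        multipath {
            wwid  3600508b40006836800017000010e0000
            alias cludisk
        }
    }
    ```

    After the multipath maps are reloaded, the disk and its first partition should appear under /dev/mapper with the alias-based name, which is short enough for a cluster configuration file.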
    <item>
      <title>Re: lock disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414508#M56930</link>
      <description>Matti is right about making sure that your multipathing is correct and that you have the disks set up with persistent names (check the docs and the certification matrix for more info).&lt;BR /&gt;&lt;BR /&gt;Also, you didn't say whether this ever worked or not. Depending on the EVA model, make sure your firmware is up to date. Some of the older EVAs were active/passive devices, which could possibly cause this.</description>
      <pubDate>Thu, 07 May 2009 01:58:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414508#M56930</guid>
      <dc:creator>Serviceguard for Linux</dc:creator>
      <dc:date>2009-05-07T01:58:43Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414509#M56931</link>
      <description>Thanks for the reply. Sometimes it works - the lock disk is accessible. The name of the lock disk survives reboots - it is always sdi1. I already tried to use udev, but the name of the lock disk is too long, and I don't know how to rename it.&lt;BR /&gt;&lt;BR /&gt;The cluster conf file is:&lt;BR /&gt;&lt;BR /&gt;NODE_NAME               opera2&lt;BR /&gt;  NETWORK_INTERFACE     bond0&lt;BR /&gt;    STATIONARY_IP       192.168.99.107&lt;BR /&gt;  NETWORK_INTERFACE     bond1&lt;BR /&gt;    HEARTBEAT_IP        192.168.240.2&lt;BR /&gt; # CLUSTER_LOCK_LUN     /dev/sdi1&lt;BR /&gt;CLUSTER_LOCK_LUN      /dev/mapper/3600508b40006836800017000010e0000-part1&lt;BR /&gt;&lt;BR /&gt;opera2:/edi # cmcheckconf -v -C file1&lt;BR /&gt;Begin cluster verification...&lt;BR /&gt;Checking cluster file: file1&lt;BR /&gt;Value specified for CLUSTER_LOCK_LUN at line 104 is too long. Its length should not exceed 39 charaters&lt;BR /&gt;cmcheckconf: Error found in cluster file: file1.&lt;BR /&gt;opera2:/edi #&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The lock LUN from the EVA is LUN 9:&lt;BR /&gt;&lt;BR /&gt;opera1:/dev/mapper # multipath -l&lt;BR /&gt;3600508b40006836800017000010e0000 dm-18 HP,HSV200&lt;BR /&gt;[size=2.0G][features=1 queue_if_no_path][hwhandler=0]&lt;BR /&gt;\_ round-robin 0 [prio=-2][active]&lt;BR /&gt; \_ 1:0:1:9  sdcf       69:48  [active][undef]&lt;BR /&gt; \_ 0:0:1:9  sdah       66:16  [active][undef]&lt;BR /&gt;\_ round-robin 0 [prio=-2][enabled]&lt;BR /&gt; \_ 1:0:0:9  sdbg       67:160 [active][undef]&lt;BR /&gt; \_ 0:0:0:9  sdi        8:128  [active][undef]&lt;BR /&gt;&lt;BR /&gt;opera2:/etc/init.d # multipath -l&lt;BR /&gt;3600508b40006836800017000010e0000 dm-18 HP,HSV200&lt;BR /&gt;[size=2.0G][features=1 queue_if_no_path][hwhandler=0]&lt;BR /&gt;\_ round-robin 0 [prio=-2][active]&lt;BR /&gt; \_ 1:0:1:9  sdcf       69:48  [active][undef]&lt;BR /&gt; \_ 0:0:0:9  sdi        8:128  [active][undef]&lt;BR /&gt;\_ round-robin 0 [prio=-2][enabled]&lt;BR /&gt; \_ 
1:0:0:9  sdbg       67:160 [active][undef]&lt;BR /&gt; \_ 0:0:1:9  sdah       66:16  [active][undef]&lt;BR /&gt;&lt;BR /&gt;opera1:/dev/mapper # dmsetup ls | grep 10e&lt;BR /&gt;3600508b40006836800017000010e0000       (253, 18)&lt;BR /&gt;3600508b40006836800017000010e0000-part1 (253, 37)&lt;BR /&gt;opera1:/dev/mapper #&lt;BR /&gt;&lt;BR /&gt;opera2:/etc/init.d # dmsetup ls | grep 10e&lt;BR /&gt;3600508b40006836800017000010e0000       (253, 18)&lt;BR /&gt;3600508b40006836800017000010e0000-part1 (253, 39)&lt;BR /&gt;opera2:/etc/init.d #&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 07 May 2009 05:25:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414509#M56931</guid>
      <dc:creator>edi_4</dc:creator>
      <dc:date>2009-05-07T05:25:12Z</dc:date>
    </item>
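    The cmcheckconf failure quoted above is simply a length limit on the device path. A minimal sketch of that check in plain shell (the 39-character limit comes from the error message in this post; check_lock_lun is a hypothetical helper for illustration, not a Serviceguard command):

    ```shell
    # Compare candidate CLUSTER_LOCK_LUN values against the 39-character
    # limit that cmcheckconf reports. Both paths below come from this thread.
    MAX_LEN=39

    check_lock_lun() {
        # Prints a verdict and returns 0 if the path fits, 1 if it is too long.
        if [ "${#1}" -le "$MAX_LEN" ]; then
            echo "OK: $1 (${#1} chars)"
        else
            echo "TOO LONG: $1 (${#1} chars)"
            return 1
        fi
    }

    check_lock_lun /dev/mapper/3600508b40006836800017000010e0000-part1 || true
    check_lock_lun /dev/sdi1
    ```

    The full mapper name is 51 characters, which is why cmcheckconf rejects it, while the short (but non-persistent) /dev/sdi1 passes.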
    <item>
      <title>Re: lock disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414510#M56932</link>
      <description>The only way to survive a reboot is to change NODE_IDLE_TIMEOUT to 20 sec. Then the lock disk can be obtained, but the cluster becomes lazy: &lt;BR /&gt;it needs 4 minutes to reconfigure...&lt;BR /&gt;Perhaps I am missing something - or there is a bug...</description>
      <pubDate>Thu, 07 May 2009 18:03:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/4414510#M56932</guid>
      <dc:creator>edi_4</dc:creator>
      <dc:date>2009-05-07T18:03:11Z</dc:date>
    </item>
    <item>
      <title>Re: lock disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/6695733#M56933</link>
      <description>&lt;P&gt;The CLUSTER_LOCK_LUN length check in cmcheckconf is coded incorrectly: it cannot accept standard device-mapper names, which are too long.&lt;BR /&gt;&lt;BR /&gt;My lock LUN is /dev/mapper/3600&lt;EM&gt;blah-blah&lt;/EM&gt;0000_part1, so I put a rename at the top of the 'start' case in the /etc/init.d/cmcluster script, like this:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;/sbin/dmsetup rename /dev/mapper/3600&lt;EM&gt;blah-blah&lt;/EM&gt;0000 CLUDISK 2&amp;gt;/dev/null&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;/sbin/dmsetup rename /dev/mapper/3600&lt;EM&gt;blah-blah&lt;/EM&gt;0000_part1 CLUDISK_part1 2&amp;gt;/dev/null&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;NOTE: the underscore in "_part1" is required, and the stderr redirect is there because the new name may already exist.&lt;/P&gt;&lt;P&gt;Then I used the new name in the cluster configuration file:&lt;BR /&gt;&lt;STRONG&gt;CLUSTER_LOCK_LUN /dev/mapper/CLUDISK_part1&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;This is much easier than tweaking udev rules, which is what I looked at first. You really must do the above so your disk is serviced via multipathing; otherwise your cluster is hostage to a single SAN path to the lock disk.&lt;/P&gt;&lt;P&gt;KEYWORDS: Serviceguard for Linux SAPeSG Service Guard SuSE&lt;/P&gt;&lt;P&gt;Value specified for CLUSTER_LOCK_LUN at line is too long. Its length should not exceed characters&lt;/P&gt;</description>
      <pubDate>Fri, 16 Jan 2015 19:11:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lolck-disk-problem/m-p/6695733#M56933</guid>
      <dc:creator>Stephen_126</dc:creator>
      <dc:date>2015-01-16T19:11:07Z</dc:date>
    </item>
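    Stephen's rename workaround can be sketched as a small fragment for the top of the 'start' case in /etc/init.d/cmcluster. The WWID is the one from this thread and CLUDISK is his alias; treat this as a sketch under those assumptions, not a tested init script.

    ```shell
    # Rename the multipath maps for the lock LUN to a short alias before
    # Serviceguard starts, so CLUSTER_LOCK_LUN fits the 39-character limit.
    WWID=3600508b40006836800017000010e0000   # as shown by dmsetup ls on this cluster
    ALIAS=CLUDISK

    # Both the whole-disk map and its partition map must be renamed, and the
    # partition name must keep the underscore: "${ALIAS}_part1". stderr is
    # discarded (and failure tolerated) because the maps may already carry
    # the new names from an earlier boot.
    if command -v dmsetup >/dev/null; then
        dmsetup rename "/dev/mapper/${WWID}" "${ALIAS}" 2>/dev/null || true
        dmsetup rename "/dev/mapper/${WWID}_part1" "${ALIAS}_part1" 2>/dev/null || true
    fi

    # The renamed partition is what goes into the cluster configuration file:
    LOCK_LUN="/dev/mapper/${ALIAS}_part1"
    echo "CLUSTER_LOCK_LUN ${LOCK_LUN}"
    ```

    Because dmsetup only renames the in-kernel map, this must run on every boot before cmcluster starts, which is why it lives in the init script rather than in a one-off command.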
  </channel>
</rss>

