<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic multipath -ll showing [faulty] path in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600009#M82572</link>
    <description>Hello,&lt;BR /&gt;&lt;BR /&gt;We are running SLES 10 SP2 with device-mapper-1.02.13-6.14 and eight paths to an EVA LUN.&lt;BR /&gt;&lt;BR /&gt;There has been a path interruption in one fabric, and multipath -ll is now giving me this output:&lt;BR /&gt;&lt;BR /&gt;\_ round-robin 0 [prio=100][active]&lt;BR /&gt; \_ 5:0:3:1 sdh 8:112 [active][ready]&lt;BR /&gt; \_ 5:0:2:1 sdg 8:96  [active][ready]&lt;BR /&gt; \_ 4:0:3:1 sdd 8:48  [failed][faulty]&lt;BR /&gt; \_ 4:0:2:1 sdc 8:32  [failed][faulty]&lt;BR /&gt;\_ round-robin 0 [prio=20][enabled]&lt;BR /&gt; \_ 5:0:1:1 sdf 8:80  [active][ready]&lt;BR /&gt; \_ 5:0:0:1 sde 8:64  [active][ready]&lt;BR /&gt; \_ 4:0:1:1 sdb 8:16  [failed][faulty]&lt;BR /&gt; \_ 4:0:0:1 sda 8:0   [failed][faulty]&lt;BR /&gt;&lt;BR /&gt;However, if I look at the paths with adapter_info (/opt/hp/hp_fibreutils), I can see that all eight paths are online again. Each of the eight is transmitting requests, so I can see I/O going through both fabrics.&lt;BR /&gt;&lt;BR /&gt;My questions are:&lt;BR /&gt;1) Shouldn't multipath -ll automatically update the state of the paths to [active][ready] once they are available again?&lt;BR /&gt;2) Will it help to issue a multipath -v1 command to update the multipath map?&lt;BR /&gt;3) Will the command in 2) disrupt traffic to the SAN LUN and impact the running OS (it is a boot-from-SAN (BFS) blade, with this LUN holding the root FS)?&lt;BR /&gt;&lt;BR /&gt;Thanks a lot for any clarification, and best regards.&lt;BR /&gt;&lt;BR /&gt;Markus&lt;BR /&gt;</description>
    <pubDate>Fri, 12 Mar 2010 13:58:19 GMT</pubDate>
    <dc:creator>Markus Wiedner</dc:creator>
    <dc:date>2010-03-12T13:58:19Z</dc:date>
    <item>
      <title>multipath -ll showing [faulty] path</title>
      <link>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600009#M82572</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;We are running SLES 10 SP2 with device-mapper-1.02.13-6.14 and eight paths to an EVA LUN.&lt;BR /&gt;&lt;BR /&gt;There has been a path interruption in one fabric, and multipath -ll is now giving me this output:&lt;BR /&gt;&lt;BR /&gt;\_ round-robin 0 [prio=100][active]&lt;BR /&gt; \_ 5:0:3:1 sdh 8:112 [active][ready]&lt;BR /&gt; \_ 5:0:2:1 sdg 8:96  [active][ready]&lt;BR /&gt; \_ 4:0:3:1 sdd 8:48  [failed][faulty]&lt;BR /&gt; \_ 4:0:2:1 sdc 8:32  [failed][faulty]&lt;BR /&gt;\_ round-robin 0 [prio=20][enabled]&lt;BR /&gt; \_ 5:0:1:1 sdf 8:80  [active][ready]&lt;BR /&gt; \_ 5:0:0:1 sde 8:64  [active][ready]&lt;BR /&gt; \_ 4:0:1:1 sdb 8:16  [failed][faulty]&lt;BR /&gt; \_ 4:0:0:1 sda 8:0   [failed][faulty]&lt;BR /&gt;&lt;BR /&gt;However, if I look at the paths with adapter_info (/opt/hp/hp_fibreutils), I can see that all eight paths are online again. Each of the eight is transmitting requests, so I can see I/O going through both fabrics.&lt;BR /&gt;&lt;BR /&gt;My questions are:&lt;BR /&gt;1) Shouldn't multipath -ll automatically update the state of the paths to [active][ready] once they are available again?&lt;BR /&gt;2) Will it help to issue a multipath -v1 command to update the multipath map?&lt;BR /&gt;3) Will the command in 2) disrupt traffic to the SAN LUN and impact the running OS (it is a boot-from-SAN (BFS) blade, with this LUN holding the root FS)?&lt;BR /&gt;&lt;BR /&gt;Thanks a lot for any clarification, and best regards.&lt;BR /&gt;&lt;BR /&gt;Markus&lt;BR /&gt;</description>
      <pubDate>Fri, 12 Mar 2010 13:58:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600009#M82572</guid>
      <dc:creator>Markus Wiedner</dc:creator>
      <dc:date>2010-03-12T13:58:19Z</dc:date>
    </item>
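    <!-- A minimal way to check and refresh path state on a SLES 10 host like
         this one; a sketch only, assuming the stock multipath-tools, with the
         device names taken from the post above:

           # Is the path-checker daemon running? Without it, failed paths
           # are never reinstated automatically.
           /etc/init.d/multipathd status

           # Rescan the paths and reload the maps verbosely; it refuses to
           # touch a map that is in use, so it is normally safe on a
           # boot-from-SAN host.
           multipath -v2

           # Ask the daemon itself which paths it sees and what its checker
           # thinks of them (interactive multipathd CLI):
           multipathd -k'show paths'
    -->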
    <item>
      <title>Re: multipath -ll showing [faulty] path</title>
      <link>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600010#M82573</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Check the connection to your SAN.&lt;BR /&gt;&lt;BR /&gt;mikap</description>
      <pubDate>Fri, 12 Mar 2010 14:05:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600010#M82573</guid>
      <dc:creator>Michal Kapalka (mikap)</dc:creator>
      <dc:date>2010-03-12T14:05:24Z</dc:date>
    </item>
    <item>
      <title>Re: multipath -ll showing [faulty] path</title>
      <link>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600011#M82574</link>
      <description>Do you have the multipathd daemon running?&lt;BR /&gt;&lt;BR /&gt;It is the component that periodically checks for failed paths, and presumably also checks whether previously failed paths have become functional again.&lt;BR /&gt;&lt;BR /&gt;I guess the kernel dm-multipath module would be smart enough to stop using a path if it produces errors, but it would not necessarily resume using it automatically when it starts to work again.&lt;BR /&gt;&lt;BR /&gt;1) multipath -l and -ll just display the current state: I would not expect them to update anything.&lt;BR /&gt;&lt;BR /&gt;2) Yes, it might help.&lt;BR /&gt;&lt;BR /&gt;3) If it decides it should change the configuration of a multipath device that is already in use, it won't do it and will produce a "path in use" error message instead. But unless your WWIDs have somehow changed, there should be no reason for it to change the existing devices.&lt;BR /&gt;So it should be harmless.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Fri, 12 Mar 2010 15:24:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600011#M82574</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-03-12T15:24:34Z</dc:date>
    </item>
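    <!-- For context on the checker MK describes: its behaviour is driven by
         the defaults section of /etc/multipath.conf. The snippet below is an
         illustrative sketch, not tuned values for the EVA:

           defaults {
               polling_interval  5          # seconds between path checks
               failback          immediate  # regroup as soon as a path returns
           }

         The daemon only picks the file up after a reload or restart, e.g.
         via /etc/init.d/multipathd restart.
    -->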
    <item>
      <title>Re: multipath -ll showing [faulty] path</title>
      <link>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600012#M82575</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Thanks for your replies so far!&lt;BR /&gt;&lt;BR /&gt;Now I've given the multipath -v2 command a try and received the following output for each of the four devices in the fabric in which the path interruption occurred:&lt;BR /&gt;&lt;BR /&gt;sdd: not found in pathvec&lt;BR /&gt;sdd: mask = 0x1f&lt;BR /&gt;sdd: dev_t = 8:48&lt;BR /&gt;sdd: size = 419430400&lt;BR /&gt;sdd: subsystem = scsi&lt;BR /&gt;sdd: vendor = HP&lt;BR /&gt;sdd: product = HSV210&lt;BR /&gt;sdd: rev = 6200&lt;BR /&gt;sdd: h:b:t:l = 4:0:3:1&lt;BR /&gt;sdd: serial =&lt;BR /&gt;sdd: getuid = /sbin/scsi_id -g -u -s /block/%n (controller setting)&lt;BR /&gt;error calling out /sbin/scsi_id -g -u -s /block/sdd&lt;BR /&gt;sdd: prio = alua (controller setting)&lt;BR /&gt;sdd: couln't get supported alua states&lt;BR /&gt;sdd: alua prio error&lt;BR /&gt;error calling out /sbin/scsi_id -g -u -s /block/sdd&lt;BR /&gt;&lt;BR /&gt;Again, I'm almost 100 percent sure that, physically, the paths in that fabric are working fine.&lt;BR /&gt;&lt;BR /&gt;What makes me certain is "adapter_info -d 4" (with 4 being the HBA pointing to the fabric in question) showing me:&lt;BR /&gt;&lt;BR /&gt;LUNs&lt;BR /&gt;----------&lt;BR /&gt;( 0: 0): Total reqs 3, Pending reqs 0, flags 0x0*, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 0: 1): Total reqs 5286631, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 1: 0): Total reqs 3, Pending reqs 0, flags 0x0*, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 1: 1): Total reqs 5263887, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 2: 0): Total reqs 3, Pending reqs 0, flags 0x0*, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 2: 1): Total reqs 24103445, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 3: 0): Total reqs 3, Pending reqs 0, flags 0x0*, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 3: 1): Total reqs 24082057, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;&lt;BR /&gt;The I/O request counts increase whenever I refresh the output, so there is traffic going to the LUN through that fabric.&lt;BR /&gt;&lt;BR /&gt;Only multipath -ll has not recognized that yet, and the question is: where is the problem?&lt;BR /&gt;&lt;BR /&gt;Interestingly enough, the output of multipath -ll has changed over the weekend to:&lt;BR /&gt;&lt;BR /&gt; \_ round-robin 0 [prio=100][active]&lt;BR /&gt; \_ 5:0:3:1 sdh 8:112 [active][ready]&lt;BR /&gt; \_ 5:0:2:1 sdg 8:96  [active][ready]&lt;BR /&gt; \_ 4:0:3:1 sdd 8:48  [failed][faulty]&lt;BR /&gt; \_ 4:0:2:1 sdc 8:32  [failed][faulty]&lt;BR /&gt;\_ round-robin 0 [prio=20][enabled]&lt;BR /&gt; \_ 5:0:1:1 sdf 8:80  [active][ready]&lt;BR /&gt; \_ 5:0:0:1 sde 8:64  [active][ready]&lt;BR /&gt; \_ 4:0:1:1 sdb 8:16  [active][faulty]&lt;BR /&gt; \_ 4:0:0:1 sda 8:0   [active][faulty]&lt;BR /&gt;&lt;BR /&gt;So the Device Mapper status for sdb and sda has changed from failed to active. However, the path status is still faulty.&lt;BR /&gt;&lt;BR /&gt;My next idea is simply to restart the multipathd daemon and see what happens.&lt;BR /&gt;Does anyone think this will significantly interrupt or otherwise impact I/O going to the LUN (we boot from SAN, and this is the server's only LUN)?&lt;BR /&gt;&lt;BR /&gt;Hints and comments are very welcome!&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Markus</description>
      <pubDate>Mon, 15 Mar 2010 08:31:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600012#M82575</guid>
      <dc:creator>Markus Wiedner</dc:creator>
      <dc:date>2010-03-15T08:31:35Z</dc:date>
    </item>
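    <!-- The "error calling out /sbin/scsi_id" lines above hint that the
         device is not answering SCSI inquiries at all. The callout can be
         run by hand, exactly as multipath-tools would run it, to separate a
         multipath problem from a transport problem (sdd as in the output
         above):

           /sbin/scsi_id -g -u -s /block/sdd
           echo $?   # non-zero means the failure is below the multipath layer

           # A second data point: does the device still answer a trivial read?
           dd if=/dev/sdd of=/dev/null bs=512 count=1
    -->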
    <item>
      <title>Re: multipath -ll showing [faulty] path</title>
      <link>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600013#M82576</link>
      <description>Did you configure multipath correctly for EVA devices? Different SAN arrays need different settings (I don't know what the EVA needs; we only have EMC here).</description>
      <pubDate>Mon, 15 Mar 2010 14:31:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600013#M82576</guid>
      <dc:creator>dirk dierickx</dc:creator>
      <dc:date>2010-03-15T14:31:50Z</dc:date>
    </item>
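    <!-- The array-specific settings dirk refers to live in a devices section
         of /etc/multipath.conf. The sketch below takes vendor, product,
         getuid_callout and prio from the -v2 output earlier in the thread
         (HP HSV210, alua); the remaining values are assumptions, consistent
         with the two priority groups (100/20) shown above, and should be
         checked against HP's EVA connectivity guide for SLES 10:

           devices {
               device {
                   vendor                "HP"
                   product               "HSV210"
                   path_grouping_policy  group_by_prio   # assumption
                   getuid_callout        "/sbin/scsi_id -g -u -s /block/%n"
                   prio                  alua
                   path_checker          tur             # assumption
                   failback              immediate       # assumption
               }
           }
    -->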
    <item>
      <title>Re: multipath -ll showing [faulty] path</title>
      <link>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600014#M82577</link>
      <description>I checked again, and my assumption that the paths to the SAN were back was wrong.&lt;BR /&gt;&lt;BR /&gt;I was misled by the output of adapter_info (hp_fibreutils), which shows requests for every path:&lt;BR /&gt;&lt;BR /&gt;LUNs&lt;BR /&gt;----------&lt;BR /&gt;( 0: 0): Total reqs 246, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 0: 1): Total reqs 6076381, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 1: 0): Total reqs 246, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 1: 1): Total reqs 6053637, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 2: 0): Total reqs 246, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 2: 1): Total reqs 24893276, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 3: 0): Total reqs 246, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;( 3: 1): Total reqs 24871807, Pending reqs 0, flags 0x0, Dflags 0x0, 0:0:1000 00&lt;BR /&gt;&lt;BR /&gt;I realized that all the paths are pointing to 1000 and cross-checked with /opt/hp/hp_fibreutils/lssd -w whether the sdX devices are actually bound to the SAN LUN - they are not!&lt;BR /&gt;&lt;BR /&gt;So it is not Device Mapper that has the problem, but the connection to the SAN. I will run hp_rescan -a, and if that doesn't help I will need to reboot the server.</description>
      <pubDate>Tue, 16 Mar 2010 11:32:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/multipath-ll-showing-faulty-path/m-p/4600014#M82577</guid>
      <dc:creator>Markus Wiedner</dc:creator>
      <dc:date>2010-03-16T11:32:17Z</dc:date>
    </item>
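    <!-- hp_rescan comes from HP's fibreutils package; where it is not
         available, a per-HBA rescan through sysfs usually has a similar
         effect on a 2.6 kernel (host4 as in the thread), followed by a map
         reload:

           echo "- - -" > /sys/class/scsi_host/host4/scan
           multipath -v2
    -->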
  </channel>
</rss>

