<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Failover using multipathd on Linux 5.4 in HPE EVA Storage</title>
    <link>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537679#M41635</link>
    <description>I have qlport_down_retry set to 30. Most of my servers have the HBA/QLogic-level failover set to false, and failover is taken care of by PowerPath and the array. But I have only a few machines running with multipath.</description>
    <pubDate>Mon, 23 Nov 2009 05:23:02 GMT</pubDate>
    <dc:creator>skt_skt</dc:creator>
    <dc:date>2009-11-23T05:23:02Z</dc:date>
    <item>
      <title>Failover using multipathd on Linux 5.4</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537675#M41631</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Since HP has told us we need to start using the native multipathd and QLogic HBA driver starting with Linux 5.3, I've been trying it on a test box running Oracle Enterprise Linux 5.4 64-bit, using two QLogic HBAs to an EVA 8100.&lt;BR /&gt;&lt;BR /&gt;I've got multipathd working in theory, but it takes a long time to do a failover. I'll run an "ls -l" on a SAN-mounted LUN after disabling one of the two fiber ports that the server is connected to, and it will take 1-2 minutes before the command works.&lt;BR /&gt;&lt;BR /&gt;Is this expected behavior, or is there a way to make it shorter? I don't want to risk downtime if an Oracle database on one of these LUNs loses connectivity. I've played with some of the multipath.conf settings without much luck.&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Fri, 20 Nov 2009 20:37:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537675#M41631</guid>
      <dc:creator>Tim Barton</dc:creator>
      <dc:date>2009-11-20T20:37:18Z</dc:date>
    </item>
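    <!--
    The delay described above is largely the time the qla2xxx driver waits (qlport_down_retry,
    in seconds) before declaring a port dead, after which multipathd fails I/O over to the
    surviving path. A minimal, illustrative /etc/multipath.conf device section for an EVA array
    is sketched below; the exact values are assumptions, not HP's official recommendation, so
    consult the Device Mapper Enablement Kit release notes cited later in this thread:

    ```
    device {
            vendor                  "HP"
            product                 "HSV2.*"
            path_grouping_policy    group_by_prio
            prio_callout            "/sbin/mpath_prio_alua /dev/%n"
            path_checker            tur
            path_selector           "round-robin 0"
            failback                immediate
            no_path_retry           12
    }
    ```

    With settings like these, multipathd groups paths by ALUA priority and fails back as soon as
    a path returns; no_path_retry bounds how long I/O queues when every path is down.
    -->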
    <item>
      <title>Re: Failover using multipathd on Linux 5.4</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537676#M41632</link>
      <description>Hi Tim&lt;BR /&gt;&lt;BR /&gt;The Release Notes of HP's own Device Mapper Enablement Kit have a section with the recommended device parameters:&lt;BR /&gt;&lt;A href="http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?prodTypeId=12169&amp;amp;prodSeriesId=3559651&amp;amp;swItem=co-77761-1&amp;amp;prodNameId=3559652&amp;amp;swEnvOID=4035&amp;amp;swLang=13&amp;amp;taskId=135&amp;amp;mode=4&amp;amp;idx=0#Device_par_val" target="_blank"&gt;http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?prodTypeId=12169&amp;amp;prodSeriesId=3559651&amp;amp;swItem=co-77761-1&amp;amp;prodNameId=3559652&amp;amp;swEnvOID=4035&amp;amp;swLang=13&amp;amp;taskId=135&amp;amp;mode=4&amp;amp;idx=0#Device_par_val&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Scroll down a bit to see the QLogic HBA parameters. I think these also influence the failover time. They recommend "ql2xmaxqdepth=16 qlport_down_retry=10 ql2xloginretrycount=30" with the native QLogic drivers.</description>
      <pubDate>Sat, 21 Nov 2009 16:56:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537676#M41632</guid>
      <dc:creator>Michael Leu</dc:creator>
      <dc:date>2009-11-21T16:56:36Z</dc:date>
    </item>
    <item>
      <title>Re: Failover using multipathd on Linux 5.4</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537677#M41633</link>
      <description>What are the default values you have for those parameters? Even with the defaults it should not take this much time.&lt;BR /&gt;&lt;BR /&gt;I doubt you need to tune at the QLogic level, as the multipath and array settings should take care of this.</description>
      <pubDate>Sun, 22 Nov 2009 07:45:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537677#M41633</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2009-11-22T07:45:18Z</dc:date>
    </item>
    <item>
      <title>Re: Failover using multipathd on Linux 5.4</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537678#M41634</link>
      <description>I have this in modprobe.conf:&lt;BR /&gt;&lt;BR /&gt;options qla2xxx ql2xmaxqdepth=16 ql2xloginretrycount=30 qlport_down_retry=64&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I tried taking out the qlport_down_retry parameter, but that didn't help. I'll try changing it to 10.</description>
      <pubDate>Sun, 22 Nov 2009 19:51:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537678#M41634</guid>
      <dc:creator>Tim Barton</dc:creator>
      <dc:date>2009-11-22T19:51:44Z</dc:date>
    </item>
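    <!--
    A sketch of the modprobe.conf line with the values Michael quoted from the release notes; the
    post above uses qlport_down_retry=64, which by itself adds roughly a minute before a dead
    port is declared down:

    ```
    options qla2xxx ql2xmaxqdepth=16 qlport_down_retry=10 ql2xloginretrycount=30
    ```

    On RHEL 5-era systems the qla2xxx module is typically loaded from the initrd, so after
    editing /etc/modprobe.conf rebuild it (mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r))
    and reboot. The value actually in effect can be checked with
    cat /sys/module/qla2xxx/parameters/qlport_down_retry.
    -->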
    <item>
      <title>Re: Failover using multipathd on Linux 5.4</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537679#M41635</link>
      <description>I have qlport_down_retry set to 30. Most of my servers have the HBA/QLogic-level failover set to false, and failover is taken care of by PowerPath and the array. But I have only a few machines running with multipath.</description>
      <pubDate>Mon, 23 Nov 2009 05:23:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537679#M41635</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2009-11-23T05:23:02Z</dc:date>
    </item>
    <item>
      <title>Re: Failover using multipathd on Linux 5.4</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537680#M41636</link>
      <description>Looks like those recommended settings did the trick. Failover response time is now just a few seconds, which should be fine.&lt;BR /&gt;Thanks!</description>
      <pubDate>Mon, 30 Nov 2009 19:46:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537680#M41636</guid>
      <dc:creator>Tim Barton</dc:creator>
      <dc:date>2009-11-30T19:46:58Z</dc:date>
    </item>
    <item>
      <title>Re: Failover using multipathd on Linux 5.4</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537681#M41637</link>
      <description>Did you tune all three parameters Tim mentioned? Can you confirm the current values you have?</description>
      <pubDate>Sat, 05 Dec 2009 12:44:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/failover-using-multipathd-on-linux-5-4/m-p/4537681#M41637</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2009-12-05T12:44:49Z</dc:date>
    </item>
  </channel>
</rss>

