<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: optimum hba queue depth in HPE EVA Storage</title>
    <link>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505922#M40622</link>
    <description>To be honest, the system with the configuration I've shown above is not under heavy load for now, so, at least in my case, it looks like a queue depth set to 16 is OK...&lt;BR /&gt;&lt;BR /&gt;Have you tried working on the opposite side, I mean, changing some parameters on the storage? A possible reason for the 0x20000 could be that no_path_retry is not enough to queue all the stuff that's trying to get written to the disks. So you could try something like this in multipath.conf:&lt;BR /&gt;&lt;BR /&gt;no_path_retry "queue"&lt;BR /&gt;&lt;BR /&gt;(and rebuild the initrd). Doing so, you should be able to get the storage array to queue all the I/O; before failing a path or throwing errors, there will be more "buffering" room. Keep in mind that if there is a REAL reason for that 0x20000 error, with this setting you might avoid the error, but everything could still result in a very slowly responding system, because of the "queue".&lt;BR /&gt;&lt;BR /&gt;On the HBA side, you could work a bit on the disk timeout settings (check the QLogic website for info)...&lt;BR /&gt;&lt;BR /&gt;Hopefully somebody else in the forum will ass more ideas.&lt;BR /&gt;&lt;BR /&gt;Bye!</description>
    <pubDate>Thu, 01 Oct 2009 11:09:07 GMT</pubDate>
    <dc:creator>Rob_69_1</dc:creator>
    <dc:date>2009-10-01T11:09:07Z</dc:date>
    <item>
      <title>optimum hba queue depth</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505919#M40619</link>
      <description>&lt;BR /&gt;Hi.&lt;BR /&gt;&lt;BR /&gt;I have a SLES9 system using a QLogic HBA with a default queue depth of 16. This is connected to an MSA2000. I increased the HBA queue depth to 64 hoping to improve I/O performance, but got SCSI busy errors (0x20000). Do you guys know how to check the MSA2000 queue depth, so I can base my HBA adjustments on that?&lt;BR /&gt;&lt;BR /&gt;Thanks!</description>
      <pubDate>Thu, 01 Oct 2009 01:09:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505919#M40619</guid>
      <dc:creator>glennix</dc:creator>
      <dc:date>2009-10-01T01:09:36Z</dc:date>
    </item>
    <item>
      <title>Re: optimum hba queue depth</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505920#M40620</link>
      <description>Hi, 16 should be fine. Have you added something like this:&lt;BR /&gt;&lt;BR /&gt;---&lt;BR /&gt;options qla2xxx ql2xmaxqdepth=16 qlport_down_retry=10 ql2xloginretrycount=30&lt;BR /&gt;---&lt;BR /&gt;&lt;BR /&gt;in /etc/modprobe.conf?&lt;BR /&gt;&lt;BR /&gt;Also, are you using multipath? If so, edit /etc/multipath.conf following this schema (VALID FOR MSA2012fc/MSA2212fc/MSA2012i. If you have another model, the values may differ!)&lt;BR /&gt;&lt;BR /&gt;---&lt;BR /&gt;# COMMENT OUT THE WHOLE BLACKLISTED DEVICES SECTION ON TOP&lt;BR /&gt;# TO ENABLE HP+QLOGIC MULTIPATH&lt;BR /&gt;&lt;BR /&gt;### ADD THE FOLLOWING BLACKLIST AFTER THE "defaults user_friendly_names yes" SECTION&lt;BR /&gt;## Blacklist non-SAN devices&lt;BR /&gt;devnode_blacklist {&lt;BR /&gt;devnode "^sd[a-z]"&lt;BR /&gt;devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"&lt;BR /&gt;devnode "^hd[a-z]"&lt;BR /&gt;devnode "^cciss!c[0-9]d[0-9]*"&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;# add the following under the "devices" section&lt;BR /&gt;# (UNCOMMENT THE SECTION FIRST!!!):&lt;BR /&gt;# For MSA2012fc/MSA2212fc/MSA2012i&lt;BR /&gt;device&lt;BR /&gt;{&lt;BR /&gt;vendor "HP"&lt;BR /&gt;product "MSA2[02]12fc|MSA2012i"&lt;BR /&gt;getuid_callout "/sbin/scsi_id -g -u -s /block/%n"&lt;BR /&gt;hardware_handler "0"&lt;BR /&gt;path_selector "round-robin 0"&lt;BR /&gt;path_grouping_policy multibus&lt;BR /&gt;failback immediate&lt;BR /&gt;rr_weight uniform&lt;BR /&gt;no_path_retry 18&lt;BR /&gt;rr_min_io 100&lt;BR /&gt;path_checker tur&lt;BR /&gt;}&lt;BR /&gt;---&lt;BR /&gt;&lt;BR /&gt;Finally, have you rebuilt the initrd image used by the system at boot time? That could definitely be part of the issue: in many cases (depending on the vendor/OS combination), if you don't rebuild it, those parameters won't actually take effect.&lt;BR /&gt;&lt;BR /&gt;You can rebuild the initrd image using "mkinitrd". Make sure you keep a backup copy of your current initrd, and modify your grub.conf so that you can choose which initrd to use at boot time. That way, in the unfortunate event of something going wrong at the next boot with the new initrd, you can still boot using the old one. A kernel panic is possible, so do this at your own risk.&lt;BR /&gt;&lt;BR /&gt;HTH</description>
      <pubDate>Thu, 01 Oct 2009 08:54:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505920#M40620</guid>
      <dc:creator>Rob_69_1</dc:creator>
      <dc:date>2009-10-01T08:54:46Z</dc:date>
    </item>
    <item>
      <title>Re: optimum hba queue depth</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505921#M40621</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Thanks for your reply. Yup, basically, I've done all of those. For now I've reverted the change to do away with the errors; back to normal. I'm going to try to work out the optimal value for the queue depth. BTW, have you checked whether your setup is able to make full use of its current queue depth?&lt;BR /&gt;&lt;BR /&gt;Cheers!&lt;BR /&gt;</description>
      <pubDate>Thu, 01 Oct 2009 09:37:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505921#M40621</guid>
      <dc:creator>glennn</dc:creator>
      <dc:date>2009-10-01T09:37:00Z</dc:date>
    </item>
    <item>
      <title>Re: optimum hba queue depth</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505922#M40622</link>
      <description>To be honest, the system with the configuration I've shown above is not under heavy load for now, so, at least in my case, it looks like a queue depth set to 16 is OK...&lt;BR /&gt;&lt;BR /&gt;Have you tried working on the opposite side, I mean, changing some parameters on the storage? A possible reason for the 0x20000 could be that no_path_retry is not enough to queue all the stuff that's trying to get written to the disks. So you could try something like this in multipath.conf:&lt;BR /&gt;&lt;BR /&gt;no_path_retry "queue"&lt;BR /&gt;&lt;BR /&gt;(and rebuild the initrd). Doing so, you should be able to get the storage array to queue all the I/O; before failing a path or throwing errors, there will be more "buffering" room. Keep in mind that if there is a REAL reason for that 0x20000 error, with this setting you might avoid the error, but everything could still result in a very slowly responding system, because of the "queue".&lt;BR /&gt;&lt;BR /&gt;On the HBA side, you could work a bit on the disk timeout settings (check the QLogic website for info)...&lt;BR /&gt;&lt;BR /&gt;Hopefully somebody else in the forum will ass more ideas.&lt;BR /&gt;&lt;BR /&gt;Bye!</description>
      <pubDate>Thu, 01 Oct 2009 11:09:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505922#M40622</guid>
      <dc:creator>Rob_69_1</dc:creator>
      <dc:date>2009-10-01T11:09:07Z</dc:date>
    </item>
    <item>
      <title>Re: optimum hba queue depth</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505923#M40623</link>
      <description>Add.. I was meaning ADD! :-) Sorry for the funny typo :-)</description>
      <pubDate>Thu, 01 Oct 2009 11:10:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/optimum-hba-queue-depth/m-p/4505923#M40623</guid>
      <dc:creator>Rob_69_1</dc:creator>
      <dc:date>2009-10-01T11:10:51Z</dc:date>
    </item>
  </channel>
</rss>

