HPE EVA Storage

optimum hba queue depth

Occasional Advisor

optimum hba queue depth


I have an SLES9 host using a QLogic HBA with a default queue depth of 16, connected to an MSA2000. I increased the HBA queue depth to 64 hoping to improve I/O performance, but got SCSI busy errors (0x20000). Does anyone know how to check the MSA2000's queue depth so I can base my HBA adjustments on that?
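For context on why raising the HBA value can make the array push back with busy errors: the array port can only accept a limited number of outstanding commands, and every LUN and path multiplies the host's worst-case load. A rough back-of-the-envelope check, where all the numbers (especially the per-port limit) are illustrative assumptions, not MSA2000 specs:

```shell
# Rough sizing check: worst-case outstanding commands vs. an assumed
# array port limit. All values below are illustrative, not MSA2000 specs.
HBA_QDEPTH=64     # per-LUN queue depth set on the HBA
LUNS=8            # LUNs presented to this host
PATHS=2           # paths per LUN (multipath)
PORT_LIMIT=512    # hypothetical per-port command limit on the array

OUTSTANDING=$((HBA_QDEPTH * LUNS * PATHS))
echo "worst-case outstanding commands: $OUTSTANDING"
if [ "$OUTSTANDING" -gt "$PORT_LIMIT" ]; then
    echo "risk of overrunning the array port queue: $OUTSTANDING > $PORT_LIMIT"
fi
```

With several hosts sharing the same array port, their totals add up, so the per-host budget shrinks further.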

Frequent Advisor

Re: optimum hba queue depth

Hi, 16 should be fine. Have you added something like this:

options qla2xxx ql2xmaxqdepth=16 qlport_down_retry=10 ql2xloginretrycount=30

in /etc/modprobe.conf?
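One way to confirm the driver actually picked the value up after a reload is to read it back from sysfs. The path below is an assumption (many 2.6 kernels expose module parameters there, but an older SLES9 kernel may not), so treat this as a sketch:

```shell
# Read back the live qla2xxx queue-depth parameter, if the kernel
# exposes module parameters under /sys (not guaranteed on SLES9).
PARAM=${PARAM:-/sys/module/qla2xxx/parameters/ql2xmaxqdepth}
if [ -r "$PARAM" ]; then
    echo "active qla2xxx queue depth: $(cat "$PARAM")"
else
    echo "parameter not exposed in sysfs on this kernel"
fi
```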

Also, are you using multipath? If so, edit /etc/multipath.conf following this schema (VALID FOR MSA2012fc/MSA2212fc/MSA2012i. If you have another model, the values may differ!):


### ADD THE FOLLOWING BLACKLIST AFTER "defaults user_friendly_names yes" SECTION
## Blacklist non-SAN devices
devnode_blacklist {
devnode "^sd[a-z]"
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss!c[0-9]d[0-9]*"
}

# add the following under the "devices" section (UNCOMMENT THE SECTION FIRST!!!)
# For MSA2012fc/MSA2212fc/MSA2012i
device {
vendor "HP"
product "MSA2[02]12fc|MSA2012i"
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
hardware_handler "0"
path_selector "round-robin 0"
path_grouping_policy multibus
failback immediate
rr_weight uniform
no_path_retry 18
rr_min_io 100
path_checker tur
}

Finally, have you rebuilt the initrd image used by the system at boot time? That could definitely be part of the issue: in many cases (depending on the vendor/OS combination), if you don't rebuild it, the driver parameters above are not actually in effect at boot.

You can rebuild the initrd image using "mkinitrd". Make sure you make a backup copy of your current initrd, and modify your grub.conf so that you can choose which initrd to use at boot time. That way, in the unfortunate event of something going wrong at the next boot with the new initrd (a kernel panic can happen), you can still boot using the old one. Do this at your own risk.
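A sketch of the backup step, parametrised so it can be dry-run anywhere; the file-name pattern and the mkinitrd flags follow SUSE conventions but are assumptions, so check them against your own system first:

```shell
# Keep a fallback copy of the current initrd before rebuilding it.
# BOOT and KVER default to the live system but can be overridden.
BOOT=${BOOT:-/boot}
KVER=${KVER:-$(uname -r)}
if [ -f "$BOOT/initrd-$KVER" ]; then
    cp "$BOOT/initrd-$KVER" "$BOOT/initrd-$KVER.bak"
    echo "backup written: $BOOT/initrd-$KVER.bak"
fi
# Rebuild with SUSE-style mkinitrd (flags are an assumption -- check
# "mkinitrd --help" on your system before running):
#   mkinitrd -k "vmlinuz-$KVER" -i "initrd-$KVER"
# Then add a second entry in GRUB's menu.lst pointing at
# initrd-$KVER.bak, so the old image stays bootable.
```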


Re: optimum hba queue depth


Thanks for your reply. Yup, basically I've done all of those. For now I've reverted the change to do away with the errors; back to normal. I'm going to try to figure out the optimal value for the queue depth. BTW, have you checked whether your setup is able to saturate its current queue depth?

Frequent Advisor

Re: optimum hba queue depth

To be honest, the system with the configuration I've shown above is not under heavy load for now, so, at least in my case, it looks like a queue depth of 16 is OK...

Have you tried working on the opposite side, I mean, changing some parameters on the storage path? A possible reason for the 0x20000 could be that no_path_retry is not enough to queue all the stuff that's trying to get written to the disks. So you could try something like this in multipath.conf:

no_path_retry "queue"

(and rebuild the initrd). Doing so, the multipath layer queues all the I/O instead of failing it, so there is more "buffering" room before a path is failed or errors are thrown. Keep in mind that if there is a REAL reason for that 0x20000 error, this setting may make the error go away, but you could still end up with a very slowly responding system, because of the queueing.
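For placement, the line goes inside the device stanza for the array, replacing the numeric no_path_retry value; the vendor/product values here are the ones from the config earlier in the thread, and the stanza is abridged:

```
device {
    vendor "HP"
    product "MSA2[02]12fc|MSA2012i"
    ...
    no_path_retry queue
}
```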

On the HBA side, you could work a bit on the disk timeout settings (check the QLogic website for info)...

Hopefully somebody else in the forum will ass more ideas.

Frequent Advisor

Re: optimum hba queue depth

Add.. I was meaning ADD! :-) Sorry for the funny typo :-)