
SCSI max_queue_depth

 
SOLVED
Tim Medford
Valued Contributor

SCSI max_queue_depth

We are getting ready to attach our rp5470 running 11.11 to an IBM Shark storage array via Fibre Channel. As configured, it will present two 210 GB LUNs.

IBM is telling me that for these device files I should use the scsictl command to increase the max_queue_depth to (256 / number of LUNs), or in other words 128!!

That seems awfully high to me. I've never heard of setting this higher than 16 or 32. Does anyone have experience with this?

Thanks,
Tim

Here is the doc from IBM.

Setting the queue depth for the HP-UX operating system with SCSI adapters

Prerequisites: Before you set the queue depth, connect the host system to the ESS. See "General information about attaching to an open-systems host with SCSI adapters" on page 13.

Steps: Perform the following steps to set the queue depth on an HP-UX operating system:

1. Use the following formula to set the queue depth for all classes of HP-UX: 256 ÷ maximum number of LUNs = queue depth. Note: Although this algorithm implies that the upper limit for the number of LUNs on an adapter is 256, HP-UX supports up to 1024 LUNs.

2. You must monitor configurations with greater than 256 LUNs. You must adjust the queue depth for optimum performance.

3. To update the queue depth by device level, type the following command: scsictl -m queue_depth=21 /dev/rdsk/$dsksf, where /dev/rdsk/$dsksf is the device node.

4. To make a global change to the queue depth, edit the kernel parameter scsi_max_qdepth.
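For the two-LUN presentation described above (256 / 2 = 128), the per-device commands would look something like this; the device file names here are only examples, so substitute the ones ioscan shows for the ESS LUNs:

  # Raise the queue depth on each ESS LUN (example device files)
  scsictl -m queue_depth=128 /dev/rdsk/c4t0d0
  scsictl -m queue_depth=128 /dev/rdsk/c4t0d1

  # Confirm the current setting
  scsictl -a /dev/rdsk/c4t0d0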
4 REPLIES
Alzhy
Honored Contributor

Re: SCSI max_queue_depth

They're correct. In fact:

The EVA, like any other array that presents "large LUNs", will require you to increase the queue depth either globally (via the kernel parameter scsi_max_qdepth) or per LUN via the scsictl command.
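A minimal sketch of both options (this assumes kmtune can adjust scsi_max_qdepth on 11.11; check whether your release applies it immediately or wants a reboot):

  # Global default for every device
  kmtune -s scsi_max_qdepth=128

  # Per-LUN override for just the array device files (example device file)
  scsictl -m queue_depth=128 /dev/rdsk/c4t0d0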

Hakuna Matata.
TwoProc
Honored Contributor
Solution

Re: SCSI max_queue_depth

Keep in mind that the scsictl queue depth settings don't "stick", so you'll have to reapply them each time you reboot the server. Put the commands in a script and make sure it gets called at boot; a sketch follows below.
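A minimal sketch of such a boot script (the device files, the chosen depth, and the script location are assumptions; adjust them for your setup):

  #!/sbin/sh
  # /sbin/init.d/set_qdepth -- reapply per-LUN queue depths after each boot
  # List your real ESS device files here; these are placeholders
  for DSF in /dev/rdsk/c4t0d0 /dev/rdsk/c4t0d1
  do
      /usr/sbin/scsictl -m queue_depth=128 $DSF
  done

Link it from the appropriate /sbin/rc*.d run level so it runs after the device files are available.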

Also, I don't particularly like the "big LUN" approach. I believe you'll get more performance by presenting more LUNs to the system, each representing less disk space. The people from HP's own storage group who recommend and implement these installs, at least the ones I've worked with, feel the same way. However, many very good administrators on this forum (whose opinions I and many others respect greatly) feel that this gain is negligible to non-existent. Without serious study and testing it remains arguable, and even then you could argue about whether your data-access pattern fits the model of the test results.

In any case, using the "sar -d" command during busy load times and watching the average queue depth gives you the information you need to decide whether your queue depth is enough (the default seems to be 8). Keep using that approach after making the changes, to see whether the queue depth needs further tuning once you're up and doing some heavy transactions. As far as I know there are no detrimental effects of setting this "too high", although I'm sure it needlessly consumes some OS memory. How much? A comment from anyone who knows how much memory a unit of scsi queue depth consumes would be helpful at this point.
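For example (the interval and count are arbitrary):

  # Sample disk activity every 5 seconds, 12 times; watch the avque column
  sar -d 5 12

If avque on the ESS LUNs consistently runs at or near the configured queue_depth during heavy load, that is a sign the setting is the limiting factor.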

Also, this can be tuned on the fly, but frankly I'm not a big fan of making changes like this in a live environment while databases are in use. However, if you're going to do some pressure testing before going live on this new storage system (and I recommend that you do), then making these changes on the fly is fine, and in fact recommended, to find out roughly where the setting should be before go-live with the new storage server.

We are the people our parents warned us about --Jimmy Buffett
Alzhy
Honored Contributor

Re: SCSI max_queue_depth

For the EVA and certain "controller-centric" arrays at least (I don't know the IBM ESS array in much depth), a big LUN (hence a big volume/filesystem) versus smaller LUNs assembled/striped into one or several volumes will offer the same performance if the queue depth is properly tuned.

The only advantage of the latter (stripes) is that you won't see any "hot" spots during heavy I/O activity when monitoring via glance, sar, iostat or any other disk monitoring tool.


Hakuna Matata.
Ted Buis
Honored Contributor

Re: SCSI max_queue_depth

I think you are likely to get diminishing returns from increasing queue depth beyond 32, but it shouldn't hurt. However, arrays generally have a maximum queue depth per controller at the array level, so I think you want to divide that number (2048 on an EVA, if I remember correctly, as an example) by the number of LUNs being presented to all hosts by that controller. You don't want to send down more commands than the array can queue. HP-UX queue depth would be per HBA, so you have to take that into account as well. Focus on the worst case the array could face, not the 256 that HP-UX has for a max depth on its end.
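As a worked example (the 2048 figure is the from-memory EVA number above, and the LUN count is invented purely to illustrate the division):

  2048 commands queued per controller / 16 LUNs presented = 128 per-LUN queue_depth ceiling

If several hosts can drive the same controller at once, divide again by the number of hosts so their combined outstanding commands stay within the array's limit.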
Mom 6