ENQUIRY: scsi_max_qdepth
Operating System - HP-UX
09-27-2006 12:27 PM
Hello dear colleagues,
I have a Perl script with about 200 tests to check the build and status of a given HP-UX server. These are the tests that matter to me and come from many years of support:
http://www.circlingcycle.com.au/Unix-sources/Unix-scripts.html
I am trying to add a test for scsi_max_qdepth but lack a good (authoritative) standard. I searched the newsgroups, asked many SAN admins, checked the ITRC forums... Still, I am not convinced what to do.
A) According to a friend who emailed me recently, when an HP-UX file system is above 100 GB, scsi_max_qdepth should be set to 32 (instead of the default of 8).
B) On the other hand, I found this reference in an HP KBR:
Subject: KNOWLEDGE-BASE: HP KBRC00016402 scsi_max_qdepth
USAMKBRC00016402 This document is for HP internal use only
This document has been certified
DocId: USAMKBRC00016402 Updated: 2/25/05 7:00:00 PM
PROBLEM
When should the scsi queue depth be changed for a device?
RESOLUTION
The scsi queue depth can be changed in one of two ways: system-wide or at a per-device level.
The kernel parameter scsi_max_qdepth has a default value of 8 and is a dynamic tunable at 11.11. This is a system-wide parameter and will change the default value for all devices.
Here is an example of the queue depth setting for the boot drive on a system after changing scsi_max_qdepth using SAM:
# scsictl -a /dev/rdsk/c0t6d0
immediate_report = 0; queue_depth = 12
This can also be changed at the device level by using the scsictl command. It is important to note that this change will not survive a reboot; however, a script could be used to reset the values after a reboot. The following example changes the queue depth to 14 for device c0t6d0:
# scsictl -m queue_depth=14 /dev/rdsk/c0t6d0
# scsictl -a /dev/rdsk/c0t6d0
immediate_report = 0; queue_depth = 14
The following is the maximum value for this parameter, at both the device level and the system-wide level:
# scsictl -m queue_depth=255 /dev/rdsk/c0t6d0
It is important to remember that most controllers can only support a maximum value of 256.
This means that the total of 256 per controller is spread across all devices on the card.
It is possible to increase the value on individual devices and have others starved out because the maximum has been reached by the controller. Note that this is a simultaneous value, and it is unlikely that all devices would add up to 256 at the same time. However, this could be an issue if there are hundreds of devices on a particular card.
This parameter can improve performance for a device by queuing up simultaneous I/Os, assuming the device can handle the value. The disk manufacturer is the best source for determining what this value can be set to. In addition, if this value is too high for the device, queue-full messages can be received in the syslog and performance could suffer. This could also result in delayed writes.
It is important to check with the manufacturer before changing this value. Note that most large disk arrays send all I/Os to on-board cache and can handle a large queue.
However, in most cases the default value is adequate.
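The KB excerpt above notes that per-device scsictl changes do not survive a reboot and suggests a script to reset them. Purely as an illustration, a minimal Perl sketch of such a reset helper might look like this; the config file /etc/queue_depth.conf and its format are hypothetical, and scsictl is assumed to live in /usr/sbin:

#!/usr/bin/perl
# Hypothetical sketch: re-apply per-device queue depths after a reboot.
# Assumes a made-up config file /etc/queue_depth.conf with lines like:
#   /dev/rdsk/c0t6d0 14
use strict;
use warnings;

my $conf = '/etc/queue_depth.conf';    # assumed location, not an HP-UX standard
open my $fh, '<', $conf or die "Cannot open $conf: $!\n";

while (my $line = <$fh>) {
    next if $line =~ /^\s*(#|$)/;      # skip comments and blank lines
    my ($rdsk, $depth) = split ' ', $line;
    next unless defined $depth && $depth =~ /^\d+$/;
    system('/usr/sbin/scsictl', '-m', "queue_depth=$depth", $rdsk) == 0
        or warn "scsictl failed for $rdsk\n";
}
close $fh;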
C) Yet another IBM document states this:
Setting the queue depth for the HP-UX operating system with SCSI adapters
Prerequisites: Before you set the queue depth, connect the host system to the ESS.
See General information about attaching to an open-systems host with SCSI adapters on page 13.
Steps: Perform the following steps to set the queue depth on an HP-UX operating system:
1. Use the following formula to set the queue depth for all classes of HP-UX:
256 ÷ maximum number of LUNs = queue depth
Note: Although this algorithm implies that the upper limit for the number of LUNs on an adapter is 256, HP-UX supports up to 1024 LUNs.
2. You must monitor configurations with greater than 256 LUNs.
You must adjust the queue depth for optimum performance.
3. To update the queue depth by device level, type the following command:
scsictl -m queue_depth=21 /dev/rdsk/$dsksf
where /dev/rdsk/$dsksf is the device node.
4. To make a global change to the queue depth, edit the kernel parameter scsi_max_qdepth.
The EVA, like any other "large LUN" presentation array, will require you to increase the queue depth either globally (via the kernel parameter scsi_max_qdepth) or per LUN via the scsictl command.
However, arrays generally have a maximum queue depth per controller at the array level, so I think you want to divide that number (2048 on an EVA, if I remember correctly, as an example) by the number of LUNs being presented to all hosts by that controller.
You don't want to send down more commands than the array can queue. The HP-UX queue depth would be per HBA, so you have to take that into account as well. Focus on the worst case the array could face, not the 256 that HP-UX has as a maximum depth on its end.
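For illustration, here are the two rules of thumb above side by side: the IBM formula divides the adapter's 256 commands by the LUN count, and the array-side view divides the controller's queue limit (the 2048 quoted for an EVA) by the LUNs it presents to all hosts. The LUN counts in this Perl sketch are made-up examples, not recommendations:

#!/usr/bin/perl
# Rough sizing sketch based only on the rules of thumb quoted in this thread;
# all numbers below are illustrative examples, not vendor recommendations.
use strict;
use warnings;
use POSIX qw(floor);

my $luns_on_adapter    = 12;     # LUNs visible through one HP-UX adapter (example)
my $luns_on_controller = 96;     # LUNs presented by one array controller to ALL hosts (example)
my $adapter_limit      = 256;    # per-adapter command total cited above
my $array_limit        = 2048;   # per-controller queue limit quoted above for an EVA

my $host_side  = floor($adapter_limit / $luns_on_adapter);    # IBM-style: 256 / LUNs
my $array_side = floor($array_limit / $luns_on_controller);   # array-side worst case

# Use the smaller of the two so the array is never sent more than it can queue.
my $suggested = $host_side < $array_side ? $host_side : $array_side;
printf "host-side %d, array-side %d, suggested queue_depth %d\n",
       $host_side, $array_side, $suggested;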
D) Another person reported this in the newsgroup:
"High" Q Depth Settings (in my standards q depths greater than 16) mostly apply to environments that adopt very large LUN size standards and adopt a policy of a filesystem or Server Volume Unit per LUN.
For XP 1024 (or XP12000)? The per port maximum is I think still 1024. That is why for this kind of array - I always stripe accross LUNs presented on different front-end ports and still bump up q depth from the 11i default of 16. Hitachi suggested staying at 8 on our 9960 (XP512 equivalent)...
Basically, I am trying to add a safe test to the OAT script.
Maybe, to get started, I should simply check whether there are more than 256 LUNs presented through a specific controller and warn if that is the case?
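Something along these lines could be a rough starting point for that check; it is only a sketch that counts /dev/dsk/cXtYdZ device files per controller number in ioscan -fnC disk output, with the 256 figure quoted above as the threshold (alternate links are counted against their own controllers):

#!/usr/bin/perl
# Sketch: warn when more than 256 LUNs show up behind a single controller instance.
use strict;
use warnings;

my %luns_per_ctrl;
open my $io, '-|', '/usr/sbin/ioscan -fnC disk'
    or die "Cannot run ioscan: $!\n";
while (my $line = <$io>) {
    # ioscan -fn prints device files such as /dev/dsk/c3t0d0 on its detail lines
    while ($line =~ m{/dev/dsk/c(\d+)t\d+d\d+}g) {
        $luns_per_ctrl{$1}++;
    }
}
close $io;

for my $ctrl (sort { $a <=> $b } keys %luns_per_ctrl) {
    my $count = $luns_per_ctrl{$ctrl};
    if ($count > 256) {
        print "WARN controller c$ctrl presents $count LUNs (more than 256)\n";
    } else {
        print "PASS controller c$ctrl presents $count LUNs\n";
    }
}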
Do you have a standard you follow? Are any of you admins worrying about it, or do you simply wait for problems and then take corrective action?
Thank you for any comments,
VK2COT
VK2COT - Dusan Baljevic
09-27-2006 03:14 PM
Solution
You are not going to get a single answer.
The answer 'depends'.
To quote from:
http://techsolutions.hp.com/en/B2355-60105/scsi_max_qdepth.5.html
"The number of commands that can be outstanding varies by device, and is not known to HP-UX."
So here is a single parameter trying to control something which varies per device.
Anything you do will be a compromise.
The default of 8 is middle of the road, on the low side.
It is clearly NOT enough for a single (large) LUN on a large (XP, EMC, EVA) controller with lots of disks behind it. For best performance one should always be able to have an IO in the queue for each disk.
To me this suggests a value of 50+ for this case.
However, if a system mangler decides to present 50+ small (10 GB?) LUNs from the same pool of disks, then even the default gives the controller a good few IOs to keep track of (50*8=400).
What to do in a mixed application setup where one part calls for a few big LUNs and another calls for lots of little LUNs?!
[Set up the box as a virtual server with each HP-UX client tuned for the application it runs :-]
I suspect that all you can do in general is report any excess. Here that probably means a queue depth less than 8 or larger than 100 (an arbitrary number). The script should not report an excess as wrong, just highlight it for review.
fwiw,
Hein.
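A tiny Perl sketch of that "highlight for review, do not fail" idea, using Hein's arbitrary bounds of 8 and 100 (the device names and depths below are just examples):

#!/usr/bin/perl
# Sketch: flag queue depths outside an arbitrary 8..100 range
# without treating them as errors.
use strict;
use warnings;

sub review_queue_depth {
    my ($device, $depth) = @_;
    if ($depth < 8 || $depth > 100) {
        return "INFO $device queue_depth=$depth is outside the usual range; review";
    }
    return "PASS $device queue_depth=$depth";
}

# Example values only
print review_queue_depth('/dev/rdsk/c0t6d0', 8),   "\n";
print review_queue_depth('/dev/rdsk/c5t0d1', 255), "\n";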
09-28-2006 12:49 PM
Re: ENQUIRY: scsi_max_qdepth
Hello Hein and others,
Thanks for your comments. In the end, I added the scsictl tests to my OAT script and also a check for scsi_max_qdepth.
Here is what it typically reports when running:
PASS PV /dev/dsk/c0t0d0 available
INFO PV /dev/dsk/c0t0d0 queue depth
immediate_report = 1; queue_depth = 8
PASS PV /dev/dsk/c0t0d0 defined in LVM (/etc/lvmtab)
PASS PV /dev/dsk/c0t1d0 available
INFO PV /dev/dsk/c0t1d0 queue depth
immediate_report = 1; queue_depth = 8
PASS PV /dev/dsk/c0t1d0 defined in LVM (/etc/lvmtab)
...
PASS PV /dev/dsk/c3t0d0 has alternate link /dev/dsk/c21t0d0
PASS PV /dev/dsk/c3t0d0 available
INFO PV /dev/dsk/c3t0d0 queue depth
immediate_report = 1; queue_depth = 8
PASS PV /dev/dsk/c3t0d0 defined in LVM (/etc/lvmtab)
...
PASS Kernel parameter scsi_max_qdepth set to 8 (recommended minimum is 8)
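For anyone curious how such a report might be gathered, here is a stripped-down Perl sketch (an illustration only, not the actual OAT code). It assumes scsictl in /usr/sbin, kmtune for the kernel parameter (as on 11.11; kctune on later releases), and uses strings(1) on /etc/lvmtab as a crude way to list the PVs:

#!/usr/bin/perl
# Stripped-down sketch of the data gathering behind a report like the one above.
use strict;
use warnings;

# List PV block devices known to LVM (strings(1) pulls the paths out of /etc/lvmtab)
my @pvs = grep { m{^/dev/dsk/} } split /\n/,
          qx(/usr/bin/strings /etc/lvmtab 2>/dev/null);

for my $pv (@pvs) {
    (my $raw = $pv) =~ s{/dev/dsk/}{/dev/rdsk/};          # block device -> raw device
    my $out = qx(/usr/sbin/scsictl -a $raw 2>/dev/null);
    chomp $out;
    print "INFO PV $pv queue depth\n$out\n" if $out;
}

# Kernel-wide default: kmtune on 11.11, kctune on later releases
my $km = qx(/usr/sbin/kmtune -q scsi_max_qdepth 2>/dev/null);
if ($km =~ /scsi_max_qdepth\s+(\d+)/) {
    print "PASS Kernel parameter scsi_max_qdepth set to $1\n";
} else {
    print "INFO Could not determine scsi_max_qdepth\n";
}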
Best wishes,
VK2COT
VK2COT - Dusan Baljevic