03-06-2009 09:24 AM
qdepth and queued IO
9000/800/rp8420/ B.11.11
We have noticed a performance issue with our server, with contention identified at the disk level (EMC Symmetrix disks).
We had two HBAs in the system talking to the array and noticed a lot of queued I/Os at both HBAs. As a workaround we added two more HBAs, which got rid of the "queued IO" situation because the two new paths share the load.
The limitation we see on the OS side is the queue depth. With a queue depth of eight and two HBAs, the server can drive 16 concurrent I/Os to each logical volume/LUN. That is why adding two more HBAs helped: the host can now drive 32 concurrent I/Os.
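Spelling out that arithmetic (the queue depth applies per LUN path, i.e. per c#t#d# device file):
concurrent I/Os per LUN = scsi_max_qdepth x number of paths
8 x 2 = 16 with two HBAs, 8 x 4 = 32 with four HBAs
so a queue depth of 16 on the original two paths would have given the same 32.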
root [/home/kumarts] kmtune|grep -i depth
scsi_max_qdepth 8 Y 8
We need your advice on how to tune this at the OS level following HP's best practices, so that we can get the same benefit without additional paths.
Let me know if any more details are required.
03-09-2009 10:05 AM
Re: qdepth and queued IO
http://searchoracle.techtarget.com/generic/0,295582,sid41_gci1050506,00.html
03-09-2009 10:14 AM
Re: qdepth and queued IO
Performance problems like this on EMC-presented disks generally come from two areas:
1) EMC settings. At my last job, the customer let EMC make configuration changes without documentation, and performance suddenly degraded. If EMC has been allowed to change the array configuration, the changes need to be documented and possibly reversed.
2) Incorrect LUN configuration. Running a write-heavy database on a RAID 5 partition vastly slows down writes (see the worked count below). Changing to RAID 1 or RAID 1/0 for index data and redo logs (Oracle terms) often solves these problems.
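To see why, count the back-end disk operations behind a single small random write (the classic RAID 5 read-modify-write penalty):
RAID 5: read old data + read old parity + write new data + write new parity = 4 I/Os
RAID 1/0: write to each half of the mirror = 2 I/Os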
There is some improvement that can be found in the kernel configuration, but these other areas do merit attention.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
03-09-2009 11:01 AM
Re: qdepth and queued IO
I suspect that the EMC can handle more than it is being asked to do, and that it is the max queue depth that constrains the load.
Why not try to increase scsi_max_qdepth, either overall or just for the devices on those HBAs?
Try a jump to 30; going to 60 or 100 may well be reasonable.
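Something along these lines, for example; 30 is just a starting point, and you should check kmtune(1M) on your system first:
# query the current value
kmtune -q scsi_max_qdepth
# scsi_max_qdepth is a dynamic tunable, so -u should apply the new
# value to the running kernel as well as record it for the next boot
kmtune -u -s scsi_max_qdepth=30
# then watch avque/avwait on the affected devices in sar -d
sar -d 5 12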
Good luck,
Hein.
03-09-2009 11:32 AM
Re: qdepth and queued IO
03-09-2009 12:19 PM
Re: qdepth and queued IO
The optimal sustainable queue depth is really a function of the controller's ability to juggle requests, not of the HBA.
There is, however, a subtle 'fairness' factor involving the HBA, so you may want to take that into consideration. When the HBA approaches throughput saturation (anything over, say, 50% of its designed speed), an artificially high queue depth for one disk (or group of disks) may end up limiting access to the HBA for users of other disks.
While we zoom in on the details... the intro line mentions "contention identified at the disk level (EMC Symmetrix disks)". Did you actually mean the real disks behind the controllers, or the LUNs presented by those controllers?
In other words, where and how was the problem identified: using HP-UX tools 'pre-HBA', or using EMC tools on the controller?
Regards,
Hein.
03-10-2009 01:19 AM
Re: qdepth and queued IO
i) Understand how many FA ports your I/O is spread across on the Symmetrix.
ii) Get an indication from EMC of how many outstanding I/Os each FA port can handle (good luck on that one!).
iii) Determine whether any other hosts are using the same FAs, and look at the number of LUNs and the queue depths of each LUN on those other hosts.
iv) Now you have some simple math to do. Make sure that the total load all hosts can place on any given FA port (the product of the number of LUN paths on that port and their queue depths) is no higher than the number of outstanding I/Os the port can handle. So, for example, if I have 16 LUNs presented out of one FA port and the port can handle 1024 outstanding I/Os, then I could set my queue depth to 64 for all 16 LUNs. (Remember to take all I/O paths into account: if I have 2 HBAs attached to a fabric, both of which can talk to the same FA port, then the two paths count as separate LUNs; a worked version follows below.)
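The same example with the two-path caveat applied (the numbers are still made up):
FA port limit: 1024 outstanding I/Os
LUNs on the port: 16
paths per LUN: 2 (both HBAs zoned to the same FA port)
LUN paths on the port: 16 x 2 = 32
per-LUN-path queue depth: 1024 / 32 = 32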
You can alter SCSI queue depths per LUN using the scsictl command (sketch below). I seem to recall this is non-persistent across reboots though, so you'll also need to implement it in a startup script somewhere.
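A minimal sketch of that per-LUN approach; the device file and the value 32 are examples only (see scsictl(1M)):
# raise the queue depth for one LUN path on the running system
scsictl -m queue_depth=32 /dev/rdsk/c96t0d0
# display the device's mode parameters to verify
scsictl -a /dev/rdsk/c96t0d0
Since the setting is lost at reboot, the same scsictl calls would go into an /sbin/init.d startup script.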
I don't know if you also have EMC PowerPath on this system and what difference that might make to all this - a question for EMC I guess.
Whether all this will make much real difference to performance, I'm not so sure... generally this only helps when there's a lot of asynchronous IO going on (otherwise processes are waiting for IO to complete anyway before issuing more IO).
HTH
Duncan
I am an HPE Employee
03-10-2009 11:52 AM
Re: qdepth and queued IO
I see the host's HBA level and the array controller level as two separate things (just to make sure: controller != HBA).
03-11-2009 02:32 AM
Re: qdepth and queued IO
[/root] symmask hba list
Identifier Type Adapter Physical Device Path Dir:P
---------------- ----- ---------------- ------------------------ -----
50060b00000b3510 Fibre 0-0-8-1-0 /dev/rdsk/c96t0d0 04D:0
/dev/rdsk/c108t0d0 04A:0
/dev/rdsk/c110t0d0 04B:0
50060b00000b32ae Fibre 0-0-10-1-0 /dev/rdsk/c98t0d0 13D:0
/dev/rdsk/c112t0d0 13A:0
/dev/rdsk/c114t0d0 13B:0
We later added two more cards, but they still go to the same FAs (13 and 04). That leads me to believe there is no contention at the FA/controller level:
50060b00000af328 Fibre 1-0-10-1-0 /dev/rdsk/c120t0d0 04A:0
/dev/rdsk/c122t0d0 04B:0
50060b00000b3c4c Fibre 1-0-8-1-0 /dev/rdsk/c116t0d0 13A:0
/dev/rdsk/c118t0d0 13B:0
03-11-2009 02:40 AM
Re: qdepth and queued IO
It's not clear what you are now asking for. What help do you still require here?
HTH
Duncan
I am an HPE Employee