Operating System - HP-UX
09-26-2006 03:02 AM
fcmsutil queue stats
Hi there
I have a question of a theoretical nature - hope you can help.
I'm trying to determine which queues are involved in the I/O subsystem and what their capacities are. I hope I've got the sequence right.
When an I/O reaches the device driver, it enters the "wait queue", which, as far as I know, has no limit.
When the device driver processes the I/O, it moves to the "active queue". Is there a limit on the number of I/Os that can be "in flight"? If so, what determines this value?
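(While digging I did come across the scsi_max_qdepth kernel tunable, which I suspect is what bounds this active queue. That's only a guess on my part, but the current value can be checked with:
#kmtune -q scsi_max_qdepth
Confirmation of how it relates to the queues below would be welcome.)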
On the HBA level, I found the following statistics:
#fcmsutil /dev/fcd0 stat
reports a "Request Queue Full" value. Does this value indicate the number of I/O requests that the HBA can accept from the device driver? If so, what is the limit? If not, where does this fit into the picture?
#fcmsutil /dev/fcd0 devstat *nport_id*
reports a "Send failure - Request queue was full" value. How does this differ from the request queue reported above? Again, what are the limits?
The above command also reports an "IOs failed due to SCSI queue full" value. It almost sounds as if this queue sits at a higher level than the request queues above. Where does it fit into the picture? What are the limits?
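(Side note: to get the nport IDs to feed to devstat, I used what I believe is the usual subcommand for listing the remote ports the card can see, e.g.:
#fcmsutil /dev/fcd0 get remote all
in case anyone wants to reproduce this.)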
I know these are probably not very practical questions and may even be considered silly, but I would REALLY like to understand the I/O subsystem down to the bits-and-bytes level. Even a hint as to where I can find more information would be appreciated.
2 REPLIES
09-26-2006 03:13 AM
Re: fcmsutil queue stats
Shalom,
There is no such thing as an endless queue. The card has a limit on how many requests it can hold, and the OS acts as a buffer for I/O requests as well.
This part of your question:
>>>
reports a "Send failure - Request queue was full" value. How does this differ from the request queue reported above? Again, what are the limits?
The above command also reports an "IOs failed due to SCSI queue full" value. It almost sounds if this queue is on a higher level than the request queues above. Where does this queue fit into the picture? What are the limits?
>>>
indicates to me that your card is running at or near capacity. You may find that I/O wait times are high. To see them you need glance/gpm or the non-graphical equivalent,
http://www.hpux.ws/system.perf.sh
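If you don't have glance installed, sar will also give a quick per-device view of queueing; for example (interval and count are arbitrary):
#sar -d 5 10
Watch the avque and avwait columns to see whether requests are stacking up in front of a device.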
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
09-26-2006 11:53 PM
Re: fcmsutil queue stats
Thanks SEP
If there is no such thing as an endless queue, what is the limit of the wait queue?
And the active queue?
The values reported by fcmsutil are all 0 for the metrics mentioned above, which means the card has never encountered any of these conditions.
What I'm really looking for is what the limits of these queues are, and how the request queue for the card (stat), the request queue for the nport (devstat), and the SCSI queue differ.
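In the meantime I have been experimenting at the SCSI layer. If I understand it correctly (an assumption on my part), the per-LUN active queue is bounded by scsi_max_qdepth unless overridden per device. For example:
#scsictl -a /dev/rdsk/c4t0d1
shows the current queue_depth for a LUN (the path is just an example from my system), and
#scsictl -m queue_depth=16 /dev/rdsk/c4t0d1
would raise it for that LUN alone. But I still can't map these onto the card and nport request queues.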
Thanks again.