Operating System - HP-UX
02-10-2005 02:37 AM
Wait I/O
What is wait I/O, exactly? And how is wait I/O created?
2 REPLIES
02-10-2005 02:43 AM
Re: Wait I/O
J,
to quote Vincent Fleming from HP:
"Processes waiting on I/O spin, which means that when they get a timeslice to run, they check whether the I/O has completed; if not, the process idles until the timeslice expires, in the hope that the I/O will complete before the timeslice ends. This behavior consumes CPU time.
WAIT IO is a measurement of this CPU consumption.
Now, WAIT IO time can be caused by several factors. The most common cause is that the disk array is overloaded, or that you have configured it in a non-optimal way, such as putting your logs and dataspaces on a single mirror pair.
So, if you are seeing high WAIT IO (over 10% is high in my opinion), you need to take a good look at your disk array and its configuration."
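On HP-UX you can watch that figure with sar; the %wio column is the wait-on-I/O percentage. A minimal example (the 5-second interval and 12 samples here are arbitrary choices):

# Report CPU utilization every 5 seconds, 12 samples;
# the %wio column is the wait-on-I/O percentage discussed above.
sar -u 5 12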
Hope this is the answer you are looking for.
Regards
02-11-2005 03:56 PM
Re: Wait I/O
To me, waiting on I/O is really queuing theory. Think about it:
read( block #3000 )
Does the driver service your request immediately? No! There are other requests to be serviced before yours, so you get on the wait list. On the other hand, if the driver is smart and block #3000 happens to be optimal to read, you might get it before someone else's I/O request. Think of people waiting to get on a bus: you might board before someone else, but generally speaking there is a queue, and requests are serviced in the order they arrived.
The key is to prevent this queue, or shrink it, by moving some data to other spindles. Yes, spreading the data across more spindles can improve your wait-on-I/O, because each disk (or spindle) can then respond to the disk controller's read/write requests independently. To get to a disk, you go through the controller.
Not sure if this helps, but think "queue" and "bus": the bus is the disk controller, and the queue is your I/O request for a block (of course, other folks have read or write requests pending at the same time).
If you are seeing large wait-I/O times, your performance is degrading; you most likely have a so-called "hot spot" or "hot disk" area. Search google.com for these terms.
See the "sar" command for disk options.
Golf is a Good Walk Spoiled, Mark Twain.