<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Wait I/O in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-i-o/m-p/3483039#M213656</link>
    <description>J,&lt;BR /&gt;to quote Vincent Fleming from HP:&lt;BR /&gt;"Processes waiting on I/O spin; which means that when they get a timeslice to run, they check whether the I/O has completed, and if not, they idle until the timeslice expires, in the hope that the I/O will complete before the timeslice ends. This behavior consumes CPU time.&lt;BR /&gt;&lt;BR /&gt;WAIT IO is a measurement of this CPU consumption.&lt;BR /&gt;&lt;BR /&gt;Now, WAIT IO time can be caused by several factors. The most common cause is that the disk array is overloaded, or you have configured it in a non-optimal way - such as putting your logs and dataspaces on a single mirror pair.&lt;BR /&gt;&lt;BR /&gt;So, if you are seeing high WAIT IO (over 10% is high in my opinion), you need to take a good look at your disk array and its configuration."&lt;BR /&gt;&lt;BR /&gt;Hope this is the answer you are looking for.&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Thu, 10 Feb 2005 10:43:53 GMT</pubDate>
    <dc:creator>Peter Godron</dc:creator>
    <dc:date>2005-02-10T10:43:53Z</dc:date>
    <item>
      <title>Wait I/O</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-i-o/m-p/3483038#M213655</link>
      <description>What exactly is a wait I/O? How is a wait I/O created?</description>
      <pubDate>Thu, 10 Feb 2005 10:37:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-i-o/m-p/3483038#M213655</guid>
      <dc:creator>J. Falissard</dc:creator>
      <dc:date>2005-02-10T10:37:45Z</dc:date>
    </item>
    <item>
      <title>Re: Wait I/O</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-i-o/m-p/3483039#M213656</link>
      <description>J,&lt;BR /&gt;to quote Vincent Fleming from HP:&lt;BR /&gt;"Processes waiting on I/O spin; which means that when they get a timeslice to run, they check whether the I/O has completed, and if not, they idle until the timeslice expires, in the hope that the I/O will complete before the timeslice ends. This behavior consumes CPU time.&lt;BR /&gt;&lt;BR /&gt;WAIT IO is a measurement of this CPU consumption.&lt;BR /&gt;&lt;BR /&gt;Now, WAIT IO time can be caused by several factors. The most common cause is that the disk array is overloaded, or you have configured it in a non-optimal way - such as putting your logs and dataspaces on a single mirror pair.&lt;BR /&gt;&lt;BR /&gt;So, if you are seeing high WAIT IO (over 10% is high in my opinion), you need to take a good look at your disk array and its configuration."&lt;BR /&gt;&lt;BR /&gt;Hope this is the answer you are looking for.&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 10 Feb 2005 10:43:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-i-o/m-p/3483039#M213656</guid>
      <dc:creator>Peter Godron</dc:creator>
      <dc:date>2005-02-10T10:43:53Z</dc:date>
    </item>
    <item>
      <title>Re: Wait I/O</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-i-o/m-p/3483040#M213657</link>
      <description>to me, waiting on IO is really queuing theory.. think about it..&lt;BR /&gt;&lt;BR /&gt;read( block #3000 )&lt;BR /&gt;&lt;BR /&gt;does the driver service your request immediately? no!  there are other requests to be serviced before yours.. so you get on the wait list.  on the other hand.. if the driver is smart, and "block #3000" can be read optimally, then you might get it before someone else's io request.  think of people waiting to get on a bus.. you might get on before someone else.. &lt;BR /&gt;&lt;BR /&gt;but, generally speaking.. there is a "queue" to get on the bus, and your request will be serviced in the order it arrived.&lt;BR /&gt;&lt;BR /&gt;the key is to prevent this "queue", or shrink it, by moving some data to other "spindles".  yes, spreading the data across more spindles can improve your "wait-on-io".  the thought is, most likely these disks (or spindles) can then keep up with the disk controller's read/write requests.  in order to get to a disk, you go through the controller to the disks.&lt;BR /&gt;&lt;BR /&gt;not sure if this helps you, but think "queue" and "bus".. the bus can be the disk controller, and the "queue" is your IO request for a block (of course other folks also have read or write requests at the same time).&lt;BR /&gt;&lt;BR /&gt;if you have large wait-io times degrading your performance, you most likely have a so-called "hot-spot" or "hot-disk" area.  search google.com for these terms.&lt;BR /&gt;&lt;BR /&gt;see the "sar" command for disk options.</description>
      <pubDate>Fri, 11 Feb 2005 23:56:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-i-o/m-p/3483040#M213657</guid>
      <dc:creator>D Block 2</dc:creator>
      <dc:date>2005-02-11T23:56:26Z</dc:date>
    </item>
  </channel>
</rss>