<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: IO wait in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824520#M87350</link>
    <description>Hi Raj,&lt;BR /&gt;&lt;BR /&gt;What are your mount options on the filesystem(s) you are using for the database?&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
    <pubDate>Sun, 13 Oct 2002 22:21:42 GMT</pubDate>
    <dc:creator>John Poff</dc:creator>
    <dc:date>2002-10-13T22:21:42Z</dc:date>
    <item>
      <title>IO wait</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824515#M87345</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;On my database server, sar reports 40-50% I/O wait. Using GlancePlus we verified that everything else looks normal. What is causing the I/O wait? Could it be the database design? How can we see which file systems are being accessed most often? Any suggestions on how to debug further?&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Sat, 12 Oct 2002 19:47:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824515#M87345</guid>
      <dc:creator>SAM_24</dc:creator>
      <dc:date>2002-10-12T19:47:31Z</dc:date>
    </item>
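    <!--
    The thread turns on whether sar's %wio figure is trustworthy. As a minimal sketch (the sample lines below are made up, not real sar output), the averaging itself can be checked with awk; on HP-UX, `sar -u 5 5` prints %usr %sys %wio %idle after a timestamp, so %wio is the fourth field:

    ```shell
    # Feed made-up "sar -u"-style lines to awk and average the %wio
    # column (field 4 after the timestamp).
    printf '%s\n' \
      '10:00:00  12  8  45  35' \
      '10:00:05  10  9  48  33' \
      '10:00:10  11  7  44  38' |
    awk '{ sum += $4; n++ } END { printf "avg %%wio = %.1f\n", sum / n }'
    # prints: avg %wio = 45.7
    ```

    Swapping the printf for a real `sar -u 5 5` pipeline would average live samples instead.
    -->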
    <item>
      <title>Re: IO wait</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824516#M87346</link>
      <description>&lt;BR /&gt;If I remember correctly, there are some patches that correct sar reporting the wrong I/O activity. I'd suggest first applying one of the available patch bundles, preferably a newer one.&lt;BR /&gt;&lt;BR /&gt;live free or die&lt;BR /&gt;harry</description>
      <pubDate>Sat, 12 Oct 2002 19:54:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824516#M87346</guid>
      <dc:creator>harry d brown jr</dc:creator>
      <dc:date>2002-10-12T19:54:54Z</dc:date>
    </item>
    <item>
      <title>Re: IO wait</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824517#M87347</link>
      <description>If glance does not agree with sar about the wait I/O, then try the patch database for patches.&lt;BR /&gt;&lt;BR /&gt;If it does agree, however, then it probably has something to do with either your database design/layout, or your hardware (disk array) or its configuration (including LVM config).&lt;BR /&gt;&lt;BR /&gt;Good luck!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sat, 12 Oct 2002 20:40:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824517#M87347</guid>
      <dc:creator>Vincent Fleming</dc:creator>
      <dc:date>2002-10-12T20:40:01Z</dc:date>
    </item>
    <item>
      <title>Re: IO wait</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824518#M87348</link>
      <description>No, not only sar.&lt;BR /&gt;&lt;BR /&gt;In the vmstat output, the b column always shows 10-14 blocked jobs. It is not a patch problem; we are up to date on patches.&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Sun, 13 Oct 2002 06:47:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824518#M87348</guid>
      <dc:creator>SAM_24</dc:creator>
      <dc:date>2002-10-13T06:47:53Z</dc:date>
    </item>
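    <!--
    The post above reads the b column of vmstat (processes blocked waiting on I/O). A small sketch of summarizing that column over several samples; the three lines below are invented, echoing the 10-14 range reported, and on HP-UX b is the second field of each vmstat data line:

    ```shell
    # Summarize the blocked-process column from made-up vmstat samples:
    # field 2 is 'b', the number of processes blocked waiting on I/O.
    printf '%s\n' \
      ' 2 12 0' \
      ' 1 14 0' \
      ' 3 10 0' |
    awk '{ if ($2 > max) max = $2; sum += $2; n++ }
         END { printf "blocked: avg %.1f, max %d\n", sum / n, max }'
    # prints: blocked: avg 12.0, max 14
    ```

    A consistently non-zero b, as here, corroborates sar: processes really are queued on I/O rather than being misreported.
    -->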
    <item>
      <title>Re: IO wait</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824519#M87349</link>
      <description>You may wish to tell us a little more information:&lt;BR /&gt;&lt;BR /&gt;OS and patch level&lt;BR /&gt;model of your server&lt;BR /&gt;what type of disk(s) are being utilised&lt;BR /&gt;what type of connectivity</description>
      <pubDate>Sun, 13 Oct 2002 21:30:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824519#M87349</guid>
      <dc:creator>Michael Tully</dc:creator>
      <dc:date>2002-10-13T21:30:06Z</dc:date>
    </item>
    <item>
      <title>Re: IO wait</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824520#M87350</link>
      <description>Hi Raj,&lt;BR /&gt;&lt;BR /&gt;What are your mount options on the filesystem(s) you are using for the database?&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Sun, 13 Oct 2002 22:21:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824520#M87350</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-10-13T22:21:42Z</dc:date>
    </item>
    <item>
      <title>Re: IO wait</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824521#M87351</link>
      <description>I'm much interested in the wizards' response to this also, being in a similar situation... Management is used to watching I/O wait to project upgrades. Since the last upgrade (AL-&amp;gt;Fabric etc.), throughput and response seem improved, but I/O wait has increased.&lt;BR /&gt;&lt;BR /&gt;HP-UX 11.00 March 2002 bundles applied&lt;BR /&gt;N-4000 4 CPUs &amp;lt; 50% 6 GB RAM&lt;BR /&gt;4 Tachyon A5158a HBAs&lt;BR /&gt;EMC DS16B (Brocade) FC fabric switches (1GB)&lt;BR /&gt;EMC Symmetrix 8730&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;My theory:&lt;BR /&gt;I think the overall latency to satisfy disk I/O requests has been reduced. I suspect that sar shows the serialization of the I/Os from multiple systems to the Symmetrix FA occurring at the switch level. I don't think the amount of time in I/O wait would be noticeable if the system were busier and had something else to do while the switch merges the traffic into single lanes (6 MB peak switch throughput).&lt;BR /&gt;Now I need to prove what's happening (how?) and find a more valid metric, if sar I/O wait is no longer a valid indicator of I/O performance.</description>
      <pubDate>Mon, 14 Oct 2002 12:23:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824521#M87351</guid>
      <dc:creator>Kirby A. Joss</dc:creator>
      <dc:date>2002-10-14T12:23:11Z</dc:date>
    </item>
    <item>
      <title>Re: IO wait</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824522#M87352</link>
      <description>&lt;BR /&gt;Hmm, we run EMC and Brocade switches on lots and lots of HP servers of all classes and OS versions (10.20/11/11i), and we never see wio% above single digits (i.e. &amp;lt;10) on any server, even at peak times.&lt;BR /&gt;&lt;BR /&gt;Normally a wio% of &amp;lt;10 is fine, 10-20 means your disk subsystem is having trouble keeping up with I/O requests, and &amp;gt;20 means you're I/O bound and performance is suffering considerably as a result. If you have a wio% of 40-50, then either something is wrong with sar's reporting (does glance confirm it or not?), or, if sar and glance both confirm the 40-50, then you're completely I/O bound and should be looking into improving your I/O throughput (EMC cache sizes/config/weighting), more channels, etc.&lt;BR /&gt;I would certainly be very worried if my wio% was that high.&lt;BR /&gt;</description>
      <pubDate>Mon, 14 Oct 2002 13:44:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824522#M87352</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2002-10-14T13:44:47Z</dc:date>
    </item>
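    <!--
    Stefan's rule of thumb above can be written down as a tiny classifier; the function name and thresholds are just a restatement of his post, not an official metric:

    ```shell
    # Classify a %wio reading per the rule of thumb in the post above:
    # under 10 is fine, 10 to 20 means the disk subsystem is struggling
    # to keep up, over 20 means the system is I/O bound.
    classify_wio() {
      if [ "$1" -lt 10 ]; then
        echo "fine"
      elif [ "$1" -le 20 ]; then
        echo "struggling"
      else
        echo "io-bound"
      fi
    }
    classify_wio 45   # the 40-50 range from this thread prints "io-bound"
    ```
    -->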
    <item>
      <title>Re: IO wait</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824523#M87353</link>
      <description>Performance management is the pursuit of bottlenecks.  Once you clear one bottleneck, your system will bottleneck on something else - ALWAYS.  The trick is to get the system to bottleneck on the CPU speed... not on something like a lack of memory (which causes paging), disk I/O, LAN, etc.  So, you optimize your peripherals as best you can, removing all bottlenecks until you are CPU bound.  &lt;BR /&gt;&lt;BR /&gt;When you are CPU bound, your system is going as fast as it can.&lt;BR /&gt;&lt;BR /&gt;So, I think you guys misunderstand what WAIT IO means...&lt;BR /&gt;&lt;BR /&gt;Processes waiting on I/O spin; that is, when they get a timeslice to run, they check whether the I/O has completed, and if not, they idle until the timeslice expires, in the hope that the I/O will complete before the timeslice ends.  This behavior consumes CPU time.&lt;BR /&gt;&lt;BR /&gt;WAIT IO is a measurement of this CPU consumption.&lt;BR /&gt;&lt;BR /&gt;Now, WAIT IO time can be caused by several factors.  The most common cause is that the disk array is overloaded, or you have configured it in a non-optimal way - such as putting your logs and dataspaces on a single mirror pair.&lt;BR /&gt;&lt;BR /&gt;So, if you are seeing high WAIT IO (over 10% is high in my opinion), you need to take a good look at your disk array and its configuration.&lt;BR /&gt;&lt;BR /&gt;You may not have striped over a sufficient number of volumes (not using enough drives), or the disk array may have an internal bottleneck, such as too many systems hitting the same FC port, a backplane bottleneck, etc.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Let us know how you make out.&lt;BR /&gt;&lt;BR /&gt;Good luck!&lt;BR /&gt;</description>
      <pubDate>Mon, 14 Oct 2002 14:00:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/io-wait/m-p/2824523#M87353</guid>
      <dc:creator>Vincent Fleming</dc:creator>
      <dc:date>2002-10-14T14:00:23Z</dc:date>
    </item>
  </channel>
</rss>

