<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Wait IO in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390894#M865366</link>
    <description>Hi,&lt;BR /&gt;Go through this doc as well...&lt;BR /&gt;Regards,</description>
    <pubDate>Fri, 01 Oct 2004 06:21:42 GMT</pubDate>
    <dc:creator>Bharat Katkar</dc:creator>
    <dc:date>2004-10-01T06:21:42Z</dc:date>
    <item>
      <title>Wait IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390890#M865362</link>
      <description>Specs:&lt;BR /&gt;OS:     HP-UX 11.11 64-bit&lt;BR /&gt;MODEL:  9000/800/S16K-A&lt;BR /&gt;CPU:    4 @ 875 MHz&lt;BR /&gt;Memory: 12096 MB&lt;BR /&gt;AUTO RAID: HP SureStore Virtual Array&lt;BR /&gt;Problem from the Oracle DBA:&lt;BR /&gt;My understanding is that report processing is taking longer (extracting data from disks and/or processing it). There may also be OLTP slowness (things like navigating forms, manipulating old data, and inserting new records).&lt;BR /&gt;From the DBA's perspective, especially during heavy-use times like next week, wait IO contributes significantly to the maxed-out CPU utilization for extended periods. Database statistics indicate that our greatest addressable problem is IO contention. This week I'm also noticing more rollback waits than I've seen previously.&lt;BR /&gt;&lt;BR /&gt;Wait IO is consistently higher on this system than I've experienced on other UNIX servers.&lt;BR /&gt;&lt;BR /&gt;I would like to know if there are any ideas on how or where to tackle this. The DBA is recommending ditching HP's AutoRAID and doing straight striping and mirroring.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 30 Sep 2004 12:01:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390890#M865362</guid>
      <dc:creator>Global Server Operation</dc:creator>
      <dc:date>2004-09-30T12:01:13Z</dc:date>
    </item>
    <item>
      <title>Re: Wait IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390891#M865363</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Usually we're better off when we have a baseline for system performance (and database performance), as this gives you hints on where to look.&lt;BR /&gt;Do you have any such performance results/reports you could use?&lt;BR /&gt;&lt;BR /&gt;Also, how is the system performing when users complain?&lt;BR /&gt;&lt;BR /&gt;Here are a few threads on the VA, but there are more around:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=616937" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=616937&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=628299" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=628299&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=183732" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=183732&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Jean-Luc</description>
      <pubDate>Thu, 30 Sep 2004 12:28:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390891#M865363</guid>
      <dc:creator>Jean-Luc Oudart</dc:creator>
      <dc:date>2004-09-30T12:28:10Z</dc:date>
    </item>
    <item>
      <title>Re: Wait IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390892#M865364</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;This is most likely a bottleneck on the system itself.&lt;BR /&gt;&lt;BR /&gt;See if you can put some of the LUNs on the alternate path. Look at 'sar -d 5 20' output and see whether only one path is active all the time. If so, make some of the alternate paths primary and see if you get any relief. This doesn't qualify as true load balancing, but you are trying to use the alternate path as much as possible. Say the primary path is currently c6t0d1 and the alternate path is c8t0d1. To make c8t0d1 the primary path, do&lt;BR /&gt;&lt;BR /&gt;vgreduce vgxx /dev/dsk/c6t0d1&lt;BR /&gt;vgextend vgxx /dev/dsk/c6t0d1&lt;BR /&gt;&lt;BR /&gt;(Removing the old primary and immediately re-adding it demotes it to the alternate, so c8t0d1 becomes the primary.)&lt;BR /&gt;&lt;BR /&gt;Post your 'sar -d 5 5' output followed by 'sar -b 5 5' and 'sar 5 5'.&lt;BR /&gt;&lt;BR /&gt;-Sri&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 30 Sep 2004 17:38:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390892#M865364</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2004-09-30T17:38:18Z</dc:date>
    </item>
    <item>
      <title>Re: Wait IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390893#M865365</link>
      <description>We had a similar problem with a VA7410 and rp8440 (12 x 875 MHz).&lt;BR /&gt;Basically the computer is too fast for the disk array and is capable of hammering individual LUNs at too fast a rate for the array to keep up.&lt;BR /&gt;Typical causes of this are: using too few disks of too large a size; having a full array, so that all data access is at the centre of all the disks; using slower spin-speed disks, such as 10,000 rpm; and sending all the I/O down one LUN at once.&lt;BR /&gt;Stick with 15k rpm disks at all times and don't accept financial arguments to the contrary. Having mixed spin speeds caused major performance problems for us, and it ended up costing more - a 146 GB disk at 10k rpm has to do twice the work of two 73 GB disks at 15k rpm, yet it has only two-thirds of the spin speed - that's quite a penalty.&lt;BR /&gt;A good way to alleviate your bottleneck and I/O wait problem is to re-create each logical volume striped across two LUNs, one LUN in each redundancy group (RG).&lt;BR /&gt;Next time I won't get anything less than an XP array.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 01 Oct 2004 04:55:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390893#M865365</guid>
      <dc:creator>Steve Lewis</dc:creator>
      <dc:date>2004-10-01T04:55:31Z</dc:date>
    </item>
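The spin-speed penalty described in the post above can be sanity-checked with a quick back-of-envelope model. This is a rough sketch under stated assumptions: random-I/O service time is approximated as an average seek plus rotational latency, and the seek time and IOPS figures are illustrative, not measurements from any specific drive.

```python
# Back-of-envelope check of the spin-speed argument above.
# Assumption: random-I/O service time ~= average seek + average
# rotational latency (half a revolution). All numbers illustrative.

def avg_rotational_latency_ms(rpm):
    """Average rotational latency = time for half a revolution, in ms."""
    return 0.5 * 60_000 / rpm

lat_10k = avg_rotational_latency_ms(10_000)   # 3.0 ms
lat_15k = avg_rotational_latency_ms(15_000)   # 2.0 ms

# One 146 GB 10k spindle must absorb the random I/O that two
# 73 GB 15k spindles would split between them.
seek_ms = 4.0  # assumed average seek, same for both drive classes
iops_10k_single = 1000 / (seek_ms + lat_10k)      # one big slow spindle
iops_15k_pair = 2 * 1000 / (seek_ms + lat_15k)    # two small fast spindles

print(f"10k single spindle: {iops_10k_single:.0f} IOPS")
print(f"15k spindle pair:   {iops_15k_pair:.0f} IOPS")
print(f"pair advantage:     {iops_15k_pair / iops_10k_single:.1f}x")
```

Under these assumptions the two-spindle 15k configuration sustains well over twice the random IOPS of the single 10k spindle, which is consistent with the "quite a penalty" observation.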
    <item>
      <title>Re: Wait IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390894#M865366</link>
      <description>Hi,&lt;BR /&gt;Go through this doc as well...&lt;BR /&gt;Regards,</description>
      <pubDate>Fri, 01 Oct 2004 06:21:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390894#M865366</guid>
      <dc:creator>Bharat Katkar</dc:creator>
      <dc:date>2004-10-01T06:21:42Z</dc:date>
    </item>
    <item>
      <title>Re: Wait IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390895#M865367</link>
      <description>AutoRAID performs well when you take care of certain things. It is always a good idea to leave roughly 30-40% of the space unconfigured on AutoRAID. This way, the data that is read most of the time is kept in one RAID level while the remaining data is kept in another RAID level.&lt;BR /&gt;&lt;BR /&gt;So you need to check how much space you have configured and how much is free. The other thing you need to check is how the paths have been configured. Check the alternate and primary paths and try adjusting them as Sridhar suggested.&lt;BR /&gt;&lt;BR /&gt;Rearranging the RAID would be a big exercise: you would need to back up the data, set up the RAID, and then restore. We have a 7400 configured as 0+1 (striping and mirroring). With this, half of the space is unused; the cost per GB is high, but it offers good performance. You may also want to have a look at how the SQL code is written: is it efficient, or is it doing unnecessary reads/writes?&lt;BR /&gt;&lt;BR /&gt;You may also want to move frequently read data to a separate VG. Glance would be helpful in this regard; glance -i would give you IO by file system.&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;&lt;BR /&gt;Anil</description>
      <pubDate>Fri, 01 Oct 2004 07:33:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390895#M865367</guid>
      <dc:creator>RAC_1</dc:creator>
      <dc:date>2004-10-01T07:33:00Z</dc:date>
    </item>
    <item>
      <title>Re: Wait IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390896#M865368</link>
      <description>&lt;BR /&gt;The name of the game for Oracle storage is S.A.M.E. (Stripe And Mirror Everything). In your case, since the LUNs are already protected, try striping them a minimum of 4 ways with a stripe width of 64K to 128K.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 01 Oct 2004 08:10:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390896#M865368</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2004-10-01T08:10:06Z</dc:date>
    </item>
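As a hedged sketch of the S.A.M.E. advice above, a 4-way stripe with a 64 KB stripe size could be created with HP-UX LVM roughly as follows. The volume group name, logical volume name, and 8192 MB size are placeholders, and this assumes vg01 already spans at least four VA LUNs; check lvcreate(1M) on your system before running anything.

```shell
# Hypothetical example: logical volume striped 4 ways with a 64 KB
# stripe size across the LUNs in vg01 (names and size are placeholders).
# -i = number of stripes, -I = stripe size in KB, -L = LV size in MB
lvcreate -i 4 -I 64 -L 8192 -n lvol_oradata vg01
```

With the LUNs themselves already mirrored by the array, striping at the LVM layer spreads each logical volume's I/O across multiple LUNs instead of hammering one at a time.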
    <item>
      <title>Re: Wait IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390897#M865369</link>
      <description>Can you send a copy of the statspack report that the DBA is looking at? Maybe we can isolate the issue to just a particular set of data files. Once we know which ones they are, we can then see which disks they're on. We can also try to trace the process's UNIX system calls using the tusc utility to show exactly where the slowness is being experienced.&lt;BR /&gt;&lt;BR /&gt;Alwyn</description>
      <pubDate>Fri, 01 Oct 2004 15:42:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/wait-io/m-p/3390897#M865369</guid>
      <dc:creator>Alwyn Santos</dc:creator>
      <dc:date>2004-10-01T15:42:22Z</dc:date>
    </item>
  </channel>
</rss>

