<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: any problem with the high wio% in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881618#M101066</link>
    <description>Have a look at Bill Hassel's reply to a very similar question here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x3da54a988422d711abdc0090277a778c,00.html" target="_blank"&gt;http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x3da54a988422d711abdc0090277a778c,00.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;With a wio% that high it's definitely an I/O issue; the question is what to do to isolate or fix it.&lt;BR /&gt;&lt;BR /&gt;With 2Gb connections to the EMC, they're not going to be the problem. Either the EMC is having trouble keeping up with the large number of I/O requests - EMC can check this for you and tell you if certain physical disks in the Symmetrix are thrashing (Optimizer can report and fix it) - or, in my opinion, you have too many I/O requests going to particular EMC LUNs (/dev/dsk/.. entries).&lt;BR /&gt;&lt;BR /&gt;Certainly something to try is striping your lvols across all available channels and devices to even out the I/O load - this should increase throughput considerably - unless the problem is at the EMC end. You need to investigate both possibilities.&lt;BR /&gt;</description>
    <pubDate>Wed, 15 Jan 2003 14:19:34 GMT</pubDate>
    <dc:creator>Stefan Farrelly</dc:creator>
    <dc:date>2003-01-15T14:19:34Z</dc:date>
    <item>
      <title>any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881603#M101051</link>
      <description>A customer complained that their wio% is high (50%).&lt;BR /&gt;&lt;BR /&gt;Here is my sar output (averages):&lt;BR /&gt; %usr    %sys    %wio   %idle&lt;BR /&gt;Average       35       8      55       2&lt;BR /&gt;         runq-sz %runocc swpq-sz %swpocc&lt;BR /&gt;Average      1.2       6     0.0       0&lt;BR /&gt;&lt;BR /&gt;         bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s&lt;BR /&gt;Average        0    4521     100       7      44      84    3519     293&lt;BR /&gt;&lt;BR /&gt;         swpin/s bswin/s swpot/s bswot/s pswch/s&lt;BR /&gt;Average     0.00     0.0    0.00     0.0   14173&lt;BR /&gt;&lt;BR /&gt;         scall/s  sread/s  swrit/s   fork/s   exec/s  rchar/s  wchar/s&lt;BR /&gt;Average    35173     8372     3540    13.08    11.67  9280218    29881&lt;BR /&gt;&lt;BR /&gt;          iget/s namei/s dirbk/s&lt;BR /&gt;Average        4     557       0&lt;BR /&gt;&lt;BR /&gt;         rawch/s canch/s outch/s rcvin/s xmtin/s mdmin/s&lt;BR /&gt;Average        0       0       0       0       0       0&lt;BR /&gt;          msg/s  sema/s&lt;BR /&gt;Average     1.12  973.43&lt;BR /&gt;&lt;BR /&gt;Due to the limited space in the question window, please find a complete sar log attached. Please note that the I/O of all disks is very good. This is a typical picture:&lt;BR /&gt;&lt;BR /&gt;Average   c13t4d1   24.92    0.50      36     632    5.00    8.98&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Can anyone help me find out why wio% is so high while the I/O of the disks is good, the&lt;BR /&gt;%rcache is 100%, and the %wcache is 84%?&lt;BR /&gt;&lt;BR /&gt;Thank you in advance. Do I have to provide any additional logs, for example vmstat, iostat, top?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Jan 2003 15:17:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881603#M101051</guid>
      <dc:creator>yang haijun</dc:creator>
      <dc:date>2003-01-14T15:17:49Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881604#M101052</link>
      <description>Yang,&lt;BR /&gt;&lt;BR /&gt;can you attach that again, but this time as a text file?&lt;BR /&gt;&lt;BR /&gt;live free or die&lt;BR /&gt;harry</description>
      <pubDate>Tue, 14 Jan 2003 15:37:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881604#M101052</guid>
      <dc:creator>harry d brown jr</dc:creator>
      <dc:date>2003-01-14T15:37:48Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881605#M101053</link>
      <description>Yes, a wio% of 55 is absolutely terrible. You are completely I/O bound. As a guide:&lt;BR /&gt;&lt;BR /&gt;&amp;lt;5 is perfect&lt;BR /&gt;5-20 is busy but not excessive&lt;BR /&gt;&amp;gt;20 is completely I/O bound&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Jan 2003 15:40:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881605#M101053</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-01-14T15:40:40Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881606#M101054</link>
      <description>Yang,&lt;BR /&gt;&lt;BR /&gt;Can't open the zip archive, it says it is corrupt.&lt;BR /&gt;&lt;BR /&gt;Do you have Glance on your server? I believe there is a free trial version on the Application CDs (probably CD # 2 or # 1). If you install this, you can examine the running processes to find out which ones are waiting for I/O.&lt;BR /&gt;&lt;BR /&gt;Without Glance, I would suggest the following:&lt;BR /&gt;&lt;BR /&gt;'vmstat' - will show the count of processes blocked on resources, and activity on memory (pages freed and allocated).&lt;BR /&gt;&lt;BR /&gt;How big is swap and how full is it (swapinfo -ta)? If you see deactivations in vmstat, you are running out of memory.&lt;BR /&gt;&lt;BR /&gt;Buffer cache should be no larger than 400M (rule of thumb), but if you have some spare memory, bump it up by 100M or so and see what happens.&lt;BR /&gt;&lt;BR /&gt;Check top and see how much time the system spends in system mode - is user mode chewing up the wait I/O, or is it the OS managing itself that causes the wait?&lt;BR /&gt;&lt;BR /&gt;This is a big area to examine - check out previous posts on performance, and look at training or a good book (HP-UX performance tuning by Weygant and Sauers is a fantastic reference).&lt;BR /&gt;&lt;BR /&gt;Best of luck, Ian</description>
      <pubDate>Tue, 14 Jan 2003 15:42:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881606#M101054</guid>
      <dc:creator>Ian Dennison_1</dc:creator>
      <dc:date>2003-01-14T15:42:37Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881607#M101055</link>
      <description>The text file would be 2.80 MB.&lt;BR /&gt;I will upload the last part of sar, the average numbers.&lt;BR /&gt;Thank you.</description>
      <pubDate>Tue, 14 Jan 2003 15:43:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881607#M101055</guid>
      <dc:creator>yang haijun</dc:creator>
      <dc:date>2003-01-14T15:43:09Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881608#M101056</link>
      <description>This could be caused by having too many disks on each controller. What is the output from:&lt;BR /&gt;ioscan -fknCdisk&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Jan 2003 15:43:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881608#M101056</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-01-14T15:43:23Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881609#M101057</link>
      <description>Yes, you have far too many disks (all the c13's and c14's) on each controller, and they're all busy to varying degrees, which means your poor controller is flooded. You need to add more I/O (SCSI/fibre) controllers.&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Jan 2003 15:49:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881609#M101057</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-01-14T15:49:17Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881610#M101058</link>
      <description>&lt;BR /&gt;What's going on with these disk devices??&lt;BR /&gt;&lt;BR /&gt;Average   c13t5d0   14.80 32767.50      33     549    5.00    5.81&lt;BR /&gt;Average   c13t5d1   12.33 32767.50      31     555    4.99    4.72&lt;BR /&gt;&lt;BR /&gt;Average   c14t4d4   12.63 32767.50      31     554    5.00    5.03&lt;BR /&gt;Average   c14t4d5   13.38 32767.50      31     558    4.95    5.38&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;What OS release are you running and what is the latest patch bundle you have installed?&lt;BR /&gt;&lt;BR /&gt;live free or die&lt;BR /&gt;harry</description>
      <pubDate>Tue, 14 Jan 2003 15:51:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881610#M101058</guid>
      <dc:creator>harry d brown jr</dc:creator>
      <dc:date>2003-01-14T15:51:00Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881611#M101059</link>
      <description>Harry:&lt;BR /&gt; I don't know why there is such a high queue length for those 4 disks. I will figure out which application is running on them. Do you think it is these 4 devices that are causing the high wio%?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Jan 2003 16:45:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881611#M101059</guid>
      <dc:creator>yang haijun</dc:creator>
      <dc:date>2003-01-14T16:45:37Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881612#M101060</link>
      <description>Stefan:&lt;BR /&gt;   Thank you for your help.&lt;BR /&gt; I found there are 4 HBA's in host hbsd1. But&lt;BR /&gt;C13, C14, C17, C18 are actually two HBA's; they are bound to 3aa and 14aa, and see about 144 devices.&lt;BR /&gt;And C21, C23 are the other two HBA's; they are bound to 3BA and 14ba, and they see about 40 devices.&lt;BR /&gt;Please confirm these tomorrow.&lt;BR /&gt;&lt;BR /&gt;As I am not onsite, I have to rely on my onsite colleague to get the necessary logs. Sorry for being unable to provide them quickly.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Jan 2003 17:10:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881612#M101060</guid>
      <dc:creator>yang haijun</dc:creator>
      <dc:date>2003-01-14T17:10:58Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881613#M101061</link>
      <description>Harry:&lt;BR /&gt;   OS level: B.11.11.&lt;BR /&gt;   Model: 9000/800/SD16000&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;About the patch level - can you find it in my attachment?&lt;BR /&gt;The zip file is all that I can give; it is produced by a script named emcgrab. Yes, we are dealing with an EMC Symmetrix connected to an HP Superdome.&lt;BR /&gt;&lt;BR /&gt;From the zip file you can see many, many things, but I definitely have to get more from the customer.&lt;BR /&gt;</description>
      <pubDate>Tue, 14 Jan 2003 17:19:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881613#M101061</guid>
      <dc:creator>yang haijun</dc:creator>
      <dc:date>2003-01-14T17:19:36Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881614#M101062</link>
      <description>There is a 1 MB limit on attachments here; that's why the zip archive is corrupt.&lt;BR /&gt;&lt;BR /&gt;Just thought I'd point that out.</description>
      <pubDate>Tue, 14 Jan 2003 18:11:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881614#M101062</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-01-14T18:11:46Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881615#M101063</link>
      <description>&lt;BR /&gt;If it's running on EMC the performance should be OK - it seems you have uneven performance - some devices are running very heavily compared to others.&lt;BR /&gt;&lt;BR /&gt;Are you using EMC Optimizer? It is like a load balancer - it monitors all devices on the EMC and then moves the data around to even the load across all devices. Once this is done your stats on the HP will show a nice, even (low) disk usage across all devices, performance will be a lot better, and your wio% should drop a lot. As a guide - our EMC devices on HP servers with EMC Optimizer configured all run with a wio% of 5 or less - even with heavy I/O.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Jan 2003 11:04:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881615#M101063</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-01-15T11:04:16Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881616#M101064</link>
      <description>EMC should be running better than that.&lt;BR /&gt;&lt;BR /&gt;How many disk adapters do you have?&lt;BR /&gt;&lt;BR /&gt;Most EMCs are also set for the 2nd outstanding write to wait on the 1st. With many writes, this can cause a lot of wait. Most microcode levels allow this limit to be set higher.&lt;BR /&gt;&lt;BR /&gt;I'm curious how much cache is in the EMC? This may need to be higher.</description>
      <pubDate>Wed, 15 Jan 2003 13:39:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881616#M101064</guid>
      <dc:creator>John Bolene</dc:creator>
      <dc:date>2003-01-15T13:39:38Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881617#M101065</link>
      <description>The cache in the Symmetrix is 16GB - big enough, I think.&lt;BR /&gt;&lt;BR /&gt;On the Symmetrix side there are in total 4 fibre channel ports for the Superdome, but only c13 and c14 are in use; I can see from our log (sorry, I am unable to upload it due to the size limit) that 2 FA ports are used by the HP Superdome.&lt;BR /&gt;I do not think channel throughput is a problem, since we are using 2Gb adapters.&lt;BR /&gt;&lt;BR /&gt;I have seen some documents saying that %wio is another kind of idle, which can indicate either that I/O is too sluggish or that the CPU is too fast. I can say that the I/O for each disk is very good, as shown by the very nice avwait and avserv numbers.&lt;BR /&gt;&lt;BR /&gt;If the customer adds one or more Oracle instances on the server, I think wio% could go down due to the continual I/O feeding by parallel processes/threads. Is that correct?&lt;BR /&gt;&lt;BR /&gt;Another thing I wish to clarify is the extremely high avque (about 32000) for the 4 devices mentioned by Harry. It is such a weird number. I have examined the LVM layout: the 4 disks are no different from the others, the LVs are created with host-based striping, and many volumes reside on these 4 devices. I am really puzzled by this monstrous number.&lt;BR /&gt;&lt;BR /&gt;Please cast some light on this, especially on whether the high %wio is a bad thing or a normal thing.&lt;BR /&gt;&lt;BR /&gt;Thank you.</description>
      <pubDate>Wed, 15 Jan 2003 13:54:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881617#M101065</guid>
      <dc:creator>yang haijun</dc:creator>
      <dc:date>2003-01-15T13:54:42Z</dc:date>
    </item>
    <item>
      <title>Re: any problem with the high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881618#M101066</link>
      <description>Have a look at Bill Hassel's reply to a very similar question here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x3da54a988422d711abdc0090277a778c,00.html" target="_blank"&gt;http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x3da54a988422d711abdc0090277a778c,00.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;With a wio% that high it's definitely an I/O issue; the question is what to do to isolate or fix it.&lt;BR /&gt;&lt;BR /&gt;With 2Gb connections to the EMC, they're not going to be the problem. Either the EMC is having trouble keeping up with the large number of I/O requests - EMC can check this for you and tell you if certain physical disks in the Symmetrix are thrashing (Optimizer can report and fix it) - or, in my opinion, you have too many I/O requests going to particular EMC LUNs (/dev/dsk/.. entries).&lt;BR /&gt;&lt;BR /&gt;Certainly something to try is striping your lvols across all available channels and devices to even out the I/O load - this should increase throughput considerably - unless the problem is at the EMC end. You need to investigate both possibilities.&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Jan 2003 14:19:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-problem-with-the-high-wio/m-p/2881618#M101066</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-01-15T14:19:34Z</dc:date>
    </item>
  </channel>
</rss>

