<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: sar -u - high wio% in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937667#M112982</link>
    <description>Here is a data collection script that might help.&lt;BR /&gt;&lt;BR /&gt;Attached.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
    <pubDate>Thu, 27 Mar 2003 21:40:48 GMT</pubDate>
    <dc:creator>Steven E. Protter</dc:creator>
    <dc:date>2003-03-27T21:40:48Z</dc:date>
    <item>
      <title>sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937665#M112980</link>
      <description>We have a V2200 (16 GB RAM, 10 CPUs) used as a database server with 3 fibre cards. We use EMC disks and also have EMC PowerPath. For the last few days we have been getting complaints about slow performance. Two things bother me:&lt;BR /&gt;- sar -b: the % write cache remains very low, averaging only 30-40 (buffer cache = 1 GB)&lt;BR /&gt;&lt;BR /&gt;- sar -u: the average %wio remains as high as 30.&lt;BR /&gt;&lt;BR /&gt;What exactly should I look at in Glance to help me find the bottleneck?&lt;BR /&gt;&lt;BR /&gt;Important - how do I interpret the Glance results and find the reason for the bottleneck?&lt;BR /&gt;&lt;BR /&gt;I know this question has been asked many times, and I have already gone through a few of the old archives. But maybe over time somebody has found a better way to diagnose this type of issue.&lt;BR /&gt;&lt;BR /&gt;Any help appreciated.</description>
      <pubDate>Thu, 27 Mar 2003 21:31:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937665#M112980</guid>
      <dc:creator>Deepak Seth_1</dc:creator>
      <dc:date>2003-03-27T21:31:42Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937666#M112981</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Have you checked your powerpath to make sure there are no problems?  Try 'powermt display', I think that is the command that will show you the paths.  &lt;BR /&gt;&lt;BR /&gt;Also, has anything changed on the box since it started running slow?  Been rebooted or patched?  Any runaway processes?  We've seen similar problems when our Oracle DBAs were running an analyze program that went nuts.&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Thu, 27 Mar 2003 21:36:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937666#M112981</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2003-03-27T21:36:25Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937667#M112982</link>
      <description>Here is a data collection script that might help.&lt;BR /&gt;&lt;BR /&gt;Attached.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 27 Mar 2003 21:40:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937667#M112982</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-03-27T21:40:48Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937668#M112983</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Can you also post your 'sar -d 2 5' output?&lt;BR /&gt;You will get the response time and average wait time there, along with utilization.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Thu, 27 Mar 2003 21:40:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937668#M112983</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2003-03-27T21:40:53Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937669#M112984</link>
      <description>This sounds like a primary/alternate path issue to the EMC frame. I think sar -d would uncover a problem with either the primary or alternate path to the EMC-attached drives.&lt;BR /&gt;&lt;BR /&gt;As far as Glance goes, you could use "s" to select one of the top Oracle processes and look for a lot of voluntary context switches and very little I/O, with very few involuntary switches. But that would only be a hint as to the problem. Check the second-to-bottom line, as it tells the reason the pid is waiting, e.g. I/O, streams, etc.</description>
      <pubDate>Thu, 27 Mar 2003 21:45:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937669#M112984</guid>
      <dc:creator>John Dvorchak</dc:creator>
      <dc:date>2003-03-27T21:45:34Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937670#M112985</link>
      <description>Just remembered: I increased nproc from 16000 to 20000 last week. I have read somewhere that raising its value is not recommended.&lt;BR /&gt;&lt;BR /&gt;I already checked with the powermt watch command, and the maximum queued I/O I see is 1 or 2 under q-IOs; plus, all paths are optimal.&lt;BR /&gt;&lt;BR /&gt;Also attached is the sar -d 2 5 output.</description>
      <pubDate>Thu, 27 Mar 2003 21:50:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937670#M112985</guid>
      <dc:creator>Deepak Seth_1</dc:creator>
      <dc:date>2003-03-27T21:50:35Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937671#M112986</link>
      <description>Did you change any other kernel parameters when you increased nproc?   What are your mount options for the filesystems [mount -v]?&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Thu, 27 Mar 2003 22:02:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937671#M112986</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2003-03-27T22:02:04Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937672#M112987</link>
      <description>Sorry, it is nfile, not nproc.</description>
      <pubDate>Thu, 27 Mar 2003 22:13:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937672#M112987</guid>
      <dc:creator>Deepak Seth_1</dc:creator>
      <dc:date>2003-03-27T22:13:58Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937673#M112988</link>
      <description>Attached is the mount -v output. We use OnlineJFS mount options.&lt;BR /&gt;</description>
      <pubDate>Thu, 27 Mar 2003 22:16:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937673#M112988</guid>
      <dc:creator>Deepak Seth_1</dc:creator>
      <dc:date>2003-03-27T22:16:51Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937674#M112989</link>
      <description>I think before my last reboot I changed the /etc/fstab file to add the OnlineJFS mount options, and then rebooted the machine.</description>
      <pubDate>Thu, 27 Mar 2003 22:32:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937674#M112989</guid>
      <dc:creator>Deepak Seth_1</dc:creator>
      <dc:date>2003-03-27T22:32:53Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937675#M112990</link>
      <description>Hi Deepak,&lt;BR /&gt;&lt;BR /&gt;Sorry for the delayed reply.&lt;BR /&gt;&lt;BR /&gt;Looking at your sar -d, your disks look perfectly normal. I see neither high utilization nor abnormal response times.&lt;BR /&gt;&lt;BR /&gt;I would rule out the disk subsystem here.&lt;BR /&gt;&lt;BR /&gt;It is interesting to see that you enabled the OnlineJFS options for almost all the filesystems. While the general perception is that these options help, they should be used only on a case-by-case basis. Some applications benefit from the cache and some do not.&lt;BR /&gt;&lt;BR /&gt;I don't know whether your application likes to bypass the buffer cache. The low write cache hit rate and increased %wio may correspond to your bypassing the buffer cache.&lt;BR /&gt;&lt;BR /&gt;Try remounting the filesystems without these options (mincache and convosync) and see if it helps. You can use the 'remount' option of mount to do it online, without having to unmount the filesystems.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Thu, 27 Mar 2003 22:56:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937675#M112990</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2003-03-27T22:56:27Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937676#M112991</link>
      <description>Thanks for the reply. It looks like a good idea to try remounting the filesystems without those options. If I change my /etc/fstab file and use mount -a, does that mean it remounts them in the normal way?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 27 Mar 2003 23:16:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937676#M112991</guid>
      <dc:creator>Deepak Seth_1</dc:creator>
      <dc:date>2003-03-27T23:16:16Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937677#M112992</link>
      <description>Hi Deepak,&lt;BR /&gt;&lt;BR /&gt;mount -F vxfs -o delaylog,remount /mount_point &lt;BR /&gt;&lt;BR /&gt;should reset it back.&lt;BR /&gt;&lt;BR /&gt;You can enable these options back using the same remount option.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Thu, 27 Mar 2003 23:27:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937677#M112992</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2003-03-27T23:27:45Z</dc:date>
    </item>
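    <!-- Editor's note: Sri's remount suggestion above can be sketched as a small shell loop. The mount points below are hypothetical examples, and 'mount -F vxfs' is HP-UX-specific; the sketch prints the commands (a dry run) rather than executing them, so it is safe to try anywhere before running the real thing as root. -->

```shell
# Dry-run sketch of the remount sequence described above.
# /u01/oradata and /u02/oraindex are made-up mount points; substitute
# the OnlineJFS filesystems from your own 'mount -v' output.
for fs in /u01/oradata /u02/oraindex; do
    # Remount with plain delaylog, dropping mincache=direct,convosync=direct.
    # Printing instead of executing keeps this a safe illustration.
    printf 'mount -F vxfs -o delaylog,remount %s\n' "$fs"
done
```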
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937678#M112993</link>
      <description>Thanks. I remounted them and checked with the mount -v command. Let me see how it goes tomorrow.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 27 Mar 2003 23:44:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937678#M112993</guid>
      <dc:creator>Deepak Seth_1</dc:creator>
      <dc:date>2003-03-27T23:44:07Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937679#M112994</link>
      <description>One more thing: I used the OnlineJFS options only for the Oracle databases, and only for the datafiles and indexes - nothing for the archive and redo logs.&lt;BR /&gt;Anyway, let's see how it works with those options taken out.</description>
      <pubDate>Fri, 28 Mar 2003 00:09:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937679#M112994</guid>
      <dc:creator>Deepak Seth_1</dc:creator>
      <dc:date>2003-03-28T00:09:27Z</dc:date>
    </item>
    <item>
      <title>Re: sar -u - high wio%</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937680#M112995</link>
      <description>So far it looks good. I haven't heard any complaints from users today. It will be a good investigation for me to find out why the OnlineJFS mount options impacted one database while the other remained fine.&lt;BR /&gt;Just one last question: when we look at the sar -u output, do we need to worry if we see a high %wio? In my case it remains around 30%. Second, in the sar -d output, should I compare avg service time versus avg wait time? Should I be worried if I see avg wait time greater than avg serv time?&lt;BR /&gt;Sridhar - you have seen my sar -d output; a few of the disks were showing avg wait time greater than serv time. Does this mean I have some kind of bottleneck?&lt;BR /&gt;&lt;BR /&gt;Any pointers?</description>
      <pubDate>Fri, 28 Mar 2003 17:54:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sar-u-high-wio/m-p/2937680#M112995</guid>
      <dc:creator>Deepak Seth_1</dc:creator>
      <dc:date>2003-03-28T17:54:51Z</dc:date>
    </item>
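    <!-- Editor's note: the avwait-versus-avserv comparison asked about above can be automated with a small awk filter over 'sar -d' output. The column layout (device, %busy, avque, r+w/s, blks/s, avwait, avserv) is the usual HP-UX sar -d format, and the sample data below is invented for illustration - this is a sketch, not output from the poster's system. -->

```shell
# Flag disks whose average wait time exceeds their average service time,
# a rough sign that requests are queueing at the device.
# The here-document stands in for real 'sar -d 2 5' output.
cat <<'EOF' | awk 'NF == 7 && $1 != "device" { if ($6 > $7) print $1 }'
device   %busy   avque   r+w/s   blks/s  avwait  avserv
c4t0d0   45      1.2     120     1900    9.5     6.3
c5t1d0   30      0.5     80      1200    3.1     7.8
EOF
```

With the sample data, only c4t0d0 is flagged (avwait 9.5 &gt; avserv 6.3).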
  </channel>
</rss>

