<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Disk IO Performance issue ? in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839463#M90860</link>
    <description>I agree with others who point out that Unix-based utilities can be unreliable about performance on disk arrays, SANs, etc. &lt;BR /&gt;&lt;BR /&gt;EMC has its own tool for monitoring Symmetrix performance, and I would recommend using it. It easily points out "hot" spots on the EMC disks.&lt;BR /&gt;&lt;BR /&gt;EMC also has an OVO SPI for their product that will allow you to automatically send performance alerts to OVO. You can set thresholds so alerts go out before problems occur. Nice feature.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;Marty</description>
    <pubDate>Tue, 05 Nov 2002 21:18:45 GMT</pubDate>
    <dc:creator>Martin Johnson</dc:creator>
    <dc:date>2002-11-05T21:18:45Z</dc:date>
    <item>
      <title>Disk IO Performance issue ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839459#M90856</link>
      <description>Hi all,&lt;BR /&gt;&lt;BR /&gt;I have 5 servers in cluster mode (using MC/SG) connected to a SAN (EMC Symmetrix through Brocade switches). The cluster is built with 3 rp7400s and 2 rp7410s.&lt;BR /&gt;&lt;BR /&gt;At some point users complained about performance, and when running the sar -d command I'm getting some HUGE numbers. I'm just wondering if these numbers make any sense or if sar is showing me something wrong!&lt;BR /&gt;&lt;BR /&gt;If I can trust the sar output, it looks like I'm in trouble!&lt;BR /&gt; &lt;BR /&gt;bash-2.05# sar -d 10 4&lt;BR /&gt;&lt;SEE attachement=""&gt;&lt;BR /&gt;&lt;/SEE&gt;</description>
      <pubDate>Tue, 05 Nov 2002 20:37:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839459#M90856</guid>
      <dc:creator>Eric Theroux</dc:creator>
      <dc:date>2002-11-05T20:37:36Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO Performance issue ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839460#M90857</link>
      <description>You can't trust sar, and you can't even trust high-end performance tools like Glance for this. Or rather, you can trust them, but you have to understand what you are seeing. All the host-based tools can know is that an awful lot of I/O is going through what they think is one disk. They have no way of knowing that it is really a high-end disk array.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Nov 2002 20:46:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839460#M90857</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2002-11-05T20:46:47Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO Performance issue ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839461#M90858</link>
      <description>Your data doesn't look too bad; here are some thoughts of mine:&lt;BR /&gt;&lt;BR /&gt;1. sar -u&lt;BR /&gt;%idle low? This is the percentage of time that the CPU is not running processes; if it is low, the cause is possibly an IO bottleneck.&lt;BR /&gt;&lt;BR /&gt;%usr high? Many systems normally operate with 80% of CPU time spent as user time and 20% as system time. If %usr is low while the system is busy, the cause is possibly an IO bottleneck.&lt;BR /&gt;&lt;BR /&gt;%wio &amp;gt; 15? Yes, possibly a disk IO problem.&lt;BR /&gt;&lt;BR /&gt;2. sar -d&lt;BR /&gt;%busy &amp;gt; 50? Yes, you may have an IO bottleneck on that disk; check which disk is having the problem.&lt;BR /&gt;&lt;BR /&gt;Since your %busy is not too high, you may need to check your network instead; use vmstat.&lt;BR /&gt;&lt;BR /&gt;You can also use iostat or glance to help you narrow it down. Good luck.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Nov 2002 20:59:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839461#M90858</guid>
      <dc:creator>Victor_5</dc:creator>
      <dc:date>2002-11-05T20:59:52Z</dc:date>
    </item>
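The rule of thumb in the post above (%busy &gt; 50 suggests a disk IO bottleneck) can be checked mechanically rather than by eye. A minimal sketch, not an HP-UX tool: it assumes sar -d style columns (device, %busy, avque, r+w/s, blks/s, avwait, avserv), and the sample data is hypothetical.

```shell
# Hypothetical sar -d style data; columns:
#   device %busy avque r+w/s blks/s avwait avserv
sample='c17t1d1 66.00 1.29 284 27206 4.91 2.91
c25t1d1 40.00 0.50 96 8525 3.91 2.11
c33t1d1 69.00 1.50 294 28413 4.99 2.87'

# Print only the devices whose %busy (field 2) exceeds the 50% threshold.
busy=$(echo "$sample" | awk '$2 > 50 { print $1 }')
echo "$busy"
```

On a live system the same filter could be fed from `sar -d 10 4` output instead of the canned sample, once the header lines are stripped.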
    <item>
      <title>Re: Disk IO Performance issue ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839462#M90859</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Those 'avque' numbers look too big. There is a patch for 'sar' on 11.11 that addresses that problem. Here are the details:&lt;BR /&gt;&lt;BR /&gt;Symptoms:&lt;BR /&gt; PHKL_27200:&lt;BR /&gt; ( SR:8606249217 CR:JAGae15611 )&lt;BR /&gt;&lt;BR /&gt; "sar -d" reports incorrect values for avque and avwait.&lt;BR /&gt;&lt;BR /&gt; Example output:&lt;BR /&gt; device %busy  avque    r+w/s blks/s avwait        avserv&lt;BR /&gt; c17t1d1 66.00 60178.29 284   27206  2124620672.00 0.00&lt;BR /&gt; c25t1d1 67.00 32767.50 296   28525  4.91          2.91&lt;BR /&gt; c33t1d1 69.00 65531.50 294   28413  4.99          2.87&lt;BR /&gt; c41t1d1 67.60 65534.50 316   29669  5.04          2.68&lt;BR /&gt; c49t1d1 67.80 60426.86 310   29845  1295032832.00 0.00&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Nov 2002 21:02:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839462#M90859</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-11-05T21:02:53Z</dc:date>
    </item>
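Until the patch is applied, the bogus avque readings can at least be spotted mechanically: values like 32767.50 and 65534.50 sit suspiciously close to 16-bit boundaries, and real queue depths are rarely in the thousands. A hedged sketch that counts implausible avque values; the first three lines are adapted from the example output in the patch description, and the last is a made-up healthy line for contrast.

```shell
# Columns assumed: device %busy avque
sar_out='c17t1d1 66.00 60178.29
c25t1d1 67.00 32767.50
c41t1d1 67.60 65534.50
c57t1d1 55.00 2.50'

# Count avque values (field 3) too large to be a plausible queue depth.
suspect=$(echo "$sar_out" | awk '$3 > 1000 { n++ } END { print n + 0 }')
echo "suspect avque readings: $suspect"
```

The 1000 cutoff is an arbitrary sanity threshold, not anything from the patch; any device it flags is a candidate for "don't trust this sample" rather than a real hot spot.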
    <item>
      <title>Re: Disk IO Performance issue ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839463#M90860</link>
      <description>I agree with others who point out that Unix-based utilities can be unreliable about performance on disk arrays, SANs, etc. &lt;BR /&gt;&lt;BR /&gt;EMC has its own tool for monitoring Symmetrix performance, and I would recommend using it. It easily points out "hot" spots on the EMC disks.&lt;BR /&gt;&lt;BR /&gt;EMC also has an OVO SPI for their product that will allow you to automatically send performance alerts to OVO. You can set thresholds so alerts go out before problems occur. Nice feature.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;Marty</description>
      <pubDate>Tue, 05 Nov 2002 21:18:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839463#M90860</guid>
      <dc:creator>Martin Johnson</dc:creator>
      <dc:date>2002-11-05T21:18:45Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO Performance issue ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839464#M90861</link>
      <description>Thanks a lot to everybody.&lt;BR /&gt;&lt;BR /&gt;The first thing I suspect is the sar -d bug that patch addresses, since unreliable data from sar matches exactly the symptoms I've noticed.&lt;BR /&gt; &lt;BR /&gt;But I also agree with those who point out that I should never blindly trust those tools. I'll try to collect data using the EMC tools instead, but I'm not very familiar with them.&lt;BR /&gt;&lt;BR /&gt;Thanks again folks,&lt;BR /&gt;Eric</description>
      <pubDate>Tue, 05 Nov 2002 21:46:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-io-performance-issue/m-p/2839464#M90861</guid>
      <dc:creator>Eric Theroux</dc:creator>
      <dc:date>2002-11-05T21:46:08Z</dc:date>
    </item>
  </channel>
</rss>

