<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Disk I/O performance using gpm in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559802#M371571</link>
    <description>Hi&lt;BR /&gt;&lt;BR /&gt;I don't use gpm unless I'm instructing.  I prefer sar -d to search for a disk bottleneck.  It's very easy: when 'avwait' &amp;gt; 'avserv', you have a disk bottleneck.&lt;BR /&gt;&lt;BR /&gt;%wio is also useful.  It should be 70 to 90%.  If it's too low, like 30%, then the DBAs need to reindex their database.</description>
    <pubDate>Thu, 07 Jan 2010 01:39:30 GMT</pubDate>
    <dc:creator>Michael Steele_2</dc:creator>
    <dc:date>2010-01-07T01:39:30Z</dc:date>
    <item>
      <title>Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559801#M371570</link>
      <description>I'm trying to identify whether disk I/O is a bottleneck on an HP-UX 11.23 system connected to a SAN through two 2 Gb Fibre Channel interface cards.&lt;BR /&gt;I'm using the GlancePlus gpm utility and getting charts from Global Disk Summary and Disk Throughput.&lt;BR /&gt;On the first chart, is the Peak Disk Utilization graph of any help? These are peak values only.&lt;BR /&gt;And if so, what are the best practices here? How do I know if a value is OK?&lt;BR /&gt;The second chart provides individual disk throughput graphs. What are the best practices for interpreting these clearly?&lt;BR /&gt;Also, I've read some people in this Forum saying GlancePlus is old and won't correctly report disk performance on a SAN. If that is the case, what are the other options and how do I use them?&lt;BR /&gt;Thank you,&lt;BR /&gt;&lt;BR /&gt;Fidel</description>
      <pubDate>Thu, 07 Jan 2010 01:20:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559801#M371570</guid>
      <dc:creator>Fidel Ramirez_1</dc:creator>
      <dc:date>2010-01-07T01:20:10Z</dc:date>
    </item>
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559802#M371571</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;I don't use gpm unless I'm instructing.  I prefer sar -d to search for a disk bottleneck.  It's very easy: when 'avwait' &amp;gt; 'avserv', you have a disk bottleneck.&lt;BR /&gt;&lt;BR /&gt;%wio is also useful.  It should be 70 to 90%.  If it's too low, like 30%, then the DBAs need to reindex their database.</description>
      <pubDate>Thu, 07 Jan 2010 01:39:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559802#M371571</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2010-01-07T01:39:30Z</dc:date>
    </item>
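    <!--
    The 'avwait' > 'avserv' rule in the reply above can be sketched as a small
    parser. This is an illustration only: the column order (device, %busy,
    avque, r+w/s, blks/s, avwait, avserv) and the sample values are assumptions
    based on typical HP-UX sar -d output, so verify them on a real system.

```python
# Illustration of the rule above: flag a disk bottleneck when avwait
# exceeds avserv in sar -d output. The seven-column layout assumed here
# (device %busy avque r+w/s blks/s avwait avserv) should be checked
# against your own system before relying on the field indices.

SAMPLE = """\
device   %busy   avque   r+w/s  blks/s  avwait  avserv
c2t0d0      45     0.5      80     640    12.3     5.1
c5t1d2      30     0.5      40     320     2.0     6.8
"""

def find_bottlenecks(sar_output):
    """Return device names whose average wait time exceeds service time."""
    hot = []
    for line in sar_output.splitlines()[1:]:    # skip the header row
        fields = line.split()
        if len(fields) < 7:
            continue
        device = fields[0]
        avwait, avserv = float(fields[5]), float(fields[6])
        if avwait > avserv:                     # queued longer than serviced
            hot.append(device)
    return hot

print(find_bottlenecks(SAMPLE))
```

    In practice you would feed find_bottlenecks() the captured output of a run
    such as "sar -d 5 12" instead of the sample text.
    -->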
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559803#M371572</link>
      <description>&amp;gt;&amp;gt; GlancePlus is old and it won't correctly report disk performance on a SAN.&lt;BR /&gt;&lt;BR /&gt;No, that's not the case. You just have to know what the stats are telling you.&lt;BR /&gt;&lt;BR /&gt;In glance, "disk utilization" (and %busy in sar -d) simply tells you what percentage of time during a measurement interval a particular disk or LUN was actually servicing some sort of IO. It doesn't take into account what that disk is actually capable of doing in terms of IO, simply how much time it is busy doing something. You could easily see two disks on a system, one of which is an old 10K RPM 36GB physical disk, and another which is a LUN presented from a disk array that might actually be made up of dozens of 15K RPM disks fronted by a load of cache. To sar or glance, these devices look the same (they are both devices of type sdisk), but if they both show close to 100% utilization/%busy, what does that mean? It might mean a lot for the physical disk, but it probably isn't very important for the SAN LUN. Of course there are other stats in both glance and sar which are much more relevant, but the sar values Michael has pointed you at are a good starting point.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Thu, 07 Jan 2010 09:32:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559803#M371572</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2010-01-07T09:32:48Z</dc:date>
    </item>
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559804#M371573</link>
      <description>Thank you both, Michael and Duncan, for your responses. Duncan, I would like to know what those other relevant stats are. My idea is to get both a general and a detailed picture of disk performance using GPM. What are the best practices for that, or what simple rules of thumb give a clear picture?&lt;BR /&gt;Please let me know if you guys can see the points assigned for these responses, because I don't see them on my side.&lt;BR /&gt;Thank you again.</description>
      <pubDate>Thu, 07 Jan 2010 16:59:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559804#M371573</guid>
      <dc:creator>Fidel Ramirez_1</dc:creator>
      <dc:date>2010-01-07T16:59:46Z</dc:date>
    </item>
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559805#M371574</link>
      <description>Fidel,&lt;BR /&gt;&lt;BR /&gt;Don't see any points assigned...&lt;BR /&gt;&lt;BR /&gt;I don't use gpm, but I do use glance, and for disks I generally look at the IO by Disk screen (press 'u' in glance, or just launch glance with "glance -u").&lt;BR /&gt;&lt;BR /&gt;In there, the _real_ interesting metric is the last column (Serv Time), which is the same as "avserv" from sar -d. You want to see this below 10ms on a well-performing system, and anything over 20ms is usually a cause for concern.&lt;BR /&gt;&lt;BR /&gt;In addition, the Phys IO column is pretty interesting, as it tells us how many IOs per second the LUN is handling. A physical disk spindle is typically capable of handling over 100 IOs per second, and most struggle with over 200 IOs per second, but that's really in _very_ broad hand-waving terms, as it can depend on many factors associated with data location, disk geometry, IO sizes, etc.&lt;BR /&gt;&lt;BR /&gt;More than that is difficult to say from a "generalities" perspective. As with all performance measurement, the thing to do is have a picture of what the system looks like when performance is good, and then compare that to when you have a performance issue.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Thu, 07 Jan 2010 18:08:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559805#M371574</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2010-01-07T18:08:45Z</dc:date>
    </item>
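    <!--
    The service-time thresholds quoted above (below 10ms is healthy, over 20ms
    is usually a cause for concern) can be encoded as a small helper. The
    "watch" band between the two cutoffs is my own label, not from the thread;
    these are broad rules of thumb, not hard limits.

```python
# Rough encoding of the thresholds above: avserv / Serv Time below 10 ms
# is healthy, over 20 ms is usually a concern. The intermediate "watch"
# verdict is an added label for the borderline band.

def classify_serv_time(ms):
    """Map an average service time in milliseconds to a rough verdict."""
    if ms < 10.0:
        return "ok"
    if ms <= 20.0:
        return "watch"      # borderline: trend it over time
    return "concern"

for value in (6.5, 14.0, 25.0):
    print(value, classify_serv_time(value))
```

    A helper like this is mainly useful when scanning many LUNs at once, e.g.
    after parsing a full sar -d report.
    -->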
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559806#M371575</link>
      <description>Thank you very much, Duncan, for your precise comments on disk I/O.&lt;BR /&gt;I assigned 10 points to both of you. In case they don't show up, please let me know so I can correct it.&lt;BR /&gt;Thank you very much again.</description>
      <pubDate>Thu, 07 Jan 2010 18:18:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559806#M371575</guid>
      <dc:creator>Fidel Ramirez_1</dc:creator>
      <dc:date>2010-01-07T18:18:38Z</dc:date>
    </item>
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559807#M371576</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;Glance will let you identify the disks with high I/O.&lt;BR /&gt;&lt;BR /&gt;Then you need to take that information, along with the ioscan output that identifies the LUN, to the SAN administrator so they can look for trouble on the SAN.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 07 Jan 2010 18:31:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559807#M371576</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2010-01-07T18:31:35Z</dc:date>
    </item>
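    <!--
    The hand-off described above (pairing a busy device file with the hardware
    path a SAN administrator can locate) can be sketched as a small parser.
    This is a hedged sketch: the ioscan -fnC disk output layout and sample
    lines below are assumptions and should be verified on a real system.

```python
# Hedged sketch of the hand-off above: pair each block device file with
# its hardware path from ioscan -fnC disk output, so the SAN admin can
# locate the LUN. The two-line-per-disk layout assumed here (a "disk"
# header line, then a line of /dev/dsk and /dev/rdsk files) should be
# checked against real ioscan output before use.

SAMPLE_IOSCAN = """\
disk  4  0/0/2/0.0.0.0  sdisk  CLAIMED  DEVICE  HP 36.4GMAP3367NC
              /dev/dsk/c2t0d0  /dev/rdsk/c2t0d0
disk  5  0/0/2/0.0.1.0  sdisk  CLAIMED  DEVICE  HP OPEN-V
              /dev/dsk/c5t1d2  /dev/rdsk/c5t1d2
"""

def map_devices(ioscan_output):
    """Return {block device file: hardware path} from ioscan output."""
    mapping, hw_path = {}, None
    for line in ioscan_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "disk":
            hw_path = fields[2]                 # hardware path column
        elif hw_path and fields and fields[0].startswith("/dev/dsk/"):
            mapping[fields[0]] = hw_path        # block device file line
    return mapping

print(map_devices(SAMPLE_IOSCAN))
```

    Cross-referencing this mapping with the busy devices glance reports gives
    the SAN administrator a concrete LUN to investigate.
    -->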
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559808#M371577</link>
      <description>Thank you for your response Steven.&lt;BR /&gt;&lt;BR /&gt;Fidel</description>
      <pubDate>Thu, 07 Jan 2010 18:49:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559808#M371577</guid>
      <dc:creator>Fidel Ramirez_1</dc:creator>
      <dc:date>2010-01-07T18:49:31Z</dc:date>
    </item>
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559809#M371578</link>
      <description>You want disk utilization to be less than 50% and you want the queue to be short.</description>
      <pubDate>Fri, 08 Jan 2010 00:57:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559809#M371578</guid>
      <dc:creator>Emil Velez</dc:creator>
      <dc:date>2010-01-08T00:57:21Z</dc:date>
    </item>
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559810#M371579</link>
      <description>Hi Emil,&lt;BR /&gt;&lt;BR /&gt;I've seen that guideline before, but how would you compare a fast disk on a SAN with an old disk drive?&lt;BR /&gt;Are they both judged by the same rule?&lt;BR /&gt;Thank you. Fidel&lt;BR /&gt;</description>
      <pubDate>Fri, 08 Jan 2010 01:06:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559810#M371579</guid>
      <dc:creator>Fidel Ramirez_1</dc:creator>
      <dc:date>2010-01-08T01:06:02Z</dc:date>
    </item>
    <item>
      <title>Re: Disk I/O performance using gpm</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559811#M371580</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;Disk performance is measured by 'seek time', a metric provided by every manufacturer.  All you need is the part number; then google for it along with 'seek'.</description>
      <pubDate>Fri, 08 Jan 2010 01:28:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-i-o-performance-using-gpm/m-p/4559811#M371580</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2010-01-08T01:28:44Z</dc:date>
    </item>
  </channel>
</rss>

