<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: How do I get FC60 I/O stats in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809911#M6104</link>
    <description>I have Glance and MeasureWare on the server.  I can get by-disk metrics, but those show I/O rates, not percentages.  So I have stats, but I have no idea what the maximum I/O for a particular disk is.&lt;BR /&gt;&lt;BR /&gt;I'm not that familiar with Glance; can it give me percentages for a particular I/O card, and can it drill down to the disks and show percentages for each disk?</description>
    <pubDate>Fri, 20 Sep 2002 17:22:26 GMT</pubDate>
    <dc:creator>Sean OB_1</dc:creator>
    <dc:date>2002-09-20T17:22:26Z</dc:date>
    <item>
      <title>How do I get FC60 I/O stats</title>
      <link>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809909#M6102</link>
      <description>Hello.&lt;BR /&gt;&lt;BR /&gt;I have an L2000 running 11.00 connected to an FC60 array.&lt;BR /&gt;&lt;BR /&gt;There are two LUNs presented to the L2000 over 4 Fibre Channel paths (2 primary, 2 alternate).  Each LUN has 6 18 GB disks in a RAID 5 group.&lt;BR /&gt;&lt;BR /&gt;Right now sar -d shows the disks at 96% usage.&lt;BR /&gt;&lt;BR /&gt;Is there a way I can determine whether the bottleneck is the channels or the spindles themselves?&lt;BR /&gt;&lt;BR /&gt;TIA,&lt;BR /&gt;&lt;BR /&gt;Sean</description>
      <pubDate>Thu, 19 Sep 2002 16:52:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809909#M6102</guid>
      <dc:creator>Sean OB_1</dc:creator>
      <dc:date>2002-09-19T16:52:26Z</dc:date>
    </item>
    <item>
      <title>Re: How do I get FC60 I/O stats</title>
      <link>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809910#M6103</link>
      <description>Install GlancePlus to see real-time statistics.&lt;BR /&gt;Install MeasureWare to collect historical data; it can be very useful to you.&lt;BR /&gt;To see graphs, use PerfView.</description>
      <pubDate>Fri, 20 Sep 2002 14:34:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809910#M6103</guid>
      <dc:creator>Joaquin Gil de Vergara</dc:creator>
      <dc:date>2002-09-20T14:34:22Z</dc:date>
    </item>
    <item>
      <title>Re: How do I get FC60 I/O stats</title>
      <link>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809911#M6104</link>
      <description>I have Glance and MeasureWare on the server.  I can get by-disk metrics, but those show I/O rates, not percentages.  So I have stats, but I have no idea what the maximum I/O for a particular disk is.&lt;BR /&gt;&lt;BR /&gt;I'm not that familiar with Glance; can it give me percentages for a particular I/O card, and can it drill down to the disks and show percentages for each disk?</description>
      <pubDate>Fri, 20 Sep 2002 17:22:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809911#M6104</guid>
      <dc:creator>Sean OB_1</dc:creator>
      <dc:date>2002-09-20T17:22:26Z</dc:date>
    </item>
    <item>
      <title>Re: How do I get FC60 I/O stats</title>
      <link>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809912#M6105</link>
      <description>Have you tried any of the FC60 array manager commands?&lt;BR /&gt;They should all be in /opt/hparray/bin.&lt;BR /&gt;&lt;BR /&gt;I am nowhere near my array, or else I could have dug in some more.</description>
      <pubDate>Fri, 20 Sep 2002 23:30:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809912#M6105</guid>
      <dc:creator>Ashwani Kashyap</dc:creator>
      <dc:date>2002-09-20T23:30:57Z</dc:date>
    </item>
    <item>
      <title>Re: How do I get FC60 I/O stats</title>
      <link>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809913#M6106</link>
      <description>Sean:&lt;BR /&gt;&lt;BR /&gt;You may have to do your own benchmarking.  I/O maximums vary due to many factors; I think Glance will only show the rate, not a percentage, since a percentage requires a maximum figure for the 100% mark, and that is indeterminate.&lt;BR /&gt;&lt;BR /&gt;To do some rough benchmarking on your system, find a time when the system is quiescent and drive solid I/O to one LUN or volume group.  Use 'dd' to move data at the maximum rate:&lt;BR /&gt;&lt;BR /&gt;dd if=/dev/dsk/c?t?d? of=/dev/null bs=8192&lt;BR /&gt;&lt;BR /&gt;(for instance; the block size should equal the LUN stripe size.  Be careful with 'dd' -- get if= and of= backward and you wipe your LUN clean.)&lt;BR /&gt;&lt;BR /&gt;Watch Glance while this is going on, or use MWA to capture data every 10 seconds, etc.  This is not a good "real world" figure, since LVM cannot act like 'dd', but it gives a good idea of the maximum I/O the channel and LUN can sustain.&lt;BR /&gt;&lt;BR /&gt;Other things to try, for each LUN, to set rough benchmarks:&lt;BR /&gt;1&amp;gt; Copy a large file from each of the file systems on the array to /dev/null.&lt;BR /&gt;2&amp;gt; Copy a large file from a root volume (/tmp) to each file system on your array, and also copy it back (testing both read and write speeds).&lt;BR /&gt;3&amp;gt; Copy a large file from one array file system to another, and back.  This runs the array I/O channels in both directions at maximum (well, LVM maximum).&lt;BR /&gt;4&amp;gt; If you can do this before LVM is set up on the array, you can do all of these things with 'dd' as well, reading and writing at maximum channel and array speeds.  This is as close to the 100% figure for your system as you can get.  Most people can't use 'dd' this way, since they have LVM running on all the LUNs.&lt;BR /&gt;&lt;BR /&gt;One other thing to think about: the FC60 supports up to 31 LUNs, and a LUN is the smallest disk set that HP-UX can see.  Many people just add LUNs to VGs and run, which is all right, but with, say, two LUNs in a VG, you will fill one LUN before extents in the other LUN are used at all.  You can improve performance by having multiple LUNs in a VG and then striping across the LUNs, so all of them are used in round-robin fashion.&lt;BR /&gt;&lt;BR /&gt;This is not a novel concept, just one that is hard to do after the fact, and a lot of folks don't think about it up front.  I have seen startling performance improvements from striping across three LUNs in a VG rather than using them one at a time.&lt;BR /&gt;&lt;BR /&gt;Good luck, and be careful doing this; 'dd' makes a wonderful LVM eradicator.  "Measure twice, cut once" works for computers too.&lt;BR /&gt;&lt;BR /&gt;Regards, --bmr</description>
      <pubDate>Wed, 25 Sep 2002 14:43:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/how-do-i-get-fc60-i-o-stats/m-p/2809913#M6106</guid>
      <dc:creator>Brian M Rawlings</dc:creator>
      <dc:date>2002-09-25T14:43:29Z</dc:date>
    </item>
  </channel>
</rss>