<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: throughput question in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/throughput-question/m-p/4715593#M385398</link>
    <description>Hi Charles,&lt;BR /&gt;&lt;BR /&gt;&amp;gt; HBA 4gb.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; created R5 lun and presented it to rx6600,&lt;BR /&gt;&amp;gt; created vg and lvol (no striping).&lt;BR /&gt;&lt;BR /&gt;&amp;gt; tried to load up the filesystem with dd&lt;BR /&gt;&amp;gt; copies and max throughput seems to be around&lt;BR /&gt;&amp;gt; 380 MB/s on the dd's, both reads and writes.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; What should I be seeing with this type of&lt;BR /&gt;&amp;gt; test?&lt;BR /&gt;You mention an HBA speed of 4 Gbit.&lt;BR /&gt;&lt;BR /&gt;If only one FC HBA is connected, then of course 380 MByte/sec is about the maximum performance that can be attained: 4 Gbit divided by 10 (8b/10b line encoding plus framing overhead) = max performance = 400 MByte/sec.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; also, as an aside If I run a long-running&lt;BR /&gt;&amp;gt; db query I never see the blks/s get above&lt;BR /&gt;&amp;gt; 7710 &lt;BR /&gt;&lt;BR /&gt;&amp;gt; 11:02:29 device %busy avque r+w/s blks/s&lt;BR /&gt;&amp;gt; avwait avserv&lt;BR /&gt;&amp;gt; Average disk25 96.91 0.50 481 7710 0.00 2.02&lt;BR /&gt;&lt;BR /&gt;Disk I/O performance is mostly not "restricted" by the "HBA speed", but by the "maximum number of I/Os per second".&lt;BR /&gt;So always check how many I/Os the system is doing and what the average I/O size (in kbyte) is.&lt;BR /&gt;&lt;BR /&gt;In this case, the number of I/Os equals 481/sec, and the average I/O size is 7710 (blocks/sec) / 481 (I/Os/sec) / 2 (1 block = 512 bytes, so 2 blocks = 1 kbyte) = 8 kbyte I/Os.&lt;BR /&gt;&lt;BR /&gt;If the I/O has to come from the disks the R5 LUN consists of, instead of from the disk array's cache, then the max number of I/Os equals the number of disks in the LUN * 110 I/Os/sec.&lt;BR /&gt;&lt;BR /&gt;In the above case, the avserv is sufficiently low, 2.02 msec, that on 11.31 I would increase the max_q_depth for the disk25 LUN with scsimgr, to see if more I/Os/sec can be reached without impacting avserv too much (keep it lower than 10 msec).&lt;BR /&gt;&lt;BR /&gt;Greetz,&lt;BR /&gt;Chris</description>
    <pubDate>Fri, 19 Nov 2010 19:44:04 GMT</pubDate>
    <dc:creator>chris huys_4</dc:creator>
    <dc:date>2010-11-19T19:44:04Z</dc:date>
    <item>
      <title>throughput question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/throughput-question/m-p/4715592#M385397</link>
      <description>server: rx6600&lt;BR /&gt;direct attached to P2000&lt;BR /&gt;&lt;BR /&gt;Using only one controller&lt;BR /&gt;&lt;BR /&gt;HBA 4gb.&lt;BR /&gt;&lt;BR /&gt;created R5 lun and presented it to rx6600, created vg and lvol (no striping).&lt;BR /&gt;&lt;BR /&gt;tried to load up the filesystem with dd copies, and max throughput seems to be around 380 MB/s on the dd's, both reads and writes.&lt;BR /&gt;&lt;BR /&gt;What should I be seeing with this type of test?&lt;BR /&gt;&lt;BR /&gt;Also, as an aside: if I run a long-running db query, I never see the blks/s get above 7710.&lt;BR /&gt;&lt;BR /&gt;11:02:29   device   %busy   avque   r+w/s  blks/s  avwait  avserv&lt;BR /&gt;Average    disk25   96.91    0.50     481    7710    0.00    2.02&lt;BR /&gt;&lt;BR /&gt;it's almost like the db is not taking advantage of the speed of the array....</description>
      <pubDate>Fri, 19 Nov 2010 16:50:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/throughput-question/m-p/4715592#M385397</guid>
      <dc:creator>Charles McCary</dc:creator>
      <dc:date>2010-11-19T16:50:39Z</dc:date>
    </item>
    <item>
      <title>Re: throughput question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/throughput-question/m-p/4715593#M385398</link>
      <description>Hi Charles,&lt;BR /&gt;&lt;BR /&gt;&amp;gt; HBA 4gb.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; created R5 lun and presented it to rx6600,&lt;BR /&gt;&amp;gt; created vg and lvol (no striping).&lt;BR /&gt;&lt;BR /&gt;&amp;gt; tried to load up the filesystem with dd&lt;BR /&gt;&amp;gt; copies and max throughput seems to be around&lt;BR /&gt;&amp;gt; 380 MB/s on the dd's, both reads and writes.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; What should I be seeing with this type of&lt;BR /&gt;&amp;gt; test?&lt;BR /&gt;You mention an HBA speed of 4 Gbit.&lt;BR /&gt;&lt;BR /&gt;If only one FC HBA is connected, then of course 380 MByte/sec is about the maximum performance that can be attained: 4 Gbit divided by 10 (8b/10b line encoding plus framing overhead) = max performance = 400 MByte/sec.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; also, as an aside If I run a long-running&lt;BR /&gt;&amp;gt; db query I never see the blks/s get above&lt;BR /&gt;&amp;gt; 7710 &lt;BR /&gt;&lt;BR /&gt;&amp;gt; 11:02:29 device %busy avque r+w/s blks/s&lt;BR /&gt;&amp;gt; avwait avserv&lt;BR /&gt;&amp;gt; Average disk25 96.91 0.50 481 7710 0.00 2.02&lt;BR /&gt;&lt;BR /&gt;Disk I/O performance is mostly not "restricted" by the "HBA speed", but by the "maximum number of I/Os per second".&lt;BR /&gt;So always check how many I/Os the system is doing and what the average I/O size (in kbyte) is.&lt;BR /&gt;&lt;BR /&gt;In this case, the number of I/Os equals 481/sec, and the average I/O size is 7710 (blocks/sec) / 481 (I/Os/sec) / 2 (1 block = 512 bytes, so 2 blocks = 1 kbyte) = 8 kbyte I/Os.&lt;BR /&gt;&lt;BR /&gt;If the I/O has to come from the disks the R5 LUN consists of, instead of from the disk array's cache, then the max number of I/Os equals the number of disks in the LUN * 110 I/Os/sec.&lt;BR /&gt;&lt;BR /&gt;In the above case, the avserv is sufficiently low, 2.02 msec, that on 11.31 I would increase the max_q_depth for the disk25 LUN with scsimgr, to see if more I/Os/sec can be reached without impacting avserv too much (keep it lower than 10 msec).&lt;BR /&gt;&lt;BR /&gt;Greetz,&lt;BR /&gt;Chris</description>
      <pubDate>Fri, 19 Nov 2010 19:44:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/throughput-question/m-p/4715593#M385398</guid>
      <dc:creator>chris huys_4</dc:creator>
      <dc:date>2010-11-19T19:44:04Z</dc:date>
    </item>
    <item>
      <title>Re: throughput question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/throughput-question/m-p/4715594#M385399</link>
      <description>Chris,&lt;BR /&gt;&lt;BR /&gt;thanks for the info... this particular lun has 10 disks.&lt;BR /&gt;&lt;BR /&gt;10 * 110 = 1100 I/Os per second?&lt;BR /&gt;&lt;BR /&gt;where does the 110 come from again?&lt;BR /&gt;&lt;BR /&gt;Also, I've created an Oracle data vg with one large LUN instead of several smaller LUNs. Should I be concerned with the avserv time for this lun in this config?</description>
      <pubDate>Fri, 19 Nov 2010 19:52:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/throughput-question/m-p/4715594#M385399</guid>
      <dc:creator>Charles McCary</dc:creator>
      <dc:date>2010-11-19T19:52:04Z</dc:date>
    </item>
    <item>
      <title>Re: throughput question</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/throughput-question/m-p/4715595#M385400</link>
      <description>Hi Charles,&lt;BR /&gt;&lt;BR /&gt;&amp;gt; 10 * 110 = 1100 I/Os per second?&lt;BR /&gt;&lt;BR /&gt;Something like that. The RAID level also plays a role, but I don't have immediate rules of thumb for how it impacts performance ;)&lt;BR /&gt;&lt;BR /&gt;&amp;gt; where does the 110 come from again?&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=63977" target="_blank"&gt;http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=63977&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;"Remember, the average access time for a disk is about 8,000 to 10,000us (8-10 milliseconds)"&lt;BR /&gt;&lt;BR /&gt;8-10 milliseconds per I/O for a disk means about 100-110 I/Os/sec. The above was for the FC disks of a va4710 disk array.&lt;BR /&gt;&lt;BR /&gt;More expensive "(FC) SAS disks" do more I/Os per second (IOPS), something like 160-180 (not too sure about the 160-180, could be more or less). And of course SSDs do a lot more IOPS, at least for reading; I've seen figures of over 2200 IOPS (but of course very expensive...)&lt;BR /&gt;&lt;BR /&gt;Greetz,&lt;BR /&gt;Chris</description>
      <pubDate>Sat, 20 Nov 2010 23:44:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/throughput-question/m-p/4715595#M385400</guid>
      <dc:creator>chris huys_4</dc:creator>
      <dc:date>2010-11-20T23:44:32Z</dc:date>
    </item>
  </channel>
</rss>

