<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: xp512 Performance problems in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/xp512-performance-problems/m-p/2893914#M7308</link>
    <description>We have a similar (or the same) problem with our XP512, though ours is a direct connection. PerfView clearly shows that every logical volume hits a throughput ceiling of about 33MB/sec. CPU, service times, request queue - all of these vary over time without an upper bound (well, CPU occasionally hits 100%). The LVOL_READ_BYTE_RATE shows a hard limit.&lt;BR /&gt;&lt;BR /&gt;Any ideas on raising this limit would be much appreciated. I was not able to link to the document in the previous reply - if someone would attach it, that might help.&lt;BR /&gt;&lt;BR /&gt;Thomas</description>
    <pubDate>Fri, 23 May 2003 20:37:34 GMT</pubDate>
    <dc:creator>Tom Williams_3</dc:creator>
    <dc:date>2003-05-23T20:37:34Z</dc:date>
    <item>
      <title>xp512 Performance problems</title>
      <link>https://community.hpe.com/t5/disk-enclosures/xp512-performance-problems/m-p/2893912#M7306</link>
      <description>Good day to you all - looking for some pointers.&lt;BR /&gt;&lt;BR /&gt;We have just recently upgraded from an xp256 to an xp512 &amp;amp; surprisingly have noticed a drop in performance. The host we are most concerned with has 1Gb fibre cards connected to a core Brocade switch, which is in turn connected to the xp512. The 512 is laid out in such a manner that each sequential ldev falls on the next RAID group (i.e. one ldev spans the four disks in a RAID group, two will span eight disks). The ldevs are set as OPEN-8s. Unfortunately, at present we are unable to get any more than 30MB/s through our Brocade. We know it can't be the SAN, as we've tried a direct connection. We've also connected a SUN box &amp;amp; managed 100MB/s, so I'm pretty convinced we need to patch LVM or tweak a kernel parameter...&lt;BR /&gt;&lt;BR /&gt;Any ideas?</description>
      <pubDate>Fri, 31 Jan 2003 14:17:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/xp512-performance-problems/m-p/2893912#M7306</guid>
      <dc:creator>Darren Chambers_1</dc:creator>
      <dc:date>2003-01-31T14:17:36Z</dc:date>
    </item>
    <item>
      <title>Re: xp512 Performance problems</title>
      <link>https://community.hpe.com/t5/disk-enclosures/xp512-performance-problems/m-p/2893913#M7307</link>
      <description>How are your primary/alternate links laid out to the ports on the XP? Remember that each pair of ports on the XP is controlled by a pair of i960 processors: one handles odd LUNs and the other handles even LUNs. If your LUN distribution is such that all the primary links to one port are even, then you might see poor performance like you describe.&lt;BR /&gt;&lt;BR /&gt;If the Sun is using VxVM with DMP, then the IO is load-balanced via a round-robin algorithm, which would explain why it doesn't suffer the same problem.&lt;BR /&gt;&lt;BR /&gt;Not sure if you can get at this link, but you should read this doc if you can:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www4.itrc.hp.com/service/iv/docDisplay.do?docId=/DE_SW_UX_swrec_EN_01_E/XP-Diskarrays.pdf" target="_blank"&gt;http://www4.itrc.hp.com/service/iv/docDisplay.do?docId=/DE_SW_UX_swrec_EN_01_E/XP-Diskarrays.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Fri, 31 Jan 2003 14:37:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/xp512-performance-problems/m-p/2893913#M7307</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2003-01-31T14:37:42Z</dc:date>
    </item>
    <item>
      <title>Re: xp512 Performance problems</title>
      <link>https://community.hpe.com/t5/disk-enclosures/xp512-performance-problems/m-p/2893914#M7308</link>
      <description>We have a similar (or the same) problem with our XP512, though ours is a direct connection. PerfView clearly shows that every logical volume hits a throughput ceiling of about 33MB/sec. CPU, service times, request queue - all of these vary over time without an upper bound (well, CPU occasionally hits 100%). The LVOL_READ_BYTE_RATE shows a hard limit.&lt;BR /&gt;&lt;BR /&gt;Any ideas on raising this limit would be much appreciated. I was not able to link to the document in the previous reply - if someone would attach it, that might help.&lt;BR /&gt;&lt;BR /&gt;Thomas</description>
      <pubDate>Fri, 23 May 2003 20:37:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/xp512-performance-problems/m-p/2893914#M7308</guid>
      <dc:creator>Tom Williams_3</dc:creator>
      <dc:date>2003-05-23T20:37:34Z</dc:date>
    </item>
    <item>
      <title>Re: xp512 Performance problems</title>
      <link>https://community.hpe.com/t5/disk-enclosures/xp512-performance-problems/m-p/2893915#M7309</link>
      <description>Answering my own question: HP confirms a bug in MeasureWare that limits the counter to 32767 (32K - 1, a signed 16-bit maximum). It doesn't affect performance, just the counter. The fix just got sent to the low-priority queue :(</description>
      <pubDate>Thu, 03 Jul 2003 15:10:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/xp512-performance-problems/m-p/2893915#M7309</guid>
      <dc:creator>Tom Williams_3</dc:creator>
      <dc:date>2003-07-03T15:10:09Z</dc:date>
    </item>
  </channel>
</rss>

