<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Disk Performance Issue in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/disk-performance-issue/m-p/3394058#M13807</link>
    <description>You may be able to improve performance by using ioctl to increase the queue depth from the default of 8 to something greater, such as 16 or 32.  Frames like EMC are usually slow on a per-LUN basis, so people do LVM striping across multiple LUNs to gain performance.  Your dd results won't tell you file system performance, which is what your users are most likely seeing.  You can use SAM to change the mount options to improve performance, but be aware that the downside is an increased risk of data loss in the event of a crash or power failure.  If you are using PVLinks, you can do static load balancing by putting the primary path for some LUNs on path 1 and the primary path for the other LUNs on path 2.</description>
    <pubDate>Thu, 07 Oct 2004 17:57:21 GMT</pubDate>
    <dc:creator>Ted Buis</dc:creator>
    <dc:date>2004-10-07T17:57:21Z</dc:date>
    <item>
      <title>Disk Performance Issue</title>
      <link>https://community.hpe.com/t5/disk-enclosures/disk-performance-issue/m-p/3394057#M13806</link>
      <description>Hi. I'm using EMC storage connected through a fibre card. &lt;BR /&gt;My users are complaining that read/write is very slow,&lt;BR /&gt;so I ran a test using the dd command&lt;BR /&gt;and opened sar -d at the same time.&lt;BR /&gt;I have put the results, along with my kernel info, in the attachment.&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Tue, 05 Oct 2004 20:42:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/disk-performance-issue/m-p/3394057#M13806</guid>
      <dc:creator>Muda Ikhsan_3</dc:creator>
      <dc:date>2004-10-05T20:42:33Z</dc:date>
    </item>
    <item>
      <title>Re: Disk Performance Issue</title>
      <link>https://community.hpe.com/t5/disk-enclosures/disk-performance-issue/m-p/3394058#M13807</link>
      <description>You may be able to improve performance by using ioctl to increase the queue depth from the default of 8 to something greater, such as 16 or 32.  Frames like EMC are usually slow on a per-LUN basis, so people do LVM striping across multiple LUNs to gain performance.  Your dd results won't tell you file system performance, which is what your users are most likely seeing.  You can use SAM to change the mount options to improve performance, but be aware that the downside is an increased risk of data loss in the event of a crash or power failure.  If you are using PVLinks, you can do static load balancing by putting the primary path for some LUNs on path 1 and the primary path for the other LUNs on path 2.</description>
      <pubDate>Thu, 07 Oct 2004 17:57:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/disk-performance-issue/m-p/3394058#M13807</guid>
      <dc:creator>Ted Buis</dc:creator>
      <dc:date>2004-10-07T17:57:21Z</dc:date>
    </item>
  </channel>
</rss>

