<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic EVA 3000 performance in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/eva-3000-performance/m-p/3453203#M71652</link>
    <description>Hello,&lt;BR /&gt;&lt;BR /&gt;We have a SAN built using HP EVA3000 controllers. The storage is configured into three virtual disks (VRaid5) of 450MB each. These virtual disks are presented to Linux servers attached to the fibre network as SCSI disks sda, sdb and sdc respectively.&lt;BR /&gt;&lt;BR /&gt;HP EVA3000 product data sheet:&lt;BR /&gt;&lt;A href="ftp://ftp.compaq.com/pub/products/storageworks/eva3000/5982-6587EN.pdf" target="_blank"&gt;ftp://ftp.compaq.com/pub/products/storageworks/eva3000/5982-6587EN.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;The product data sheet states:&lt;BR /&gt;Sustained I/O and MB throughput: up to 141K IOPS and up to 335 MB/sec throughput per EVA3000 controller pair&lt;BR /&gt;&lt;BR /&gt;root@node1:/mnt/eva3 20&amp;gt; /sbin/hdparm -Tt /dev/sdc&lt;BR /&gt;&lt;BR /&gt;/dev/sdc:&lt;BR /&gt; Timing buffer-cache reads:   2620 MB in  2.00 seconds = 1310.00 MB/sec&lt;BR /&gt; Timing buffered disk reads:  496 MB in  3.00 seconds = 165.33 MB/sec&lt;BR /&gt;&lt;BR /&gt;The data sheet quotes 335 MB/sec throughput per controller pair. Disk /dev/sdc is served by only one controller, so its throughput is half of 335, approx. 165 MB/sec. Is my understanding correct?&lt;BR /&gt;&lt;BR /&gt;Bonnie++ output indicates approx. 66 MB/s throughput for Sequential-Input, block (reads). Shouldn't this number be close to 165 MB/sec?&lt;BR /&gt;The Linux server is equipped with a 1GB HBA, so that may not be the bottleneck. Filesystem overhead might also contribute to the lower throughput. The SCSI disk "sdc" is configured as RAID5 on the storage, so additional reads/writes might be issued for storing parity data. Considering this overhead, the actual throughput achieved might be less than 165 MB/s for writes, but shouldn't it be close to 165 for reads?&lt;BR /&gt;&lt;BR /&gt;The SAN storage will basically be used to store many data files of 35-70MB each, and clients will copy several MB of data to the storage every day. How should I start tuning the storage/Linux server to improve performance in my case? Please share your thoughts!&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Raj</description>
    <pubDate>Fri, 31 Dec 2004 01:53:46 GMT</pubDate>
    <dc:creator>Raj Kumar_1</dc:creator>
    <dc:date>2004-12-31T01:53:46Z</dc:date>
    <item>
      <title>EVA 3000 performance</title>
      <link>https://community.hpe.com/t5/operating-system-linux/eva-3000-performance/m-p/3453203#M71652</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;We have a SAN built using HP EVA3000 controllers. The storage is configured into three virtual disks (VRaid5) of 450MB each. These virtual disks are presented to Linux servers attached to the fibre network as SCSI disks sda, sdb and sdc respectively.&lt;BR /&gt;&lt;BR /&gt;HP EVA3000 product data sheet:&lt;BR /&gt;&lt;A href="ftp://ftp.compaq.com/pub/products/storageworks/eva3000/5982-6587EN.pdf" target="_blank"&gt;ftp://ftp.compaq.com/pub/products/storageworks/eva3000/5982-6587EN.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;The product data sheet states:&lt;BR /&gt;Sustained I/O and MB throughput: up to 141K IOPS and up to 335 MB/sec throughput per EVA3000 controller pair&lt;BR /&gt;&lt;BR /&gt;root@node1:/mnt/eva3 20&amp;gt; /sbin/hdparm -Tt /dev/sdc&lt;BR /&gt;&lt;BR /&gt;/dev/sdc:&lt;BR /&gt; Timing buffer-cache reads:   2620 MB in  2.00 seconds = 1310.00 MB/sec&lt;BR /&gt; Timing buffered disk reads:  496 MB in  3.00 seconds = 165.33 MB/sec&lt;BR /&gt;&lt;BR /&gt;The data sheet quotes 335 MB/sec throughput per controller pair. Disk /dev/sdc is served by only one controller, so its throughput is half of 335, approx. 165 MB/sec. Is my understanding correct?&lt;BR /&gt;&lt;BR /&gt;Bonnie++ output indicates approx. 66 MB/s throughput for Sequential-Input, block (reads). Shouldn't this number be close to 165 MB/sec?&lt;BR /&gt;The Linux server is equipped with a 1GB HBA, so that may not be the bottleneck. Filesystem overhead might also contribute to the lower throughput. The SCSI disk "sdc" is configured as RAID5 on the storage, so additional reads/writes might be issued for storing parity data. Considering this overhead, the actual throughput achieved might be less than 165 MB/s for writes, but shouldn't it be close to 165 for reads?&lt;BR /&gt;&lt;BR /&gt;The SAN storage will basically be used to store many data files of 35-70MB each, and clients will copy several MB of data to the storage every day. How should I start tuning the storage/Linux server to improve performance in my case? Please share your thoughts!&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Raj</description>
      <pubDate>Fri, 31 Dec 2004 01:53:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/eva-3000-performance/m-p/3453203#M71652</guid>
      <dc:creator>Raj Kumar_1</dc:creator>
      <dc:date>2004-12-31T01:53:46Z</dc:date>
    </item>
    <item>
      <title>Re: EVA 3000 performance</title>
      <link>https://community.hpe.com/t5/operating-system-linux/eva-3000-performance/m-p/3453204#M71653</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I noticed too that bonnie++ did not fully utilize the available FC connection.&lt;BR /&gt;&lt;BR /&gt;Please also check by running top whether bonnie++ takes all available CPU time in comparison to hdparm, and make sure you use a reasonably big file for bonnie. Additionally, try a test where you run more than one bonnie, perhaps three in parallel.&lt;BR /&gt;&lt;BR /&gt;Bye&lt;BR /&gt;&lt;BR /&gt;Oli</description>
      <pubDate>Mon, 03 Jan 2005 04:29:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/eva-3000-performance/m-p/3453204#M71653</guid>
      <dc:creator>Oliver Schwank</dc:creator>
      <dc:date>2005-01-03T04:29:30Z</dc:date>
    </item>
    <item>
      <title>Re: EVA 3000 performance</title>
      <link>https://community.hpe.com/t5/operating-system-linux/eva-3000-performance/m-p/3453205#M71654</link>
      <description>Data sheets always specify the maximum possible performance. Performance to the array's cache is always much faster than sustained performance to the drives, so you want to understand whether your benchmarks and the specifications avoid the cache, or make unrealistic use of the cache compared to your application.&lt;BR /&gt;&lt;BR /&gt;Performance on the EVA is always faster if you don't mirror the cache, which is generally a bad idea but can be appropriate for certain temp files. Write performance is fastest with mirrored data, using all the disks possible in the array within a single vRAID group.&lt;BR /&gt;&lt;BR /&gt;So, to answer your question: no, I never believe you can expect to see real-world performance close to the maximum rates. But to maximize write performance, mirror the data and use the maximum number of fast disks possible in the array.</description>
      <pubDate>Tue, 04 Jan 2005 11:45:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/eva-3000-performance/m-p/3453205#M71654</guid>
      <dc:creator>Ted Buis</dc:creator>
      <dc:date>2005-01-04T11:45:59Z</dc:date>
    </item>
  </channel>
</rss>

