<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Poor Write performance HP SAN MSA 2040 in HPE MSA Storage</title>
    <link>https://community.hpe.com/t5/hpe-msa-storage/poor-write-performance-hp-san-msa-2040/m-p/6927378#M10430</link>
    <description>&lt;P&gt;If I read this correctly, you have disk groups with 2 disks each in RAID1. This means &lt;U&gt;&lt;STRONG&gt;your effective write speed is that of a *single* disk only&lt;/STRONG&gt;&lt;/U&gt;, since every write is mirrored to both drives. The SSDs are used as read cache, so they can't accelerate writes. Only the controller cache can (a bit). Once the cache is full, you write directly to a single physical disk.&lt;/P&gt;&lt;P&gt;Increasing the number of disks (at least test with all 6 of your disks in RAID10) or even tiering with the SSDs will increase performance, I'm sure.&lt;/P&gt;</description>
    <pubDate>Wed, 21 Dec 2016 16:32:41 GMT</pubDate>
    <dc:creator>Torsten.</dc:creator>
    <dc:date>2016-12-21T16:32:41Z</dc:date>
    <item>
      <title>Poor Write performance HP SAN MSA 2040</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/poor-write-performance-hp-san-msa-2040/m-p/6926723#M10427</link>
      <description>&lt;P&gt;Hi, I'm experiencing poor write performance on an MSA 2040 with 8 disks, configured as:&lt;BR /&gt;&lt;BR /&gt;2 x 200GB SSD (read cache)&lt;BR /&gt;6 x 900GB (12G DP 10K) in RAID1 pairs&lt;BR /&gt;&lt;BR /&gt;I attach some screenshots of the configuration.&lt;BR /&gt;&lt;A title="-System1.png (39 KB)" href="https://filedb.experts-exchange.com/incoming/2016/12_w52/1134186/-System1.png" target="_blank"&gt;&lt;IMG src="https://filedb.experts-exchange.com/incoming/2016/12_w52/800_1134186/-System1.png" alt="-System1.png" border="0" /&gt;&lt;/A&gt;&lt;A title="-System2.png (25 KB)" href="https://filedb.experts-exchange.com/incoming/2016/12_w52/1134187/-System2.png" target="_blank"&gt;&lt;IMG src="https://filedb.experts-exchange.com/incoming/2016/12_w52/800_1134187/-System2.png" alt="-System2.png" border="0" /&gt;&lt;/A&gt;&lt;A title="-Pool1.png (25 KB)" href="https://filedb.experts-exchange.com/incoming/2016/12_w52/1134188/-Pool1.png" target="_blank"&gt;&lt;IMG src="https://filedb.experts-exchange.com/incoming/2016/12_w52/800_1134188/-Pool1.png" alt="-Pool1.png" border="0" /&gt;&lt;/A&gt;&lt;A title="-Pools2.png (23 KB)" href="https://filedb.experts-exchange.com/incoming/2016/12_w52/1134189/-Pools2.png" target="_blank"&gt;&lt;IMG src="https://filedb.experts-exchange.com/incoming/2016/12_w52/800_1134189/-Pools2.png" alt="-Pools2.png" border="0" /&gt;&lt;/A&gt;&lt;BR /&gt;&lt;A title="-Home1.png (16 KB)" href="https://filedb.experts-exchange.com/incoming/2016/12_w52/1134190/-Home1.png" target="_blank"&gt;&lt;IMG src="https://filedb.experts-exchange.com/incoming/2016/12_w52/800_1134190/-Home1.png" alt="-Home1.png" border="0" /&gt;&lt;/A&gt;&lt;BR /&gt;(Why is there a red LED indicator on B2-FC? In the details I see the same speed as the other ports, 16 Gb.)&lt;BR /&gt;&lt;A title="-B2-FC.png (11 KB)" href="https://filedb.experts-exchange.com/incoming/2016/12_w52/1134191/-B2-FC.png" target="_blank"&gt;&lt;IMG src="https://filedb.experts-exchange.com/incoming/2016/12_w52/1134191/-B2-FC.png" alt="-B2-FC.png" border="0" /&gt;&lt;/A&gt;&lt;BR /&gt;Running a write test and benchmark from a VM, I get this speed:&lt;BR /&gt;&lt;A title="-BenchMark-VM.png (66 KB)" href="https://filedb.experts-exchange.com/incoming/2016/12_w52/1134192/-BenchMark-VM.png" target="_blank"&gt;&lt;IMG src="https://filedb.experts-exchange.com/incoming/2016/12_w52/1134192/-BenchMark-VM.png" alt="-BenchMark-VM.png" border="0" /&gt;&lt;/A&gt;&lt;BR /&gt;That's poor performance, isn't it?&lt;BR /&gt;&lt;BR /&gt;Can I speed up my system, or is this the maximum limit?&lt;BR /&gt;Any help or suggestion is appreciated.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;Mirko&lt;/P&gt;</description>
      <pubDate>Mon, 19 Dec 2016 13:51:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/poor-write-performance-hp-san-msa-2040/m-p/6926723#M10427</guid>
      <dc:creator>MirkoK</dc:creator>
      <dc:date>2016-12-19T13:51:40Z</dc:date>
    </item>
    <item>
      <title>Re: Poor Write performance HP SAN MSA 2040</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/poor-write-performance-hp-san-msa-2040/m-p/6927378#M10430</link>
      <description>&lt;P&gt;If I read this correctly, you have disk groups with 2 disks each in RAID1. This means &lt;U&gt;&lt;STRONG&gt;your effective write speed is that of a *single* disk only&lt;/STRONG&gt;&lt;/U&gt;, since every write is mirrored to both drives. The SSDs are used as read cache, so they can't accelerate writes. Only the controller cache can (a bit). Once the cache is full, you write directly to a single physical disk.&lt;/P&gt;&lt;P&gt;Increasing the number of disks (at least test with all 6 of your disks in RAID10) or even tiering with the SSDs will increase performance, I'm sure.&lt;/P&gt;</description>
      <pubDate>Wed, 21 Dec 2016 16:32:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/poor-write-performance-hp-san-msa-2040/m-p/6927378#M10430</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2016-12-21T16:32:41Z</dc:date>
    </item>
    <item>
      <title>Re: Poor Write performance HP SAN MSA 2040</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/poor-write-performance-hp-san-msa-2040/m-p/6927447#M10431</link>
      <description>&lt;P&gt;I asked my experts and got a very thorough reply from one of our MSA engineering managers. &amp;nbsp;Here's what he said:&lt;/P&gt;
&lt;P&gt;It appears you are testing against a POOL with 2x RAID 1 Disk-Groups and 1x 200GB READ-CACHE. This will not be an incredibly performant system. You are basically getting the WRITE speed capability of 2 spinning media drives, minus some overhead for the duplicate WRITEs RAID 1 has to do.&lt;/P&gt;
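&lt;P&gt;To make that arithmetic concrete, here is a rough back-of-the-envelope sketch. The per-drive figure is an assumption for illustration (roughly what a 10K SAS drive sustains on small random WRITEs), not a measured value:&lt;/P&gt;
&lt;PRE&gt;
# Back-of-the-envelope WRITE capability of the pool layouts discussed.
# ASSUMPTION: ~150 random 4k write IOPS per 10K SAS drive; real drives vary.
PER_DRIVE_IOPS = 150

def raid1_pairs_write_iops(num_pairs):
    # RAID 1 mirrors every write, so each 2-disk pair delivers
    # roughly one drive's worth of write IOPS.
    return num_pairs * PER_DRIVE_IOPS

def raid10_write_iops(num_drives):
    # RAID 10 stripes across num_drives / 2 mirrored pairs.
    return (num_drives // 2) * PER_DRIVE_IOPS

print(raid1_pairs_write_iops(2))  # tested pool, 2x RAID 1: ~300 IOPS
print(raid10_write_iops(6))       # 6-drive RAID 10:        ~450 IOPS
&lt;/PRE&gt;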
&lt;P&gt;Now, that said, if you break down the test parameters, I think there are some problems there as well.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Firstly, the drive being tested is 40GiB in size, on a POOL that is 1800GB in size (2x 900GB RAID 1 disk-groups). You are “short stroking” the drives: the physical disks hardly have to move their heads to reach all the data, which reduces ‘seek’ times and inflates performance.&lt;/LI&gt;
&lt;LI&gt;Secondly, the test size is 1GiB. The MSA 2040 has 4GiB of cache, so once the test has run enough times all the data will reside in cache for the sequential passes, and the random passes will see a high percentage of cache hits.&lt;/LI&gt;
&lt;LI&gt;Lastly, let’s look at the actual data. In the case of a queue depth of 1 and 1 thread (the last line in the test results), we are getting ~15MB/s of throughput on a 4k-block random test. Crunching the numbers, that works out to ~3,700 IOPS (see the quick sanity check after this list). REALLY?!?! From 4 spinning drives, when 2 of them are duplicate WRITEs? I would say you are getting a MASSIVE boost from the WRITE CACHE on the MSA; once the caching effects are defeated, the WRITE performance will actually go down from here. For a random small-block test it is less interesting to look at throughput (MB/s) and more important to look at IOPS, as those are the actual pieces of data needing processing.&lt;/LI&gt;
&lt;/UL&gt;
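&lt;P&gt;As a quick sanity check on that last point, the conversion from throughput to IOPS is just division (a sketch; the 15 MB/s figure is read off the benchmark screenshot):&lt;/P&gt;
&lt;PRE&gt;
# Convert the measured 4k random throughput into IOPS.
throughput_bytes_per_s = 15 * 1000 * 1000  # ~15 MB/s on the Q1/T1 line
block_size = 4 * 1024                      # 4 KiB blocks
iops = throughput_bytes_per_s / block_size
print(round(iops))  # ~3662, i.e. the ~3,700 IOPS quoted above; far beyond
                    # what 4 spindles can sustain, so the WRITE CACHE must
                    # be absorbing most of it.
&lt;/PRE&gt;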
&lt;P&gt;It appears to me that CrystalDiskMark may be a good tool for testing an individual disk, but&lt;U&gt; it is not designed to scale to a large disk array&lt;/U&gt;.&lt;/P&gt;
&lt;P&gt;I would suggest a different tool.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;IOMeter (&lt;A href="http://www.iometer.org/" target="_blank"&gt;http://www.iometer.org/&lt;/A&gt;) would do well and is used in a lot of benchmarks.&lt;/LI&gt;
&lt;LI&gt;IOZone (&lt;A href="http://www.iozone.org/" target="_blank"&gt;http://www.iozone.org/&lt;/A&gt;) is a nice tool that will show you the effects of the different levels of cache by running a sweep of both file sizes and record (block) sizes.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Unfortunately, both of these tools have lots of knobs and dials that you have to understand; for a rough first number you can start with something as simple as the sketch below.&lt;/P&gt;
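&lt;P&gt;The mount path here is hypothetical, and this writes through the OS page cache (no direct I/O), so it only gives a rough sequential figure. It does, however, write well past the 4GiB controller cache and fsync at the end, so the average is not purely a cache number:&lt;/P&gt;
&lt;PRE&gt;
import os
import time

PATH = "/mnt/msa_lun/testfile"  # hypothetical path on a LUN from the MSA
SIZE = 8 * 1024**3              # 8 GiB, twice the 4 GiB controller cache
CHUNK = 1024 * 1024             # 1 MiB sequential writes

buf = os.urandom(CHUNK)
start = time.time()
with open(PATH, "wb") as f:
    written = 0
    while written &lt; SIZE:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())        # flush the OS page cache to the array
elapsed = time.time() - start
print(f"{SIZE / elapsed / 1e6:.1f} MB/s sequential write")
&lt;/PRE&gt;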
&lt;P&gt;You also asked if there is a way to increase the performance. The main answer, as Torsten also suggested, is more spindles.&lt;/P&gt;
&lt;P&gt;For a real-world workload, what I would suggest is to go with a single POOL of data. For future expandability it might be best to use 6-drive RAID 10 disk-groups, as you can only put 16 Disk-Groups in a pool.&lt;/P&gt;
&lt;P&gt;Pluses:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You use all your spindles for all data; this will boost performance (after caching effects).&lt;/LI&gt;
&lt;LI&gt;You can use BOTH SSDs in one POOL for READ-CACHE, giving a higher percentage of READ-CACHE overall.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Minus:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You lose the additional CACHE and processing power of the second controller.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Hope that is useful to you.&lt;/P&gt;</description>
      <pubDate>Wed, 21 Dec 2016 22:09:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/poor-write-performance-hp-san-msa-2040/m-p/6927447#M10431</guid>
      <dc:creator>CalvinZito</dc:creator>
      <dc:date>2016-12-21T22:09:31Z</dc:date>
    </item>
  </channel>
</rss>

