MSA Storage
Poor Write performance HP SAN MSA 2040

 
MirkoK
Occasional Visitor

Poor Write performance HP SAN MSA 2040

Hi, I'm experiencing poor write performance on an MSA 2040 with 8 disks, configured as:

2 x 200GB SSD (read cache)
6 x 900GB (12G DP 10K) as RAID 1 pairs

I've attached some screenshots of the configuration:
- System1.png
- System2.png
- Pool1.png
- Pools2.png
- Home1.png

(Why is there a red LED indicator on B2-FC? In the port details I see the same speed as the other ports, 16Gb.)
- B2-FC.png

Running a write test and benchmark from a VM, I get these speeds:
- BenchMark-VM.png

That's poor performance, isn't it?

Can I speed up my system, or is this already its maximum?
Any help or suggestion is appreciated.

Thanks
Mirko

2 REPLIES
Torsten.
Acclaimed Contributor

Re: Poor Write performance HP SAN MSA 2040

If I read this correctly, you have disk groups with 2 disks each in RAID 1, which means all your writes go to a *single* disk (plus its mirror). The SSDs are used as read cache, so they can't accelerate writes; only the controller cache can (a bit). Once the cache is full, you write directly to a single physical disk.

Increasing the number of disks (at least test with all 6 of your disks in RAID 10) or even tiering with the SSDs will increase the performance, I'm sure.
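
As a rough back-of-envelope illustration of why more spindles help (the ~150 random IOPS per 10K drive and the little helper below are illustrative assumptions, not measured MSA figures):

    # Rough random-write IOPS estimate for mirrored disk groups.
    # Assumes ~150 random IOPS per 10K SAS spindle (illustrative only).
    PER_DISK_IOPS = 150        # assumed random IOPS of one 10K drive
    RAID1_WRITE_PENALTY = 2    # every host write becomes two disk writes

    def mirrored_write_iops(disk_count):
        """Host-visible random-write IOPS for a RAID 1 / RAID 10 set."""
        return disk_count * PER_DISK_IOPS / RAID1_WRITE_PENALTY

    print(mirrored_write_iops(2))   # one RAID 1 pair  -> ~150 write IOPS
    print(mirrored_write_iops(6))   # 6-disk RAID 10   -> ~450 write IOPS

In other words, a single RAID 1 pair gives you roughly one disk's worth of write IOPS, while a 6-disk RAID 10 gives you roughly three.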


Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
HPEStorageGuy
Neighborhood Admin

Re: Poor Write performance HP SAN MSA 2040

I asked my experts and got a very thorough reply from one of our MSA engineering managers.  Here's what he said:

It appears you are testing against a POOL with 2x RAID 1 Disk-Groups, with 1x 200GB READ-CACHE.  This will not be an incredibly performant system.  You are basically getting the WRITE speed capability of 2 spinning media drives,  minus some overhead for having to do double WRITEs for the RAID 1.

Now that said if you break down the test parameters I think there are some problems there as well.

  • Firstly, the drive being tested is 40GiB in size. This is on a POOL of 1800GB (2x 900GB RAID 1 disk-groups). You are “short stroking” the drives; the physical disks in this case hardly have to move the heads to get to all the data. This results in reduced ‘seek’ times and improved performance.
  • Secondly, the test size is 1GiB. The MSA 2040 has 4GiB of cache, so once this test is run enough times, all the data will reside in cache for the sequential tests and will result in a high percentage of cache hits for the random tests.
  • Lastly, let’s look at the actual data. With a queue depth of 1 and 1 thread (the last line in the test results), we are getting ~15MB/s of throughput on a 4k-block random test. Crunching the numbers, that works out to ~3700 IOPS (see the quick calculation after this list). REALLY?!?! From 4 spinning drives when 2 of them are duplicate WRITEs? I would say that you are getting a MASSIVE boost in performance from the WRITE CACHE on the MSA. Once the caching effects were defeated, the WRITE performance would actually go down from here. When looking at a random small-block test, it’s less interesting to look at throughput (MB/s) and more important to look at IOPS, as these are the actual bits of data needing processing.
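
For reference, here is how the ~3700 IOPS figure falls out of the throughput in the screenshot (a quick sketch using the 15MB/s and 4k numbers quoted above):

    # Convert small-block throughput to IOPS: IOPS = throughput / block size.
    def iops_from_throughput(mb_per_s, block_bytes):
        return mb_per_s * 1_000_000 / block_bytes

    print(round(iops_from_throughput(15, 4096)))   # ~3662 IOPS from 15MB/s at 4KiB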

It appears to me that CrystalDiskMark may be a good tool for testing an individual disk, but it is not designed to scale to a large disk array.

I would suggest a different tool. 

  • IOMeter (http://www.iometer.org/) would do well and is used in a lot of benchmarks.
  • IOZone (http://www.iozone.org/) is a nice tool that will show you the effects of different levels of cache by running a sweep of both IO sizes and block sizes.

Unfortunately both of these tools have lots of knobs and dials which you have to understand.
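
If you just want a quick cross-check before learning those tools, a minimal sketch along the following lines (the file path and sizes are placeholders, and this is in no way a substitute for IOMeter or IOzone) writes more data than the 4GiB controller cache can absorb, which avoids the test-size problem described above:

    # Minimal sequential-write sanity check: write more data than the 4GiB
    # controller cache so the result is not just a cache-speed number.
    import os, time

    TEST_FILE = "/path/on/the/tested/LUN/msa_write_test.bin"  # placeholder path
    CHUNK = b"\0" * (1 << 20)    # 1 MiB per write
    TOTAL_MIB = 8192             # 8 GiB total, comfortably above the 4 GiB cache

    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL_MIB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())     # force the data out to the array
    elapsed = time.time() - start
    print(f"{TOTAL_MIB / elapsed:.1f} MiB/s sequential write")
    os.remove(TEST_FILE)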

You also ask if there is a way to increase the performance. The main answer there would be, as Torsten also suggested, more spindles.

In a real-world workload, what I would suggest is to go with a single POOL of data. And for expandability in the future it might be best to use a 6-drive RAID 10, as you can only put 16 Disk-Groups per pool.

Pluses: 

  • You use all your spindles for all data, which will boost performance (after caching effects).
  • You can use BOTH SSDs in one POOL for READ-CACHE, giving a higher percentage of READ-CACHE overall.

Minus:

  • You lose the additional CACHE and processing of the second controller

Hope that is useful to you.