MSA Storage

Performance not as expected on MSA 2052

Hello All

We are refreshing the hardware for our main DB server. I'm moving it to a new DL380 Gen10 with 4 x 10 Gbps interfaces connected to four 10 Gbps ports - two ports on each of the two controllers of the MSA 2052 - via an HPE FlexFabric 5700 40XG 2QSFP+ switch.

I have 9 x 800 GB SSDs and 11 x 1.8 TB SAS disks: 8 SSDs in RAID 10 with one spare, and 10 SAS disks in RAID 10 with one spare. They are part of one pool for the Performance and Archive tiers. I know it is recommended to balance them across more pools, but I need all the disk space accessible on this single server for the DBs.

I have two LUNs - one with Performance affinity and one with Archive - presented to the server.

I installed Windows 2012 R2 on a USB-connected Kingston 240 GB SSD to test the performance, and I use ATTO to assess it with an 8K I/O size (this is the block size our DB uses).

I installed all the latest firmware and drivers from the Gen10 SPP and enabled multipath.

On both LUNs I get only 4.5k to 6.8k IOPS.

If I run it against the USB SSD that I have the OS on, I get more than 20k IOPS for reads and writes.

I would expect this storage, with this type of drives and this configuration, to deliver far more performance than the €100 USB SSD.
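To put those figures side by side, the IOPS numbers can be converted into throughput (throughput = IOPS x block size). A quick sketch using the numbers above:

```shell
# Express the observed 8 KiB IOPS figures as throughput
# (4.5k-6.8k on the MSA LUNs vs ~20k on the USB SSD).
for iops in 4500 6800 20000; do
  awk -v iops="$iops" 'BEGIN { printf "%5d IOPS x 8 KiB = %3d MiB/s\n", iops, iops * 8 / 1024 }'
done
```

At 8 KiB, even the best MSA result here amounts to only ~53 MiB/s, which is nowhere near what the iSCSI links or the SSDs can carry.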

Most probably I'm doing something wrong, so I would appreciate any advice on how to improve the performance.



Re: Performance not as expected on MSA 2052

Hi Stefan,

I understand that it's an old post and the issue might have been resolved already.

May I know what read and write latency (in ms) you see if an IOmeter test is run on the volume?

Have you tried disabling tier affinity and using the no-affinity option to check whether it helps?

Please disable the ODX feature if it's enabled, as it's not supported by the MSA.

I am an HPE Employee

Accept or Kudo


Re: Performance not as expected on MSA 2052

Hello Stefan,

I hope the response provided above has helped you.

Kindly let me know if you have any further queries, or an update on the status.



Re: Performance not as expected on MSA 2052

Hello Arun

No, it does not make a difference.

I had installed RedHat 8 on a local 240GB HP SSD.

I was told by our DBA that the command below should run in around 3 seconds, and that this is the result for a good filesystem speed. It writes 18432 blocks of 16k each:

dd if=/dev/zero of=./test.out bs=16k count=18432 oflag=dsync
301989888 bytes (302 MB, 288 MiB) copied, 26.4979 s, 11.4 MB/s

On the MSA, as you can see, it takes more than 26 seconds, which is far off.
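A quick back-of-the-envelope on that result: with oflag=dsync every write completes synchronously, so the elapsed time divided by the block count gives the average per-write latency (numbers taken from the dd output above):

```shell
# 18432 synchronous 16 KiB writes in 26.4979 s:
# elapsed/ops = average latency per write, ops/elapsed = IOPS.
awk 'BEGIN {
  ops = 18432; secs = 26.4979
  printf "avg latency: %.2f ms, ~%d IOPS\n", secs / ops * 1000, ops / secs
}'
# prints: avg latency: 1.44 ms, ~695 IOPS
```

At roughly 1.4 ms per round trip, a single-threaded synchronous writer cannot exceed ~700 IOPS no matter how fast the SSDs themselves are.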

What I can see is that if I run multiple commands in parallel, I get the same performance on each of them as when I run just one. To me this means something is limiting the performance of the storage for single-thread access (for lack of a better description): whether I run the command 4 times in parallel, or one at a time, the result per command is similar.
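That parallel observation can be reproduced with a small script; paths and sizes here are illustrative and scaled down for a quick run (on the real volume, the original bs=16k count=18432 applies):

```shell
# Run the same dsync dd write four times in parallel and collect the
# per-job results from each job's stderr log.
for i in 1 2 3 4; do
  dd if=/dev/zero of=/tmp/test.$i bs=16k count=64 oflag=dsync 2>/tmp/dd.$i &
done
wait
grep -h copied /tmp/dd.?
```

If each job reports roughly the same rate as a single run, the per-process limit is latency-bound rather than a shared bandwidth cap.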

If I run the command with big 16M blocks, 144 of them, I get:

dd if=/dev/zero of=./test.out bs=16M count=144 oflag=dsync
2415919104 bytes (2.4 GB, 2.2 GiB) copied, 2.79892 s, 863 MB/s

So the bandwidth of the iSCSI is not the problem. To me, it looks like something is limiting the IOPS per single process.
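The arithmetic behind that conclusion, using the byte count and elapsed time from the dd output above:

```shell
# 2415919104 bytes in 2.79892 s, expressed in MB/s and Gbit/s.
awk 'BEGIN {
  bytes = 2415919104; secs = 2.79892
  printf "%.0f MB/s = %.1f Gbit/s\n", bytes / secs / 1e6, bytes * 8 / secs / 1e9
}'
# prints: 863 MB/s = 6.9 Gbit/s
```

~6.9 Gbit/s is most of a single 10 Gbit/s link, so the network path itself is clearly capable; only the small-block synchronous workload is slow.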

If I run it against the root filesystem, which is on the local SSD, I get:

dd if=/dev/zero of=./test.out bs=16k count=18432 oflag=dsync
301989888 bytes (302 MB, 288 MiB) copied, 2.98083 s, 101 MB/s

Which is what a good result should look like.

Do you know whether there is any logic on the MSA 2052 controllers that limits the IOPS a single host can utilize?

Re: Performance not as expected on MSA 2052

@Stefan_Ajderev It's difficult to provide guidance on performance matters in a public forum. However, some advice:

1> You are comparing the performance of DAS with SAN, which is not a fair comparison. DAS is always faster than SAN, as there are many other factors involved, of which networking/bandwidth is an important one.

2> When it comes to performance, the first thing to understand is what you want to check or achieve: is throughput important to you, or IOPS? You can't get both at the same time. The size of the I/O to and from the SAN impacts the measurable performance statistics of the SAN. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process. However, the corollary to this is a decrease in throughput (as measured in MB/s).
Conversely, as I/O size increases, IOPS decreases but throughput increases. When an I/O gets above a certain size, latency also increases, as the time required to transport each I/O grows to the point where the disk itself is no longer the major influence on latency.

3> You need to check the queue depth value set at the host HBA or network card level.

4> You need to check what is installed on the LUNs created over the volumes presented from the MSA. The application matters too: a load balancer and a DB cannot be treated the same.

5> You need to check what kind of RAID you are dealing with.

Anyway, there are a lot of things to check when it comes to performance measurement.
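Points 2 and 3 can be tied together with Little's law: sustained IOPS ≈ outstanding I/Os ÷ per-I/O latency. A sketch, assuming the ~1.4 ms per synchronous write measured earlier in the thread:

```shell
# With a fixed ~1.4 ms service time per I/O, achievable IOPS grows with
# queue depth until the array or network saturates. A dsync dd is
# effectively queue depth 1, which caps it near ~700 IOPS.
awk 'BEGIN {
  lat = 0.0014                      # seconds per I/O (assumed)
  for (qd = 1; qd <= 64; qd *= 4)
    printf "QD %2d: ~%5d IOPS\n", qd, qd / lat
}'
```

This also explains why each parallel dd in the earlier test saw the same per-process result: every process is still at queue depth 1, so aggregate IOPS scale with concurrency rather than per-process speed.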


Hope this helps!

I am an HPE employee
