MSA Storage

emac
Occasional Visitor

MSA 1050 With SSD poor performance

Hi,

2 HP ProLiant 360 servers (Hyper-V cluster)

MSA 1050 storage with 4 SSDs (RAID 10)

iSCSI 10Gb: 4 DACs MSA<>switch + 2 DACs Hyper-V host 1<>switch + 2 DACs Hyper-V host 2<>switch

Some images from the MSA setup and from a performance test on a VM are attached.

 

I can't understand why the performance is so bad on 4KiB Q1T1.

Any help? Thanks.

3 REPLIES

Re: MSA 1050 With SSD poor performance

I would suggest not judging performance by the Q1T1 test alone, because it uses a queue depth of 1 and a single thread, which is not enough to keep the array busy. You should check a device's performance by supplying more data at a time; in other words, the queue depth should be somewhat higher.

The result you posted shows only throughput, but if you hover the mouse over it you can see the IOPS figure as well.

Whether you should aim for high IOPS or high throughput depends on the workload. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process. However, the corollary to this is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decreases but throughput increases. When an I/O gets above a certain size, latency also increases, as the time required to transport each I/O grows to the point where the disk itself is no longer the major influence on latency.
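To make that trade-off concrete, here is a minimal plain-Python sketch. The function name and the IOPS figures are illustrative assumptions, not measurements from this MSA 1050:

```python
# Illustrative only: relationship between I/O size, IOPS and throughput.
def throughput_mb_s(iops, io_size_bytes):
    # throughput (MB/s) = IOPS * I/O size (bytes) / 1e6
    return iops * io_size_bytes / 1_000_000

# Hypothetical figures, not measured on this array:
print(throughput_mb_s(2000, 4 * 1024))      # ~8.2 MB/s  -> small I/O: high IOPS, low throughput
print(throughput_mb_s(500, 1024 * 1024))    # ~524 MB/s  -> large I/O: low IOPS, high throughput
```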

Queue depth is the amount of outstanding I/O waiting to be processed by the SAN. In other words, it is the count of how many pieces of data are queued up waiting to be written to or read from the SAN. If the queue depth is low, there are few (or no) I/Os waiting on the SAN: latency (response time) is minimal because each I/O is processed immediately, but IOPS are reduced because the SAN is waiting on I/O from the application. If the queue depth is high, there are outstanding I/Os waiting to be serviced by the SAN: this increases IOPS but adds latency, because each I/O waits in the queue instead of being serviced immediately.
The SAN performs optimally when there are enough I/Os outstanding to keep it busy, but not so many that each I/O has to wait longer than desired to be serviced (see the sketch below).
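A rough way to reason about this is Little's Law. The sketch below uses an assumed per-I/O latency of ~0.5 ms for an iSCSI path; it is not a measured value from this setup:

```python
# Little's Law applied to storage: achievable IOPS ~= outstanding I/Os / per-I/O latency.
def iops_estimate(queue_depth, latency_s):
    return queue_depth / latency_s

latency_s = 0.0005  # assume ~0.5 ms per I/O over iSCSI (illustrative)
for qd in (1, 8, 32):
    print(f"QD{qd}: ~{iops_estimate(qd, latency_s):.0f} IOPS")  # 2000, 16000, 64000
```

This is why the same array can look slow at Q1T1 yet deliver far higher IOPS once more I/Os are kept in flight.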

Typically, workloads can be defined by four categories: I/O size, reads vs. writes, sequential vs. random, and queue depth.

 

Note: I would also suggest checking the CrystalDiskMark documentation to understand what each of its tests measures.

 

Hope this helps!
Regards
Subhajit

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************


I work for HPE
emac
Occasional Visitor

Re: MSA 1050 With SSD poor performance

Thanks, Subhajit.

OK, I see what you mean, but how can I improve the performance? The Hyper-V VMs' performance is very poor.

Our old P2000 with 7.2k HDDs has the same performance at 4K.

To complete the previous information: 4KiB Q1T1 read is ~8 MB/s, about 1,850 IOPS.
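For what it's worth, that Q1T1 figure mostly reflects per-I/O round-trip latency rather than the SSDs themselves. A quick back-of-the-envelope check in plain Python, using only the numbers posted above:

```python
# At queue depth 1 with one thread, IOPS is roughly 1 / latency,
# so the posted result implies a per-I/O round trip of about half a millisecond.
iops = 1850                                  # figure posted above
latency_ms = 1000 / iops
print(f"{latency_ms:.2f} ms per 4KiB read")  # ~0.54 ms
print(f"{iops * 4096 / 1e6:.1f} MB/s")       # ~7.6 MB/s, consistent with the ~8 MB/s reported
```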

 

Re: MSA 1050 With SSD poor performance

As mentioned earlier, don't judge performance by the Q1T1 test alone, because it uses a queue depth of 1 and a single thread, which is not enough to keep the array busy. You should check a device's performance by supplying more data at a time; in other words, the queue depth should be somewhat higher.

The old P2000 with 7.2k HDDs shows the same performance at 4K because, with a single thread and a queue depth of 1, there is not much difference: the result is dominated by the per-I/O round-trip latency rather than by the drives.
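To illustrate why the SSD array and the old 7.2k HDD array can look alike at Q1T1, here is a sketch with assumed, not measured, latency figures, and it assumes the HDD reads were largely served from the controller's cache; when the fixed network and controller path dominates, the media makes little difference:

```python
# Assumed component latencies in milliseconds (illustrative, not measured):
fixed_path = 0.45   # iSCSI round trip + switch + MSA controller overhead
ssd_media  = 0.05   # SSD read
hdd_media  = 0.10   # HDD read served from the array's cache (best case)

for name, media in (("SSD", ssd_media), ("HDD (cache hit)", hdd_media)):
    total_ms = fixed_path + media
    print(f"{name}: ~{1000 / total_ms:.0f} IOPS at QD1")  # both land in the same ballpark
```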

 

Hope this helps!
Regards
Subhajit

If you feel this was helpful please click the KUDOS! thumb below!

***********************************************************************************


I work for HPE