MSA Storage

I/O on MSA 2050 SAN

 
victor-c
Frequent Advisor


Hi,

We have performance issues with our setup: an MSA 2050 with 24 x HDD Ultrastar He10 (MDL).
To analyze the problem we created 1 virtual volume on top of 4 x RAID-1 disk groups (on the same controller).
This virtual volume is exported to the server via 16 Gbps FC directly (without an FC switch).
This virtual volume is the only volume managed by the controller (Controller B).
Host OS: Debian 9.3
The host sees the LUN, but it is not being used at all.
From the Web Storage Management Utility we see I/O activity on those RAID groups.
On the other controller (Controller A), we have 1 virtual volume on top of 1 RAID
group (RAID-6) with 16 HDDs. This virtual volume is used by another server with
low usage (constant writes of ~30 MB/s, small random reads).

What is very strange are the spikes of I/O - please see the attached image.

Can you recommend something to get better performance?

Thank you in advance

Victor


Re: I/O on MSA 2050 SAN

There is a lot of information missing about your setup. I would suggest going through the best practices technical paper below, which contains many details on how to improve performance:

https://h20195.www2.hpe.com/v2/Getdocument.aspx?docname=a00015961enw

You are using only MDL SAS drives and this is a direct-attach setup, so you should only expect good performance with sequential workloads. There are several factors like this that you need to check.

Troubleshooting a performance issue involves many factors and is not a straightforward task. Some of the best practices to follow: make sure no hardware issue exists, keep the array firmware up to date, and keep the connected systems (servers) up to date with drivers and firmware as well.
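As a quick check (assuming you can log in to the MSA CLI over SSH), the commands below show the installed controller bundle and the disk firmware revisions, which you can then compare against the latest versions on the HPE support site:

# show versions
# show disks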

You also need to check what block size is set at the host, and based on that decide whether you want high IOPS or high throughput. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process. However, the corollary to this is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decreases but throughput increases. When an I/O gets above a certain size, latency also increases, because the time required to transport each I/O grows to the point that the disk itself is no longer the major influence on latency.
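To put rough numbers on that relationship (illustrative only, not measured on your array): 8 KB I/Os at 10,000 IOPS works out to about 80 MB/s of throughput, while 256 KB I/Os at only 1,000 IOPS works out to about 256 MB/s. The same disks can look very different in IOPS and in MB/s depending purely on the I/O size.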

Typically, workloads can be defined by four categories: I/O size, reads vs. writes, sequential vs. random, and queue depth.
A typical application usually consists of a mix of reads and writes, and of sequential and random access.
For example, a Microsoft® SQL Server instance running an OLTP-type workload might see disk I/O that is 8 KB in size, 80 percent read, and 100 percent random.
A disk backup target, on the other hand, might see disk I/O that is 64 KB or 256 KB in size, with 90 percent writes and 100 percent sequential.

The type of workload will affect the results of the performance measurement.
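If you want to see how your volume behaves under each profile, a tool such as fio on the Debian host can generate them. This is only a sketch: /dev/sdX is a placeholder for your MSA volume, the run parameters are examples to adjust, and writing directly to the device is destructive, so only run it against an empty test volume.

OLTP-like profile (8 KB, 80 percent read, random):
# fio --name=oltp --filename=/dev/sdX --direct=1 --ioengine=libaio --rw=randrw --rwmixread=80 --bs=8k --iodepth=32 --runtime=120 --time_based

Backup-like profile (256 KB, sequential write):
# fio --name=backup --filename=/dev/sdX --direct=1 --ioengine=libaio --rw=write --bs=256k --iodepth=16 --runtime=120 --time_based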

Check the Customer Advisory below and disable "In-band SES":

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c05306564

You can check the Customer Advisory below as well; in many situations it has helped to improve performance:

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03473698

If you still face the performance issue, then while the issue is occurring capture the outputs below at least 10 to 15 times, with a 2-minute gap between each set of outputs (see the scripted example after the command list), collect an MSA log, and log an HPE support case. They will help you.

# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics
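If it is easier, the collection can be scripted from a Linux host. This is only a rough sketch that assumes your environment allows non-interactive SSH to the array CLI; the management user and IP address are placeholders, and if this does not work in your setup you can simply run the commands manually at each interval.

# Capture 15 sets of statistics, 2 minutes apart (placeholder user/IP)
for i in $(seq 1 15); do
  for cmd in "show controller-statistics" "show disk-statistics" "show host-port-statistics" "show vdisk-statistics" "show volume-statistics"; do
    ssh manage@10.0.0.1 "$cmd"
  done >> "msa-stats-set-$i.txt"
  sleep 120
done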

You can also use the performance monitoring in the SMU (Storage Management Utility) as well.

 

Hope this helps!
Regards
Subhajit

If you feel this was helpful please click the KUDOS! thumb below!


I work for HPE