MSA Storage

MSA 2040 Tiering very low IOPS and awful performance

 
oli_4
Advisor

Re: MSA 2040 Tiering very low IOPS and awful performance

I logged a ticket in the past with both HP and Microsoft so we could have a look at the storage stack and the Hyper-V configuration; unfortunately, nothing could be spotted. HP sent me to the Hyper-V configuration and Microsoft pointed at the storage ....

Below is the output from the MSA:

# show controller-statistics
Durable ID     CPU Load   Power On Time (Secs)   Bps                IOPS             Reads            Writes           Data Read        Data Written     Num Forwarded Cmds  Reset Time                Total Power On Hours
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
controller_A   44         2900318                324.5MB            3832             4418188910       4575820235       424.7TB          106.2TB          0                   2017-07-28 19:33:31       25652.66
controller_B   19         1207474                73.6MB             2046             1433468006       1066603483       199.2TB          36.7TB           0                   2017-08-17 09:47:49       343.44
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

# show disk-group-statistics
Name           Time Since Reset Reads      Writes     Data Read Data Written Bps      IOPS I/O Resp Time Read Resp Time Write Resp Time    Pages Allocated per Min Pages Deallocated per Min Pages Reclaimed Pages Unmapped per Minute
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Std_2B_SAS15k  1206685          17302808   9173891    9.5TB     1474.8GB     1461.7KB 15   9971          9971           14590              4                       4                         0               1
Std_1B_SAS15k  1206685          15519480   7706018    8979.8GB  1288.7GB     744.9KB  10   9405          9405           15880              4                       4                         0               0
Std_3B_SAS10k  1206685          227884150  39573989   45.5TB    8144.4GB     14.9MB   102  10585         10585          34155              0                       5                         0               3
Std_4B_SAS10k  1206685          225518419  40458834   48.5TB    8855.1GB     13.5MB   120  10620         10620          43998              0                       20                        0               4
Std_5B_SAS10k  1206685          220088369  41184696   46.4TB    8222.2GB     15.6MB   130  6294          6294           28429              0                       13                        0               2
Std_6B_SAS10k  1206685          235119736  40390953   50.0TB    8075.3GB     13.4MB   126  9791          9791           29310              42                      24                        0               1
Perf_1B_SSD    1206685          198645618  125453379  13.3TB    10.9TB       10.2MB   335  309           309            285                64                      60                        0               18
Perf_1A_SSD    2898258          778175036  646926659  55.2TB    39.1TB       14.1MB   350  391           391            368                60                      65                        0               24
Std_2A_SAS15k  2898258          148210577  65271077   30.0TB    7127.3GB     13.7MB   72   13589         13589          23126              5                       5                         0               1
Std_1A_SAS15k  2898258          141467892  62267560   29.5TB    6771.4GB     15.5MB   79   8111          8111           21809              1                       1                         0               0
Std_3A_SAS10k  2898258          605577801  185429748  90.5TB    19.5TB       54.5MB   351  54862         54862          161387             0                       6                         0               2
Std_4A_SAS10k  2898258          616170680  177800320  89.7TB    19.2TB       56.7MB   427  31991         31991          82427              16                      12                        0               3
Std_5A_SAS10k  2898258          632796693  174706145  98.9TB    19.4TB       61.8MB   377  39017         39017          116329             0                       5                         0               0
Std_6A_SAS10k  2898258          617625658  177947408  104.1TB   19.3TB       61.2MB   338  49344         49344          144668             41                      17                        0               1

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

# show host-port-statistics

Durable ID           Bps                IOPS             Reads            Writes           Data Read        Data Written     Queue Depth      I/O Resp Time    Read Resp Time   Write Resp Time  Reset Time               
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
hostport_A1          74.1MB             1089             1105002619       1145216865       106.3TB          26.5TB           17               6303             12464            765              2017-07-28 19:33:31      
hostport_A2          73.3MB             1087             1107728312       1145846388       106.3TB          26.6TB           17               6239             12316            701              2017-07-28 19:33:31      
hostport_A3          74.1MB             1086             1103317451       1142942345       106.0TB          26.5TB           16               6137             12118            728              2017-07-28 19:33:31      
hostport_A4          74.1MB             1088             1103319151       1142941682       106.0TB          26.5TB           10               6398             12640            715              2017-07-28 19:33:31      
hostport_B1          15.7MB             490              358480199        266829239        49.8TB           9.1TB            1                1198             3566             310              2017-08-17 09:47:49      
hostport_B2          15.8MB             488              358475298        266867726        49.8TB           9.1TB            0                1285             3764             324              2017-08-17 09:47:49      
hostport_B3          16.0MB             490              358489851        266874394        49.8TB           9.1TB            1                1311             3706             367              2017-08-17 09:47:49      
hostport_B4          16.2MB             489              358483769        266839986        49.8TB           9.1TB            3                1157             3315             322              2017-08-17 09:47:49      
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully. (2017-08-31 09:24:18)

IT_Prof
Occasional Visitor

Re: MSA 2040 Tiering very low IOPS and awful performance

I have the same problem with Performance Tiering: high latencies for both reads and writes, while IOPS do not exceed 5,000.

oli_4, have you solved your problem?

Re: MSA 2040 Tiering very low IOPS and awful performance

@oli_4 I have looked at the data you shared, but it is just a single snapshot from which we can't conclude anything. Controller A's CPU usage looks high; the rest of the data looks fine.

First of all, we shouldn't compare the MSA 2040 with the P2000: one is a virtual array and the other a linear array.

Troubleshooting a performance issue involves many factors and is not a straightforward task. Best practice is to rule out any hardware issue, keep the array firmware up to date, and keep the connected systems (servers, SAN switches) up to date with drivers and firmware as well.

First, check the block size set at the host, and based on that decide whether you want high IOPS or high throughput. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process; the corollary is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decreases but throughput increases. Once an I/O exceeds a certain size, latency also increases, because the time required to transport each I/O grows to the point where the disk itself is no longer the major influence on latency.
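To put numbers on that trade-off: throughput is roughly IOPS multiplied by I/O size, so the same array can look "slow" in MB/s at small blocks and "slow" in IOPS at large blocks. A quick sketch with hypothetical figures (the IOPS values below are illustrative, not measurements from this array):

```shell
# Throughput (MB/s) ~= IOPS x I/O size. Hypothetical IOPS figures for illustration.
iops_small=5000; io_small_kb=8      # OLTP-style 8 KB random I/O
iops_large=400;  io_large_kb=256    # backup-style 256 KB sequential I/O

mb_small=$((iops_small * io_small_kb / 1024))
mb_large=$((iops_large * io_large_kb / 1024))

echo "8 KB   @ ${iops_small} IOPS -> ${mb_small} MB/s"   # 39 MB/s
echo "256 KB @ ${iops_large} IOPS -> ${mb_large} MB/s"   # 100 MB/s
```

So 5,000 small I/Os move less data per second than 400 large ones; judge the array against the I/O size your hosts actually issue.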

Typically, workloads can be defined by four categories: I/O size, reads vs. writes, sequential vs. random, and queue depth.
A typical application usually consists of a mix of reads and writes, sequential and random.
For example, a Microsoft SQL Server instance running an OLTP-type workload might see disk I/O that is 8 KB in size, 80 percent reads, and 100 percent random.
A disk backup target, on the other hand, might see disk I/O that is 64 KB or 256 KB in size, 90 percent writes, and 100 percent sequential.

The type of workload will affect the results of the performance measurement.

Check the Customer Advisory below and disable "In-band SES":

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c05306564

You can check the Customer Advisory below as well; in many situations this has helped to improve performance:

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03473698

If you have a specific requirement and want only SSD pages to serve your I/O, use 'Tier Affinity' on the particular volume.
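On the MSA 2040 this is set per volume from the CLI. The sketch below assumes a volume named `Vol1` (a placeholder); the exact parameter syntax can vary by firmware revision, so check `help set volume` on your array first:

```
# Prefer the Performance (SSD) tier for this volume's pages
set volume tier-affinity performance Vol1

# Verify the setting
show volumes
```

Note that tier affinity is a preference, not a hard pin: pages still migrate according to the tiering engine's heuristics.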

Check the ODX settings on the Windows system. SPOCK clearly states that Microsoft Offloaded Data Transfer (ODX) is not supported with the MSA 2040. See this other forum thread -> https://community.hpe.com/t5/MSA-Storage/Is-MSA-2040-Certified-for-Windows-ODX/td-p/6967522

https://h20272.www2.hpe.com/spock/Content/Default.aspx?LinId=HP%20MSA%202040%20SAN%20Storage%20FC&CompPath=Operating%20System/Windows%20Server%202016&ConfigType=Fibre%20Channel%20Connectivity||iSCSI%20Connectivity||Serial%20Attached%20SCSI%20Conn&deprecated=false

https://h20272.www2.hpe.com/SPOCK/Content/ExportPDFView.aspx?Id=91936&typeId=2

I have tried to give you step-by-step SPOCK links with the filters already applied.

Download the PDF; on page 4 it is clearly stated that Microsoft Offloaded Data Transfer (ODX) is not supported.
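If ODX turns out to be enabled on the Hyper-V hosts, Microsoft documents a registry value (`FilterSupportedFeaturesMode`) to disable it system-wide. A sketch, to be run in an elevated prompt on each Windows host (volumes may need to be re-mounted, or the host rebooted, for it to take effect):

```
:: Query the current setting (0 or absent = ODX enabled, 1 = disabled)
reg query "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v FilterSupportedFeaturesMode

:: Disable ODX
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v FilterSupportedFeaturesMode /t REG_DWORD /d 1 /f
```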

If you still face the performance issue, then while it is happening capture the outputs below at least 10 to 15 times, collect the MSA logs, and log an HPE support case. They will help you.

# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics
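A sketch of a capture loop for that, run from a management host (the management user/IP, repeat count, and interval are placeholders; it assumes the MSA CLI is reachable over SSH). It defaults to a dry run that only prints the commands it would execute:

```shell
# Poll the MSA statistics commands repeatedly. With DRY_RUN=1 it only prints
# the ssh commands; unset DRY_RUN to actually collect outputs into per-run files.
DRY_RUN=1
MSA="manage@10.0.0.1"   # placeholder management user and IP

for i in $(seq 1 12); do
  for cmd in "show controller-statistics" "show disk-statistics" \
             "show host-port-statistics" "show vdisk-statistics" \
             "show volume-statistics"; do
    if [ -n "$DRY_RUN" ]; then
      echo "ssh $MSA '$cmd' >> msa-stats-run$i.txt"
    else
      ssh "$MSA" "$cmd" >> "msa-stats-run$i.txt"
    fi
  done
  if [ -z "$DRY_RUN" ]; then sleep 60; fi   # space the real samples out
done
```

Spacing the samples a minute or so apart matters: support needs deltas over time, not one snapshot, to see where latency accumulates.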

 

@IT_Prof There is not much data available to check from the MSA perspective, so it is difficult to tell what is wrong in your situation.

