08-31-2017 12:51 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
I logged a ticket in the past with both HP and Microsoft so we could have a look at the storage stack and Hyper-V configuration; unfortunately, nothing could be spotted. HP sent me to the Hyper-V configuration and Microsoft pointed at the storage ....
Below is the output from the MSA:
# show controller-statistics
Durable ID CPU Load Power On Time (Secs) Bps IOPS Reads Writes Data Read Data Written Num Forwarded Cmds Reset Time Total Power On Hours
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
controller_A 44 2900318 324.5MB 3832 4418188910 4575820235 424.7TB 106.2TB 0 2017-07-28 19:33:31 25652.66
controller_B 19 1207474 73.6MB 2046 1433468006 1066603483 199.2TB 36.7TB 0 2017-08-17 09:47:49 343.44
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# show disk-group-statistics
Name Time Since Reset Reads Writes Data Read Data Written Bps IOPS I/O Resp Time Read Resp Time Write Resp Time Pages Allocated per Min Pages Deallocated per Min Pages Reclaimed Pages Unmapped per Minute
---------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------
Std_2B_SAS15k 1206685 17302808 9173891 9.5TB 1474.8GB 1461.7KB 15 9971 9971 14590 4 4 0 1
Std_1B_SAS15k 1206685 15519480 7706018 8979.8GB 1288.7GB 744.9KB 10 9405 9405 15880 4 4 0 0
Std_3B_SAS10k 1206685 227884150 39573989 45.5TB 8144.4GB 14.9MB 102 10585 10585 34155 0 5 0 3
Std_4B_SAS10k 1206685 225518419 40458834 48.5TB 8855.1GB 13.5MB 120 10620 10620 43998 0 20 0 4
Std_5B_SAS10k 1206685 220088369 41184696 46.4TB 8222.2GB 15.6MB 130 6294 6294 28429 0 13 0 2
Std_6B_SAS10k 1206685 235119736 40390953 50.0TB 8075.3GB 13.4MB 126 9791 9791 29310 42 24 0 1
Perf_1B_SSD 1206685 198645618 125453379 13.3TB 10.9TB 10.2MB 335 309 309 285 64 60 0 18
Perf_1A_SSD 2898258 778175036 646926659 55.2TB 39.1TB 14.1MB 350 391 391 368 60 65 0 24
Std_2A_SAS15k 2898258 148210577 65271077 30.0TB 7127.3GB 13.7MB 72 13589 13589 23126 5 5 0 1
Std_1A_SAS15k 2898258 141467892 62267560 29.5TB 6771.4GB 15.5MB 79 8111 8111 21809 1 1 0 0
Std_3A_SAS10k 2898258 605577801 185429748 90.5TB 19.5TB 54.5MB 351 54862 54862 161387 0 6 0 2
Std_4A_SAS10k 2898258 616170680 177800320 89.7TB 19.2TB 56.7MB 427 31991 31991 82427 16 12 0 3
Std_5A_SAS10k 2898258 632796693 174706145 98.9TB 19.4TB 61.8MB 377 39017 39017 116329 0 5 0 0
Std_6A_SAS10k 2898258 617625658 177947408 104.1TB 19.3TB 61.2MB 338 49344 49344 144668 41 17 0 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# show host-port-statistics
Durable ID Bps IOPS Reads Writes Data Read Data Written Queue Depth I/O Resp Time Read Resp Time Write Resp Time Reset Time
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
hostport_A1 74.1MB 1089 1105002619 1145216865 106.3TB 26.5TB 17 6303 12464 765 2017-07-28 19:33:31
hostport_A2 73.3MB 1087 1107728312 1145846388 106.3TB 26.6TB 17 6239 12316 701 2017-07-28 19:33:31
hostport_A3 74.1MB 1086 1103317451 1142942345 106.0TB 26.5TB 16 6137 12118 728 2017-07-28 19:33:31
hostport_A4 74.1MB 1088 1103319151 1142941682 106.0TB 26.5TB 10 6398 12640 715 2017-07-28 19:33:31
hostport_B1 15.7MB 490 358480199 266829239 49.8TB 9.1TB 1 1198 3566 310 2017-08-17 09:47:49
hostport_B2 15.8MB 488 358475298 266867726 49.8TB 9.1TB 0 1285 3764 324 2017-08-17 09:47:49
hostport_B3 16.0MB 490 358489851 266874394 49.8TB 9.1TB 1 1311 3706 367 2017-08-17 09:47:49
hostport_B4 16.2MB 489 358483769 266839986 49.8TB 9.1TB 3 1157 3315 322 2017-08-17 09:47:49
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully. (2017-08-31 09:24:18)
03-27-2018 07:58 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
I have the same problem with Performance Tiering - high latencies for both read and write, while the number of IOPS does not exceed 5000.
oli_4, have you solved your problem?
03-27-2018 08:54 AM - edited 03-27-2018 09:06 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
@oli_4 I have looked at the data that you shared, but it is a single snapshot from which we can't conclude much. Controller A's CPU usage looks high, but the rest of the data looks fine.
First of all, we shouldn't compare the MSA 2040 with the P2000: the P2000 is a linear array, while the MSA 2040 is a virtual array.
Troubleshooting a performance issue involves many factors and is not a straightforward task. Some best practices to follow: no hardware issues should exist, the array firmware needs to be up to date, and connected systems (servers, SAN switches) need to be up to date with drivers and firmware as well.
You also need to check what block size is set at the host, and based on that decide whether you want high IOPS or high throughput. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process; the corollary is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decreases but throughput increases. Once an I/O exceeds a certain size, latency also increases, because the time required to transport each I/O grows to the point where the disk itself is no longer the major influence on latency.
Typically, workloads can be defined by four categories: I/O size, reads vs. writes, sequential vs. random, and queue depth.
A typical application usually consists of a mix of reads and writes, and sequential and random.
For example, a Microsoft® SQL Server instance running an OLTP type workload might see disk IO that is 8k size, 80 percent read, and 100 percent random.
A disk backup target on the other hand might see disk IO that is 64k or 256K in size, with 90 percent writes and 100 percent sequential.
The type of workload will affect the results of the performance measurement.
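The IOPS-versus-throughput trade-off described above can be sketched with a quick calculation (a minimal illustration only; the example workloads are the hypothetical ones from this post, not measured MSA limits):

```python
def throughput_mbps(iops: float, io_size_kb: float) -> float:
    """Throughput in MB/s for a given IOPS rate and I/O size in KB (decimal units)."""
    return iops * io_size_kb / 1000.0

# Small I/Os: many IOPS, modest throughput (OLTP-style 8k workload).
print(throughput_mbps(5000, 8))    # 40.0 MB/s
# Large I/Os: far fewer IOPS deliver more bandwidth (backup-style 256k workload).
print(throughput_mbps(200, 256))   # 51.2 MB/s
```

This is why a backup target can look "slow" in IOPS terms while still saturating the drives in MB/s.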
Check the Customer Advisory below and disable "In-band SES":
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c05306564
You can check the Customer Advisory below as well; in many situations it has helped to improve performance:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03473698
If you have a specific requirement and want only SSD pages to serve your I/O, use 'Tier Affinity' on the particular volume.
Check the ODX settings on the Windows system. As per SPOCK, Microsoft Offloaded Data Transfer (ODX) is not supported with the MSA 2040. You can check another forum thread -> https://community.hpe.com/t5/MSA-Storage/Is-MSA-2040-Certified-for-Windows-ODX/td-p/6967522
https://h20272.www2.hpe.com/SPOCK/Content/ExportPDFView.aspx?Id=91936&typeId=2
I have tried to give you step-by-step SPOCK links along with the applied filters.
Download the PDF; on page 4 it is clearly mentioned that Microsoft Offloaded Data Transfer (ODX) is not supported.
If you still face the performance issue, then while it is happening capture the outputs below at least 10 to 15 times, collect an MSA log, and log an HPE support case. They will help you.
# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics
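A capture loop like the following could automate collecting those samples (a sketch only: the hostname and user are hypothetical placeholders, and it assumes SSH access to the MSA CLI is enabled):

```python
import subprocess
import time

# The statistics commands listed above.
COMMANDS = [
    "show controller-statistics",
    "show disk-statistics",
    "show host-port-statistics",
    "show vdisk-statistics",
    "show volume-statistics",
]

def capture_samples(host: str, user: str, samples: int = 15, interval_s: int = 60):
    """Run each statistics command over SSH, `samples` times, `interval_s` seconds apart."""
    for n in range(samples):
        for cmd in COMMANDS:
            result = subprocess.run(
                ["ssh", f"{user}@{host}", cmd],
                capture_output=True, text=True,
            )
            print(f"=== sample {n + 1}: {cmd} ===\n{result.stdout}")
        time.sleep(interval_s)

# Example invocation (hypothetical management IP and account):
# capture_samples("10.0.0.50", "manage")
```

Collecting the samples at a fixed interval while the problem is occurring gives support a time series rather than a single snapshot.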
@IT_Prof There is not much data available to check from the MSA perspective, so it is difficult to tell what is wrong in your situation.
I work for HPE
01-21-2020 07:08 AM - edited 01-21-2020 09:26 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
@SUBHAJIT KHANBARMAN_1 @HPSDMike
Afternoon.
We have a similar issue with our MSA 2050: poor IOPS in real-world usage, especially when backing up to the MSA from Veeam, all over 10Gb iSCSI connections between hosts.
Would it be possible to upload a config output to a secure location so someone in the know can pore over it and make suggestions please?
thank you!
01-21-2020 07:55 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
01-21-2020 08:54 PM
Re: MSA 2040 Tiering very low IOPS and awful performance
It is difficult to provide any recommendation without verifying the entire setup.
As per the screenshot, both controllers' CPU usage is very low, so that looks fine. However, all drives show more than 200 IOPS; without the drive model number it is difficult for me to check whether this crosses the threshold limit.
Another thing I noticed: all I/O seems to be going through host port A1, so you need to verify the multipath policy on the host system. It seems you have a fixed path policy set, which needs to be changed to round-robin.
You can always log an HPE support case as well to get a basic performance analysis.
Hope this helps!
Regards
Subhajit
I work for HPE
01-21-2020 11:53 PM
Re: MSA 2040 Tiering very low IOPS and awful performance
Thank you, I will.
The drives are all ST8000NM0075 - Seagate 8TB 7200RPM SAS 12Gb/s, 256MB cache.
01-22-2020 12:45 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
Thank you for sharing the drive model number. That is a midline SAS drive, which means a safe IOPS value of only approximately 75 per drive, yet all of the drives show more than 200 IOPS.
That means a high load on the array is causing the performance issue. You need to control the load, perhaps by reducing the backup jobs from Veeam, to fix this.
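As a rough sanity check on those numbers (assuming the ~75 IOPS per midline SAS drive figure from this post; real limits vary with RAID level and I/O pattern, and the 12-drive pool size below is a hypothetical example):

```python
def safe_pool_iops(drive_count: int, per_drive_iops: int = 75) -> int:
    """Rough aggregate random-IOPS budget for a pool of midline SAS drives."""
    return drive_count * per_drive_iops

# A hypothetical 12-drive pool of 7.2k midline SAS drives:
budget = safe_pool_iops(12)    # 900 IOPS total budget
observed = 12 * 200            # each drive showing ~200 IOPS, as reported above
print(budget, observed)        # the pool is being driven well past its budget
```

When observed per-drive IOPS sits far above the safe figure like this, high latencies are expected regardless of controller headroom.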
Hope this helps!
Regards
Subhajit
I work for HPE