08-18-2017 12:09 AM
MSA 2040 Tiering very low IOPS and awful performance
Something is wrong with the MSA 2040 performance tiering: I can only get about 5K IOPS out of it, while another SAN, a P2000 G3, provides 16K IOPS for the same workload.
Both SANs are FC, connected through HP 8/24 SAN switches. Can you help me diagnose the issue? I am running out of ideas.
We are using HP DL360p Gen8 servers as Hyper-V hosts for the virtualised workload.
08-18-2017 04:24 AM - last edited on 06-29-2021 04:55 AM by Ramya_Heera
Re: MSA 2040 Tiering very low IOPS and awful performance
Hello,
Please consider uploading configuration information so the group can help. Please also consider opening an HPE support case if your device is covered under support.
To generate a text file of the config:
- Using Putty (or other SSH client), open a connection to your MSA and log in.
- Turn off page display pausing by typing 'set cli-parameters pager off'
- Turn on output logging from Putty to a text file
- Run the command 'show configuration'
- Upload the text file to a message on the thread
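The steps above can also be collapsed into a single non-interactive session. A sketch, assuming SSH access with the placeholder user `manage` and placeholder address `msa-ip` (on some firmware you may still need to run the commands interactively as listed above):

```shell
# Dump the MSA configuration to msa-config.txt in one shot.
# 'manage' and 'msa-ip' are placeholders; substitute your own user and address.
ssh manage@msa-ip <<'EOF' > msa-config.txt
set cli-parameters pager off
show configuration
EOF
```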
Note that this file will contain information about your host names, IP addresses, device name, etc. If you are concerned about sharing that, you can edit out whatever worries you, or I can private message you a URL to upload the file to. You can also generate a support log and upload that to the URL I send. Attaching the file to this thread will let more people view it and suggest options versus a private URL. Note that I will be out of the office from EOD today for 9 days; I'll take a look at anything that gets uploaded over the next few hours.
I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.
08-18-2017 06:44 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
Kindly private message me a URL so I can upload the file.
08-18-2017 06:57 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
Done, please check private mail.
08-18-2017 07:06 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
Done, uploaded to the URL given.
I am having issues uploading the file here; it doesn't seem to accept the .txt file format.
08-18-2017 07:14 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
File received
08-18-2017 08:28 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
I've received and reviewed the config information. For the benefit of the group, I'll post a summary so far.
- Check your server host BIOS power profile settings to make sure all hosts are set to "Maximum Performance". This is the cause of 90% of the storage "performance issues" I have seen in the past.
- Looking at the MSA config itself, it looks pretty good. Everything looks healthy and you've done a good job keeping your RAID set sizes identical and to a power of 2.
- One thing you have done that is not best practice: you have 10K and 15K disks in the same pools, and they are different sizes.
- The disks in your 15K vdisks are completely full (as would be expected); the 10K pools have an even amount of free space, which is good and means they are balancing out.
- This will cause all new writes to go only to the larger 10K disks. Reads will be unpredictable: older data will come from both the 15K and 10K disks, while newer data may come only from the 10K disks. While this isn't best practice, I'm not sure it's the cause of your issue. I would lean towards host power profile settings, zoning, or multipathing issues. Please check those items.
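For the host-side items, a few standard Windows tools can confirm the power and multipathing state. A sketch to run from an elevated PowerShell prompt on each Hyper-V host (these are the stock Windows utilities; output details vary by OS version):

```shell
# Active OS power plan -- should report "High performance" to match the BIOS profile.
powercfg /getactivescheme

# Per-LUN MPIO load-balance policy -- look for Round Robin rather than Fail Over Only.
mpclaim -s -d

# Devices MPIO can claim -- confirms the MSA LUNs are eligible for multipathing.
mpclaim -e
```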
08-28-2017 06:58 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
Any update on this situation following the suggestions I provided?
08-30-2017 02:07 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
Thanks Mike for the follow-up. I can confirm the power settings in the BIOS are already on Maximum Performance.
Unfortunately I haven't made much progress in correcting the issue.
Regarding the mix of 15K and 10K disks: how badly can they impact performance? I presume any degradation would take it down to the lowest speed, which is 10K. I also made sure those disks are not used in the same disk group to avoid any issue; the auto-tiering should be able to account for the speed difference between disk groups, I would guess.
One question though: the fabrics and all the FC components run at 8Gb/s. Do you think that could be the issue? I just noticed that the MSA 2040 benchmarks on the HPE site only cover 16Gb/s; I would be interested to know what the numbers are at 8Gb/s.
08-30-2017 09:55 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
A healthy 8Gb fabric will do just fine. 16Gb is faster (obviously) and overall more efficient, but I've seen very heavy workloads do just fine on 8Gb (and 4Gb, for that matter).
Here is how the MSA defines a "tier"
- Performance=SSD
- Standard=15k or 10k
- Archive=7.2k
You can't change this. Therefore, 10K and 15K drives in the same pool will be seen as the same tier, and I/Os will have the potential to perform at the lowest common denominator, in this case 10K. Also, if you have different RAID types or set sizes in the same tier (within a pool), then I/Os would potentially operate at that lowest common denominator as well.
We'd really need to dig into your workload more. How are you sure that the workload is identical between your environments? What's your percentage of reads vs. writes? Block size? Etc.?
Can you supply me with some performance info? Like outputs from:
- show controller-statistics
- show host-port-statistics
- show disk-group-statistics
- show disk-statistics
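Most of these counters are cumulative since the last reset, so a series of timed samples is far more useful than one snapshot. A sampling-loop sketch (assumes SSH access to the placeholder user/address `manage@msa-ip`; adjust the interval and count to taste):

```shell
# Collect 15 samples of the four counters, one minute apart, into one file.
for i in $(seq 1 15); do
    echo "=== sample $i: $(date) ==="
    for cmd in "show controller-statistics" "show host-port-statistics" \
               "show disk-group-statistics" "show disk-statistics"; do
        ssh manage@msa-ip "$cmd"
    done
    sleep 60
done > msa-perf-samples.txt
```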
Also, if you haven't already, you may want to open a support case.
Thanks.
08-31-2017 12:51 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
I logged tickets in the past with both HP and Microsoft so we could have a look at the storage stack and Hyper-V configuration; unfortunately, nothing could be spotted. HP pointed me at the Hyper-V configuration, and Microsoft pointed at the storage.
Below is the output from the MSA:
# show controller-statistics
Durable ID CPU Load Power On Time (Secs) Bps IOPS Reads Writes Data Read Data Written Num Forwarded Cmds Reset Time Total Power On Hours
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
controller_A 44 2900318 324.5MB 3832 4418188910 4575820235 424.7TB 106.2TB 0 2017-07-28 19:33:31 25652.66
controller_B 19 1207474 73.6MB 2046 1433468006 1066603483 199.2TB 36.7TB 0 2017-08-17 09:47:49 343.44
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# show disk-group-statistics
Name Time Since Reset Reads Writes Data Read Data Written Bps IOPS I/O Resp Time Read Resp Time Write Resp Time Pages Allocated per Min Pages Deallocated per Min Pages Reclaimed Pages Unmapped per Minute
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Std_2B_SAS15k 1206685 17302808 9173891 9.5TB 1474.8GB 1461.7KB 15 9971 9971 14590 4 4 0 1
Std_1B_SAS15k 1206685 15519480 7706018 8979.8GB 1288.7GB 744.9KB 10 9405 9405 15880 4 4 0 0
Std_3B_SAS10k 1206685 227884150 39573989 45.5TB 8144.4GB 14.9MB 102 10585 10585 34155 0 5 0 3
Std_4B_SAS10k 1206685 225518419 40458834 48.5TB 8855.1GB 13.5MB 120 10620 10620 43998 0 20 0 4
Std_5B_SAS10k 1206685 220088369 41184696 46.4TB 8222.2GB 15.6MB 130 6294 6294 28429 0 13 0 2
Std_6B_SAS10k 1206685 235119736 40390953 50.0TB 8075.3GB 13.4MB 126 9791 9791 29310 42 24 0 1
Perf_1B_SSD 1206685 198645618 125453379 13.3TB 10.9TB 10.2MB 335 309 309 285 64 60 0 18
Perf_1A_SSD 2898258 778175036 646926659 55.2TB 39.1TB 14.1MB 350 391 391 368 60 65 0 24
Std_2A_SAS15k 2898258 148210577 65271077 30.0TB 7127.3GB 13.7MB 72 13589 13589 23126 5 5 0 1
Std_1A_SAS15k 2898258 141467892 62267560 29.5TB 6771.4GB 15.5MB 79 8111 8111 21809 1 1 0 0
Std_3A_SAS10k 2898258 605577801 185429748 90.5TB 19.5TB 54.5MB 351 54862 54862 161387 0 6 0 2
Std_4A_SAS10k 2898258 616170680 177800320 89.7TB 19.2TB 56.7MB 427 31991 31991 82427 16 12 0 3
Std_5A_SAS10k 2898258 632796693 174706145 98.9TB 19.4TB 61.8MB 377 39017 39017 116329 0 5 0 0
Std_6A_SAS10k 2898258 617625658 177947408 104.1TB 19.3TB 61.2MB 338 49344 49344 144668 41 17 0 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# show host-port-statistics
Durable ID Bps IOPS Reads Writes Data Read Data Written Queue Depth I/O Resp Time Read Resp Time Write Resp Time Reset Time
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
hostport_A1 74.1MB 1089 1105002619 1145216865 106.3TB 26.5TB 17 6303 12464 765 2017-07-28 19:33:31
hostport_A2 73.3MB 1087 1107728312 1145846388 106.3TB 26.6TB 17 6239 12316 701 2017-07-28 19:33:31
hostport_A3 74.1MB 1086 1103317451 1142942345 106.0TB 26.5TB 16 6137 12118 728 2017-07-28 19:33:31
hostport_A4 74.1MB 1088 1103319151 1142941682 106.0TB 26.5TB 10 6398 12640 715 2017-07-28 19:33:31
hostport_B1 15.7MB 490 358480199 266829239 49.8TB 9.1TB 1 1198 3566 310 2017-08-17 09:47:49
hostport_B2 15.8MB 488 358475298 266867726 49.8TB 9.1TB 0 1285 3764 324 2017-08-17 09:47:49
hostport_B3 16.0MB 490 358489851 266874394 49.8TB 9.1TB 1 1311 3706 367 2017-08-17 09:47:49
hostport_B4 16.2MB 489 358483769 266839986 49.8TB 9.1TB 3 1157 3315 322 2017-08-17 09:47:49
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Success: Command completed successfully. (2017-08-31 09:24:18)
03-27-2018 07:58 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
I have the same problem with Performance Tiering - high latencies for both read and write, while the number of IOPS does not exceed 5000.
oli_4, have you solved your problem?
03-27-2018 08:54 AM - edited 03-27-2018 09:06 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
@oli_4 I have looked at the data that you shared, but it is just a single output from which we can't conclude anything. Controller A CPU usage looks high, but the rest of the data looks fine.
First of all, we shouldn't compare the MSA 2040 with the P2000: the P2000 is a linear array, while the MSA 2040 is a virtual array.
Many factors are involved in troubleshooting a performance issue; it is not a straightforward task. As a baseline: no hardware issue should exist, firmware needs to be up to date, and connected systems (servers, SAN switches) all need to be up to date with drivers and firmware as well.
You need to check the block size set at the host, and based on that decide whether you want high IOPS or high throughput. Specifically, the smaller the I/O size, the more I/Os per second (IOPS) the SAN can process. However, the corollary is a decrease in throughput (as measured in MB/s). Conversely, as I/O size increases, IOPS decreases but throughput increases. When an I/O gets above a certain size, latency also increases, as the time required to transport each I/O grows to the point where the disk itself is no longer the major influence on latency.
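The trade-off can be sanity-checked with quick arithmetic, since throughput is simply IOPS multiplied by I/O size. A sketch (the 5,000 IOPS figure is taken from the original post; the block sizes and 400 IOPS figure are illustrative assumptions):

```shell
# Throughput in MiB/s = IOPS * block size in bytes / 1048576 (integer math).
throughput_mb() { echo $(( $1 * $2 / 1048576 )); }

small=$(throughput_mb 5000 8192)    # 5,000 IOPS at 8 KiB (OLTP-like)
large=$(throughput_mb 400 262144)   # 400 IOPS at 256 KiB (backup-like)
echo "8 KiB @ 5000 IOPS  = ${small} MiB/s"
echo "256 KiB @ 400 IOPS = ${large} MiB/s"
```

So a "low" IOPS number can still move plenty of data at large sequential block sizes, while a high IOPS number at tiny blocks may move very little.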
Typically, workloads can be defined by four categories: I/O size, reads vs. writes, sequential vs. random, and queue depth.
A typical application usually consists of a mix of reads and writes, and of sequential and random I/O.
For example, a Microsoft SQL Server instance running an OLTP-type workload might see disk I/O that is 8K in size, 80 percent read, and 100 percent random.
A disk backup target, on the other hand, might see disk I/O that is 64K or 256K in size, with 90 percent writes and 100 percent sequential.
The type of workload will affect the results of the performance measurement.
Check the Customer Advisory below and disable "In-band SES":
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c05306564
You can check the Customer Advisory below as well; in many situations it has helped to improve performance:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03473698
If you have a specific requirement and want only SSD pages to handle your I/O, use 'Tier Affinity' on the particular volume.
Check the ODX settings on the Windows systems. SPOCK clearly states that Microsoft Offloaded Data Transfer (ODX) is not supported with the MSA 2040. See this other forum thread -> https://community.hpe.com/t5/MSA-Storage/Is-MSA-2040-Certified-for-Windows-ODX/td-p/6967522
https://h20272.www2.hpe.com/SPOCK/Content/ExportPDFView.aspx?Id=91936&typeId=2
I have tried to give you the SPOCK links step by step, with the filter applied.
Download the PDF; on page 4 you will see it clearly mentioned that Microsoft Offloaded Data Transfer (ODX) is not supported.
If you still face the performance issue, then while it is happening capture the outputs below at least 10 to 15 times, along with an MSA log, and log an HPE support case. They will help you.
# show controller-statistics
# show disk-statistics
# show host-port-statistics
# show vdisk-statistics
# show volume-statistics
@IT_Prof Not much data is available to check from the MSA perspective, so it is difficult to tell what is wrong in your situation.
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
01-21-2020 07:08 AM - edited 01-21-2020 09:26 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
@SUBHAJIT KHANBARMAN_1 @HPSDMike
Afternoon.
We have a similar issue with our MSA 2050: poor IOPS in real-world usage, especially when backing up to the MSA from Veeam. All connections between hosts are 10Gb iSCSI.
Would it be possible to upload a config output to a secure location so someone in the know can pore over it and make suggestions please?
thank you!
01-21-2020 07:55 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
[screenshot of MSA performance statistics attached]
01-21-2020 08:54 PM
Re: MSA 2040 Tiering very low IOPS and awful performance
It is difficult to provide any recommendation without verifying the entire setup.
As per the screenshot, both controllers' CPU usage is very low, so that looks fine. However, all drives show more than 200 IOPS, and I don't know the drive model number, so it is difficult for me to check whether that crosses the threshold limit or not.
Another thing I noticed is that all I/O seems to be going through host port A1, so you need to verify the multipath policy on the host system. It seems you have a fixed path policy set, which needs to be changed to round-robin.
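On a Windows/Hyper-V host the path policy can be inspected and switched with the built-in `mpclaim` tool. A sketch from an elevated PowerShell prompt (the policy codes follow Microsoft's MPIO documentation; verify against your own setup before changing anything):

```shell
# Show the current load-balance policy for each MPIO disk.
mpclaim -s -d

# Set the load-balance policy to Round Robin for all MPIO-managed LUNs.
# Policy codes: 1 = Fail Over Only, 2 = Round Robin, 4 = Least Queue Depth.
mpclaim -l -m 2
```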
You can always log a HPE Support case as well to get basic performance analysis.
Hope this helps!
Regards
Subhajit
I am an HPE employee
If you feel this was helpful please click the KUDOS! thumb below!
*********************************************************************
01-21-2020 11:53 PM
Re: MSA 2040 Tiering very low IOPS and awful performance
Thank you, I will.
The drives are all ST8000NM0075 - Seagate 8TB 7200RPM SAS 12Gb/s 256MB cache.
01-22-2020 12:45 AM
Re: MSA 2040 Tiering very low IOPS and awful performance
Thank you for sharing the drive model number. That is a midline SAS drive, which means its safe IOPS value is only approximately 75, yet all of the drives show more than 200 IOPS.
That means a high load on the array is causing the performance issue. You need to control the load, perhaps by reducing the backup jobs from Veeam, to fix this.
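As a rough sizing check, the aggregate random-IOPS budget of a disk group is approximately drives × per-drive IOPS. A sketch using the ~75 IOPS per 7.2K midline drive figure cited above (the 12-drive group size is an illustrative assumption):

```shell
# Rough random-IOPS budget for a disk group: drives * per-drive IOPS.
# 75 IOPS per 7.2K midline SAS drive is the conservative figure cited above.
iops_budget() { echo $(( $1 * 75 )); }

budget=$(iops_budget 12)   # hypothetical 12-drive group
echo "12 x 7.2K MDL drives ~ ${budget} random IOPS"
```

So even a sizeable group of these drives sustains far fewer random IOPS than the 200+ per drive being observed, which is consistent with the latency you are seeing.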
Hope this helps!
Regards
Subhajit