02-28-2022 08:59 AM - last edited on 03-04-2022 12:13 AM by support_s
MSA 2062 Extremely Slow
Hello,
I seem to be running into a strange performance issue with our MSA 2062 whenever I use the 10K drives, regardless of the RAID type and configuration. I have tried RAID 1, 5, 6, 10, and also MSA-DP+, and they all behave the same way no matter how the disks are configured.
Hardware Details and Configuration:
The array came with 2 built-in SSDs and 14 Seagate 10K 2.4 TB drives. We are using 4 10G SFP connections to 2 dedicated 10G Cisco switches, with 2 dedicated iSCSI subnets as recommended on page 4 of the HPE MSA 1060/2060/2062 Storage Arrays guide.
We are connecting 3 servers to the MSA 2062 array using dedicated, routed iSCSI VLANs. We have enabled jumbo frames on each 10G connection, and all servers are able to reach all iSCSI data IPs.
MSA
  iSCSI_A1  10.10.10.21
  iSCSI_A2  10.10.11.21
  iSCSI_B1  10.10.10.22
  iSCSI_B2  10.10.11.22
SERVER01
  iSCSI_A  10.10.10.11  ->  A1 10.10.10.21, B1 10.10.10.22
  iSCSI_B  10.10.11.11  ->  A2 10.10.11.21, B2 10.10.11.22
SERVER02
  iSCSI_A  10.10.10.12  ->  A1 10.10.10.21, B1 10.10.10.22
  iSCSI_B  10.10.11.12  ->  A2 10.10.11.21, B2 10.10.11.22
SERVER03
  iSCSI_A  10.10.10.13  ->  A1 10.10.10.21, B1 10.10.10.22
  iSCSI_B  10.10.11.13  ->  A2 10.10.11.21, B2 10.10.11.22
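To confirm jumbo frames end to end, one quick check (a sketch; the vmkernel port names vmk1/vmk2 are assumptions and may differ in your setup) is to ping each target IP from each ESXi host with a non-fragmentable payload sized for a 9000-byte MTU:

```shell
# Sketch only: vmk1/vmk2 are assumed iSCSI vmkernel port names.
# 8972 = 9000-byte MTU minus 20-byte IP and 8-byte ICMP headers.
for ip in 10.10.10.21 10.10.10.22; do
  vmkping -I vmk1 -d -s 8972 "$ip"   # -d: don't fragment
done
for ip in 10.10.11.21 10.10.11.22; do
  vmkping -I vmk2 -d -s 8972 "$ip"
done
```

If any of these fail while a default-size ping succeeds, jumbo frames are not enabled somewhere along that path.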
I have followed the MSA installation guide, the Storage Management Guide, and the HPE MSA Storage Configuration and Best Practices for VMware vSphere guide, and have applied all of the recommended settings to the MSA 2062, with no success.
If I use a RAID 1 group made of the 2 SSDs, I have no issues at all when moving a VM to that VMFS datastore. However, with any of the 10K disks, regardless of the RAID configuration I choose, performance is terribly slow. This is not a connectivity issue; I have ruled that out.
The array, hard disks, and all servers are up to date on firmware. The MSA health check tool reports no issues and confirms the array is in a healthy state. I find it hard to believe that the 10K disks are simply underperforming.
Any help is greatly appreciated.
03-03-2022 12:39 AM
Re: MSA 2062 Extremely Slow
Hi,
I hope the controller firmware is at the latest version, IN110P001.
It includes a fix for an IO stall that causes latency until the controller is restarted.
https://support.hpe.com/hpesc/public/docDisplay?docId=a00119276en_us&docLocale=en_US
Ensure that the path selection policy is set to Round Robin and that the IOPS limit is set to 1:
https://kb.vmware.com/s/article/2069356
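On an ESXi host that boils down to two esxcli commands per MSA device (a sketch; naa.xxxx is a placeholder — list your actual device IDs first with `esxcli storage nmp device list`):

```shell
# Switch the device to the Round Robin path selection policy,
# then lower the path-switch threshold to 1 IO.
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set \
  --type=iops --iops=1 --device naa.xxxx
```

Repeat for each MSA-backed device, or script the loop across the device list.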
Since it's an MSA 2062, it would have the performance tier license.
You may consider creating a single pool using the SSD disks and the SAS disks to improve overall performance.
Whenever you experience latency, run the command below a few times (abort any scrub jobs before running it):
show disk-statistics
If the disk IOPS is consistently above 150, it indicates an overloaded array.
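As a rough way to interpret those samples, here is a minimal sketch (the helper name, disk names, and readings are made up for illustration; only the 150-IOPS rule of thumb comes from the advice above) that flags disks whose readings stay above the threshold across every run:

```python
# Hypothetical helper: flag disks whose IOPS readings, collected over
# several 'show disk-statistics' runs, all exceed a threshold.
def overloaded_disks(samples, threshold=150):
    """samples maps disk name -> list of IOPS readings over time.
    A disk counts as overloaded only if every reading exceeds threshold."""
    return sorted(
        disk for disk, iops in samples.items()
        if iops and all(v > threshold for v in iops)
    )

# Made-up readings from three polling runs:
readings = {
    "1.1": [210, 198, 240],  # consistently high -> overloaded
    "1.2": [90, 130, 110],   # within limits
    "1.3": [180, 95, 160],   # spiky, but not consistently high
}
print(overloaded_disks(readings))  # ['1.1']
```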
You could try mapping an MSA test volume directly to a Windows VM and running an IOmeter test to check the performance.
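If IOmeter isn't at hand, Microsoft's diskspd tool generates a comparable load; a sketch, assuming the test volume is mounted as E: inside the Windows VM:

```shell
# 60-second random test: 64 KiB blocks, 4 threads, 32 outstanding IOs,
# 30% writes, caching disabled (-Sh) so the disks are actually exercised.
diskspd.exe -c1G -b64K -d60 -o32 -t4 -r -w30 -Sh E:\test.dat
```

Comparing the same run against the SSD-backed datastore and a 10K-backed one would quantify the gap you're seeing.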
While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company