08-31-2023 01:38 AM
IOPS drop on MSA2060 Pool B
Hello everyone, and thank you for taking the time to read this.
I have an MSA 2060 iSCSI 10Gb (all-flash, 10x 3.84 TB) attached to a DL380 Gen10 running vSphere 8.
I was planning to use Pool A (RAID 6, 4+1) for an Oracle VM and Pool B (RAID 6, 4+1) for a SQL VM.
However, I am seeing a dramatic loss of IOPS on Pool B, no matter which controller I shut down.
Controllers A+B: Pool A 80k IOPS / Pool B 80k IOPS
Controller A only: Pool A 80k / Pool B 29k
Controller B only: Pool A 78k / Pool B 29k
All paths use jumbo frames, and both ESXi and the MSA are up to date.
I have already tested swapping the ESXi NICs, the controllers in the MSA, and the network patch cables.
I have exactly the same problem with Direct-Attach.
Any ideas?
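For reference, the jumbo frame path can be verified end to end from the ESXi shell with vmkping (vmk1 and the MSA port IP below are placeholders, not my actual values):

# 8972-byte payload with the don't-fragment bit set;
# this only succeeds if MTU 9000 is in place along the whole path
vmkping -I vmk1 -d -s 8972 192.168.10.100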
Tags: 2060 IOPS
09-04-2023 01:32 AM
Re: IOPS drop on MSA2060 Pool B
Hi,
I understand that the issue follows the Pool B volume even if you shut down storage controller B.
This rules out controller B hardware or the network path as the suspect.
Could you check whether the MAC addresses of the 8 host ports across both controllers are unique?
Are you using a utility like IOmeter to measure the performance?
Is the path selection policy set to Round Robin with the IOPS value set to 1 in ESXi?
The default setting would be MRU with an IOPS value of 1000.
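As a minimal sketch, the current policy can be checked and Round Robin with an IOPS value of 1 applied per device as follows (the naa ID is a placeholder for one of your MSA volumes):

# Show the current PSP for a given device
esxcli storage nmp device list --device=naa.600c0ffXXXXXXXXX

# Set Round Robin on the device and switch paths after every I/O
esxcli storage nmp device set --device=naa.600c0ffXXXXXXXXX --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.600c0ffXXXXXXXXX --type=iops --iops=1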
I am unable to think of any reason for this drop in performance just in Pool B if the configuration is identical.
It would be good to get the MSA logs reviewed by HPE support after logging a support case, if it has not been done already.
Also, swapping the controller slots can create issues with the pools; please avoid that troubleshooting step in the future.
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

09-05-2023 12:54 AM
Re: IOPS drop on MSA2060 Pool B
Hello, thank you for your reply.
To get a clean baseline, I started again from scratch.
So: a fresh installation of ESXi, and before configuring iSCSI I set Round Robin as the default:
esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA
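To confirm the new default, the SATP table can be listed; VMW_SATP_ALUA should now show VMW_PSP_RR as its default PSP:

esxcli storage nmp satp list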
Once that was done, I configured two vSwitches, each with one NIC, one port group, and one vmk.
I then enabled software iSCSI bound to the two vmk ports, with the IPs of controllers A and B as dynamic targets (roughly the steps sketched below).
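For reference, roughly the equivalent CLI steps (vmhba64, vmk1/vmk2, and the controller IPs are placeholders, not my actual values):

# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true

# Bind both VMkernel ports to the iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Add the controller IPs as dynamic (send target) discovery addresses
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.100:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.20.100:3260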
Once the config was OK and the two pools A and B were mounted, I set up two Windows Server 2022 VMs, with VMware Tools and Windows fully up to date.
I rebooted the ESXi host and started testing.
With controllers A and B both on:
Pool A and B synchronous: 77k IOPS
Pool A and B asynchronous: 88k IOPS
With A or B shut down:
Pool A or Pool B: 20k IOPS
So this is worse than with VMW_PSP_MRU, where at least Pool A stayed functional regardless of which controller was shut down.
I also tried one vSwitch with one active NIC and one standby NIC: same result.
I then set VMW_PSP_MRU back as the default:
esxcli storage nmp satp set --default-psp=VMW_PSP_MRU --satp=VMW_SATP_ALUA
and reset the existing devices too:
# Reset each MSA device (naa.600c...) back to the SATP's default PSP
for i in $(esxcfg-scsidevs -c | awk '{print $1}' | grep naa.600c); do
  esxcli storage nmp device set --default --device=$i
done
ESXi reboot.
Controllers A and B on:
Pool A and B synchronous: 70k IOPS
Pool A and B asynchronous: 77k IOPS
Controller A or B off:
Pool A and B synchronous: 15k IOPS
Pool A and B asynchronous: 20k IOPS
So I have lost the behavior where Pool A remained viable regardless of which controller was switched off.
I now also have doubts about the ESXi side, so I am going to ask the question in parallel with VMware.
09-05-2023 01:22 AM
Re: IOPS drop on MSA2060 Pool B
Hi,
Direct attach is not a supported configuration with ESXi.
If there are connections from at least 2 host ports per controller to each host server, the recommended PSP is Round Robin with an IOPS value of 1: https://kb.vmware.com/s/article/2069356. For testing purposes, you could try disabling jumbo frames end to end, as well as disabling flow control on the switch if it is enabled. I would also suggest logging an HPE support case so the MSA logs can be reviewed for further suggestions.
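If it helps with that test, the MTU can be dropped back to 1500 on both the vSwitch and the VMkernel port for the duration of the test (vSwitch1 and vmk1 are placeholder names):

# Revert the vSwitch MTU
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=1500

# Revert the VMkernel port MTU
esxcli network ip interface set --interface-name=vmk1 --mtu=1500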
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
