<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>IOPS drop on MSA2060 Pool B in HPE MSA Storage</title>
    <link>https://community.hpe.com/t5/hpe-msa-storage/iops-drop-on-msa2060-pool-b/m-p/7195480#M16749</link>
    <description>&lt;P&gt;Hello everyone, thank you for taking the time to read this.&lt;/P&gt;&lt;P&gt;I have an MSA2060 iSCSI 10Gb (All-Flash, 10x 3.84TB) on a DL380 Gen10 for vSphere 8.&lt;/P&gt;&lt;P&gt;I was planning to use Pool A (RAID6; 4+1) for an Oracle VM and Pool B (RAID6; 4+1) for a SQL VM.&lt;/P&gt;&lt;P&gt;I notice a dramatic drop in IOPS on Pool B, no matter which controller I shut down:&lt;/P&gt;&lt;P&gt;Controllers A+B: Pool A 80k IOPS / Pool B 80k IOPS&lt;BR /&gt;Controller A only: Pool A 80k / Pool B 29k&lt;BR /&gt;Controller B only: Pool A 78k / Pool B 29k&lt;/P&gt;&lt;P&gt;All links use jumbo frames, and both ESXi and the MSA are up to date.&lt;/P&gt;&lt;P&gt;I ran the tests again after swapping the ESXi NICs, the controllers in the MSA, and the network patch cables.&lt;BR /&gt;I get exactly the same problem with direct attach.&lt;/P&gt;&lt;P&gt;Any ideas?&lt;/P&gt;</description>
    <pubDate>Thu, 31 Aug 2023 08:38:19 GMT</pubDate>
    <dc:creator>OPTISECURITE</dc:creator>
    <dc:date>2023-08-31T08:38:19Z</dc:date>
    <item>
      <title>IOPS drop on MSA2060 Pool B</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/iops-drop-on-msa2060-pool-b/m-p/7195480#M16749</link>
      <description>&lt;P&gt;Hello everyone, thank you for taking the time to read this.&lt;/P&gt;&lt;P&gt;I have an MSA2060 iSCSI 10Gb (All-Flash, 10x 3.84TB) on a DL380 Gen10 for vSphere 8.&lt;/P&gt;&lt;P&gt;I was planning to use Pool A (RAID6; 4+1) for an Oracle VM and Pool B (RAID6; 4+1) for a SQL VM.&lt;/P&gt;&lt;P&gt;I notice a dramatic drop in IOPS on Pool B, no matter which controller I shut down:&lt;/P&gt;&lt;P&gt;Controllers A+B: Pool A 80k IOPS / Pool B 80k IOPS&lt;BR /&gt;Controller A only: Pool A 80k / Pool B 29k&lt;BR /&gt;Controller B only: Pool A 78k / Pool B 29k&lt;/P&gt;&lt;P&gt;All links use jumbo frames, and both ESXi and the MSA are up to date.&lt;/P&gt;&lt;P&gt;I ran the tests again after swapping the ESXi NICs, the controllers in the MSA, and the network patch cables.&lt;BR /&gt;I get exactly the same problem with direct attach.&lt;/P&gt;&lt;P&gt;Any ideas?&lt;/P&gt;</description>
      <pubDate>Thu, 31 Aug 2023 08:38:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/iops-drop-on-msa2060-pool-b/m-p/7195480#M16749</guid>
      <dc:creator>OPTISECURITE</dc:creator>
      <dc:date>2023-08-31T08:38:19Z</dc:date>
    </item>
    <item>
      <title>Re: IOPS drop on MSA2060 Pool B</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/iops-drop-on-msa2060-pool-b/m-p/7195691#M16757</link>
      <description>&lt;P style="margin: 0;"&gt;Hi,&lt;/P&gt;
&lt;P style="margin: 0;"&gt;I understand that the issue follows the Pool B volume, even if you shut down storage controller B.&lt;BR /&gt;This rules out controller B hardware or the network path as the suspect.&lt;BR /&gt;Could you check whether the MAC addresses of the 8 host ports across both controllers are unique?&lt;BR /&gt;Are you using a utility such as Iometer to measure performance?&lt;BR /&gt;Is the path selection policy set to Round Robin with the IOPS value set to 1 in ESXi?&lt;BR /&gt;The default would be MRU with an IOPS value of 1000.&lt;BR /&gt;I cannot think of any reason for a drop in performance on Pool B alone if the configuration is identical.&lt;BR /&gt;It would be good to have the MSA logs reviewed by HPE support after logging a support case, if that has not been done already.&lt;BR /&gt;Note that swapping the controller slots can create issues with the pool; please avoid that troubleshooting step in the future.&lt;/P&gt;</description>
      <pubDate>Mon, 04 Sep 2023 08:32:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/iops-drop-on-msa2060-pool-b/m-p/7195691#M16757</guid>
      <dc:creator>ArunKKR</dc:creator>
      <dc:date>2023-09-04T08:32:06Z</dc:date>
    </item>
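    <!-- The PSP and IOPS settings discussed in the reply above can be inspected and changed per device with esxcli. A minimal sketch, assuming an ESXi shell; the naa.600c... device IDs below are placeholders for the real MSA LUN identifiers:

```shell
# List each MSA device with its current path selection policy
esxcli storage nmp device list | grep -A 2 naa.600c

# Set Round Robin as the PSP for one device (placeholder device ID)
esxcli storage nmp device set --device=naa.600c0ff000000000000000000000 --psp=VMW_PSP_RR

# Lower the Round Robin path-switch threshold from the default 1000 IOPS to 1
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=naa.600c0ff000000000000000000000
```

The deviceconfig change is per device and does not persist to newly presented LUNs; the satp-level default shown later in the thread covers those. -->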
    <item>
      <title>Re: IOPS drop on MSA2060 Pool B</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/iops-drop-on-msa2060-pool-b/m-p/7195781#M16771</link>
      <description>&lt;P&gt;Hello, thank you for your reply.&lt;/P&gt;&lt;P&gt;To start from the best possible conditions, I rebuilt everything from scratch.&lt;/P&gt;&lt;P&gt;On a fresh ESXi installation, before configuring iSCSI, I set Round Robin as the default PSP:&lt;BR /&gt;esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA&lt;/P&gt;&lt;P&gt;Once that was done, I configured&lt;BR /&gt;2 vSwitches, each with 1 NIC, 1 port group, and 1 vmkernel port.&lt;BR /&gt;I enabled the software iSCSI adapter with the 2 vmkernel ports, and the IPs of controllers A and B as dynamic targets.&lt;/P&gt;&lt;P&gt;Once the configuration was OK and the two pools A and B were mounted, I set up 2 Server 2022 VMs with VMware Tools and Windows up to date.&lt;/P&gt;&lt;P&gt;I rebooted the ESXi host and started testing.&lt;/P&gt;&lt;P&gt;With controllers A and B on:&lt;BR /&gt;Pool A and B synchronous: 77k IOPS&lt;BR /&gt;Pool A and B asynchronous: 88k IOPS&lt;/P&gt;&lt;P&gt;With A or B shut down:&lt;BR /&gt;Pool A or Pool B: 20k IOPS&lt;/P&gt;&lt;P&gt;So it is not as good as with VMW_PSP_MRU, where at least Pool A remained functional no matter which controller was shut down.&lt;/P&gt;&lt;P&gt;I also tried 1 vSwitch with 1 active NIC and 1 standby NIC: same result.&lt;/P&gt;&lt;P&gt;I then reset VMW_PSP_MRU as the default:&lt;BR /&gt;esxcli storage nmp satp set --default-psp=VMW_PSP_MRU --satp=VMW_SATP_ALUA&lt;/P&gt;&lt;P&gt;and the same for the existing LUNs:&lt;/P&gt;&lt;P&gt;for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep naa.600c`; do esxcli storage nmp device set -E --device=$i; done&lt;/P&gt;&lt;P&gt;ESXi reboot.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Controllers A and B on:&lt;BR /&gt;Pool A and B synchronous: 70k IOPS&lt;BR /&gt;Pool A and B asynchronous: 77k IOPS&lt;/P&gt;&lt;P&gt;Controller A or B off:&lt;BR /&gt;Pool A and B synchronous: 15k IOPS&lt;BR /&gt;Pool A and B asynchronous: 20k IOPS&lt;/P&gt;&lt;P&gt;So I have lost the behavior where Pool A remained viable regardless of which controller was switched off.&lt;/P&gt;&lt;P&gt;I now also have doubts about the ESXi side, so I am going to ask the question in parallel on the VMware side.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Sep 2023 07:54:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/iops-drop-on-msa2060-pool-b/m-p/7195781#M16771</guid>
      <dc:creator>OPTISECURITE</dc:creator>
      <dc:date>2023-09-05T07:54:06Z</dc:date>
    </item>
    <item>
      <title>Re: IOPS drop on MSA2060 Pool B</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/iops-drop-on-msa2060-pool-b/m-p/7195785#M16772</link>
      <description>&lt;P style="margin: 0;"&gt;Hi,&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;Direct attach is not a supported configuration with ESXi.&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;If there are connections from at least 2 host ports per controller to each host server, the recommended PSP is Round Robin with an IOPS value of 1: &lt;A href="https://kb.vmware.com/s/article/2069356" target="_blank"&gt;https://kb.vmware.com/s/article/2069356&lt;/A&gt;. For testing purposes, you could also try disabling jumbo frames end to end, as well as disabling flow control on the switch if it is enabled. I would suggest logging an HPE support case so the MSA logs can be reviewed for further suggestions.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Sep 2023 08:22:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/iops-drop-on-msa2060-pool-b/m-p/7195785#M16772</guid>
      <dc:creator>ArunKKR</dc:creator>
      <dc:date>2023-09-05T08:22:04Z</dc:date>
    </item>
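    <!-- A quick way to confirm that jumbo frames actually pass end to end, as suggested in the reply above, is a vmkping with the don't-fragment bit set and a payload just under the 9000-byte MTU. A minimal sketch, assuming an ESXi shell; the vmk name and target IP are placeholders for the actual iSCSI vmkernel port and MSA host-port address:

```shell
# Confirm the MTU configured on each vmkernel interface
esxcli network ip interface list

# 8972 = 9000-byte MTU minus 20-byte IP header and 8-byte ICMP header;
# -d sets the don't-fragment bit, so the ping fails if any hop lacks jumbo support
vmkping -I vmk1 -d -s 8972 192.168.10.10
```

If the 8972-byte ping fails while a default-size ping succeeds, some device in the path (vSwitch, physical switch port, or MSA host port) is not passing jumbo frames. -->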
  </channel>
</rss>

