<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: MSA 2050 Jumbo Frames with Windows Server 2016 in HPE MSA Storage</title>
    <link>https://community.hpe.com/t5/hpe-msa-storage/msa-2050-jumbo-frames-with-windows-server-2016/m-p/7078911#M13571</link>
    <description>&lt;P&gt;I am looking into the same issue and found that I was able to set a custom MTU size on an interface via NETSH.&lt;/P&gt;&lt;P&gt;Windows should accept the custom value, but I have not been able to test this yet. If you still have your storage array in a lab, I would be interested in your results; we had to put ours into production in a hurry.&lt;/P&gt;&lt;P&gt;This might get you started. First, list the interfaces along with the current MTU on each:&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;netsh interface ipv4 show interfaces&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Then set the MTU persistently, using the Idx number of the interface (from the command above) in quotes:&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;netsh interface ipv4 set subinterface "10" mtu=1464 store=persistent&lt;/FONT&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 12 Feb 2020 20:12:11 GMT</pubDate>
    <dc:creator>Branden23</dc:creator>
    <dc:date>2020-02-12T20:12:11Z</dc:date>
    <item>
      <title>MSA 2050 Jumbo Frames with Windows Server 2016</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/msa-2050-jumbo-frames-with-windows-server-2016/m-p/7034731#M12618</link>
      <description>&lt;P&gt;I read in a technical paper that jumbo frames are supported at 8900 bytes.&lt;/P&gt;&lt;P&gt;I have enabled jumbo frames on the MSA, network switches, and the Windows Server 2016 NICs.&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, there is no 8900 option in Windows. I selected 9014 as it seemed to be the best option among 1514, 4088, 9014, and 9336 bytes.&lt;/P&gt;&lt;P&gt;Is 9014 bytes the best option? What issues might I expect to indicate there is a problem?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Vint&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 14 Feb 2019 14:02:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/msa-2050-jumbo-frames-with-windows-server-2016/m-p/7034731#M12618</guid>
      <dc:creator>Vint</dc:creator>
      <dc:date>2019-02-14T14:02:34Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 2050 Jumbo Frames with Windows Server 2016</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/msa-2050-jumbo-frames-with-windows-server-2016/m-p/7034767#M12623</link>
      <description>&lt;P&gt;As per the MSA best practice guide, the MTU needs to be set to 8900:&lt;/P&gt;&lt;P&gt;&lt;A href="https://h20195.www2.hpe.com/v2/getpdf.aspx/A00015961ENW.pdf" target="_blank"&gt;https://h20195.www2.hpe.com/v2/getpdf.aspx/A00015961ENW.pdf&lt;/A&gt; (page 42)&lt;/P&gt;&lt;P&gt;HPE fixed this MTU size because they have tested it in the lab. Anything larger than 8900 will cause packet fragmentation, which not only degrades performance but also floods the links with fragments.&lt;/P&gt;&lt;P&gt;So I wouldn't recommend using anything other than an MTU size of 8900 with the MSA 2050.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Hope this helps!&lt;BR /&gt;Regards&lt;BR /&gt;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;Subhajit&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 14 Feb 2019 15:59:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/msa-2050-jumbo-frames-with-windows-server-2016/m-p/7034767#M12623</guid>
      <dc:creator>SUBHAJIT KHANBARMAN_1</dc:creator>
      <dc:date>2019-02-14T15:59:49Z</dc:date>
    </item>
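The 8900-byte MTU recommendation in the post above can be reconciled with the NIC driver's fixed frame-size options by accounting for protocol headers: a driver's "Jumbo Packet" value typically includes the 14-byte Ethernet header, and a don't-fragment ping payload must additionally leave room for the IPv4 and ICMP headers. A minimal sketch of the arithmetic, assuming untagged Ethernet II and option-free IPv4 headers; the ping target address in the final comment is hypothetical:

```shell
# Header arithmetic for the MSA best-practice MTU of 8900 bytes.
MTU=8900               # IPv4 MTU recommended in the MSA best-practice guide
ETH_HDR=14             # Ethernet II header (no VLAN tag, FCS excluded)
IP_HDR=20              # IPv4 header without options
ICMP_HDR=8             # ICMP echo header

# On-wire frame size a driver "Jumbo Packet" setting would need to cover.
FRAME=$((MTU + ETH_HDR))

# Largest payload for a do-not-fragment ping that still fits in one frame.
PING_PAYLOAD=$((MTU - IP_HDR - ICMP_HDR))

echo "frame=$FRAME ping_payload=$PING_PAYLOAD"
# On Windows, verify the path end to end with:  ping -f -l 8872 10.0.0.1
# (10.0.0.1 stands in for an MSA iSCSI port address.)
```

This shows why the 9014-byte driver option is not a match for an 8900-byte array MTU: 9014 corresponds to a 9000-byte MTU, so frames larger than the array's 8900-byte limit can still be emitted.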
    <item>
      <title>Re: MSA 2050 Jumbo Frames with Windows Server 2016</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/msa-2050-jumbo-frames-with-windows-server-2016/m-p/7034786#M12624</link>
      <description>&lt;P&gt;Not only was that response not helpful, it was non-responsive.&amp;nbsp;&lt;SPAN&gt;It seems that given an almost completely homogeneous HPE environment (all except the "Aruba" switches), a simple answer could be had.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;There is no place, either in the MSA (via CLI: &lt;FONT face="courier new,courier"&gt;jumbo-frame enabled&lt;/FONT&gt;) or within the Windows Advanced properties for the NICs, to &lt;U&gt;specify&lt;/U&gt; an MTU value of 8900. You must select a value from the list provided, and that list does not include 8900 anywhere.&lt;/P&gt;&lt;P&gt;I am using two HPE NICs in each of my three HPE servers to connect to my HPE MSA 2050 iSCSI storage. Neither the HPE Ethernet 10Gb 2-Port 535FLR-T nor the HPE Ethernet 10Gb 2-Port 535T Adapter allows me to &lt;U&gt;specify&lt;/U&gt; 8900 in the Jumbo Packet pull-down property (see original post). Furthermore, the RoCE MTU Advanced Property also does not allow me to &lt;U&gt;specify&lt;/U&gt; a value of 8900; it only allows me to select from 256, 512, 1024, 2048, or 4096.&lt;/P&gt;</description>
      <pubDate>Thu, 14 Feb 2019 18:42:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/msa-2050-jumbo-frames-with-windows-server-2016/m-p/7034786#M12624</guid>
      <dc:creator>Vint</dc:creator>
      <dc:date>2019-02-14T18:42:17Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 2050 Jumbo Frames with Windows Server 2016</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/msa-2050-jumbo-frames-with-windows-server-2016/m-p/7078911#M13571</link>
      <description>&lt;P&gt;I am looking into the same issue and found that I was able to set a custom MTU size on an interface via NETSH.&lt;/P&gt;&lt;P&gt;Windows should accept the custom value, but I have not been able to test this yet. If you still have your storage array in a lab, I would be interested in your results; we had to put ours into production in a hurry.&lt;/P&gt;&lt;P&gt;This might get you started. First, list the interfaces along with the current MTU on each:&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;netsh interface ipv4 show interfaces&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Then set the MTU persistently, using the Idx number of the interface (from the command above) in quotes:&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;netsh interface ipv4 set subinterface "10" mtu=1464 store=persistent&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 12 Feb 2020 20:12:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/msa-2050-jumbo-frames-with-windows-server-2016/m-p/7078911#M13571</guid>
      <dc:creator>Branden23</dc:creator>
      <dc:date>2020-02-12T20:12:11Z</dc:date>
    </item>
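Putting the thread together, the NETSH workaround above would look like this for the MSA's recommended 8900-byte MTU. This is an untested sketch, as the post itself notes: the interface index "10" is only an example and must be replaced with the Idx value reported for your iSCSI NIC, and the commands must be run from an elevated prompt.

```shell
# List interfaces with their current MTU; note the Idx of the iSCSI NIC.
netsh interface ipv4 show interfaces

# Set the MTU to 8900 persistently on interface index 10 (example value).
netsh interface ipv4 set subinterface "10" mtu=8900 store=persistent

# Confirm the new MTU took effect and will survive a reboot.
netsh interface ipv4 show subinterfaces
```

Note that netsh sets the IP-layer MTU; the NIC's Jumbo Packet driver property must still be raised (e.g. to the 9014 option) so the hardware accepts frames that large, since netsh cannot exceed what the driver allows.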
  </channel>
</rss>

