<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Storage Spaces Direct with in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6988102#M21072</link>
    <description>&lt;P&gt;I never found a solution to this, and thus the S2D cluster was never stable enough to go into production. I had multiple tickets raised with Microsoft to find a solution. Nodes BSOD randomly; we applied patches (even undistributed patches) as of November 2017. They are saying it's a driver-related issue and are ploughing through blue-screen memory dumps.&lt;/P&gt;&lt;P&gt;Combine that with S2D being pulled from 2016 v1709; erm, no thanks. The plan is to reconverge the hardware in 3 years when the current cluster becomes a DR, and S2D will have been out for a few years by then.&lt;/P&gt;&lt;P&gt;We have since lost all confidence in S2D as a solution and are presently purchasing a 2-node G10 360 SOFS cluster with 3x D3710 chassis directly attached. We are ripping all the storage out of the 4 nodes and putting it into the D3710s, ditching the 556FLRs, and replacing them with 640FLR Mellanox throughout. The 556FLRs will be repurposed elsewhere in less intensive workloads and go towards switch decommissioning.&lt;/P&gt;&lt;P&gt;RIP S2D (for now).&lt;/P&gt;</description>
    <pubDate>Fri, 10 Nov 2017 16:55:21 GMT</pubDate>
    <dc:creator>richsmif76</dc:creator>
    <dc:date>2017-11-10T16:55:21Z</dc:date>
    <item>
      <title>Storage Spaces Direct with</title>
      <link>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6975332#M21024</link>
      <description>&lt;P&gt;I have an HP FlexFabric 2-port 10Gb 556FLR-SFP+ and a FlexFabric 557SFP+ 10Gb in a DL380 G9.&lt;/P&gt;&lt;P&gt;Four of these servers are configured with 2016 DC, and I am trying to get decent throughput on S2D. Write speed from one volume to another is 150Mb/s. I have 52x 1.2TB SAS drives and 12x 800GB SSDs.&lt;/P&gt;&lt;P&gt;I have rebuilt the servers from scratch and am trying to configure QoS; when I run Get-NetAdapterQos, nothing is returned. I have run the latest SUM (Aug 17), so I should technically have the latest drivers.&lt;/P&gt;&lt;P&gt;Originally I configured with VMM. I've lost complete faith in this setup, as the networking performance is dreadful.&lt;/P&gt;&lt;P&gt;Using 2x HP 5700 switches.&lt;/P&gt;&lt;P&gt;Please help.&lt;/P&gt;</description>
      <pubDate>Thu, 31 Aug 2017 19:45:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6975332#M21024</guid>
      <dc:creator>richsmif76</dc:creator>
      <dc:date>2017-08-31T19:45:29Z</dc:date>
    </item>
    <item>
      <title>Re: Storage Spaces Direct with</title>
      <link>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6975343#M21025</link>
      <description>&lt;P&gt;The other problem I have is that Enable-NetAdapterQos does not work; it says it can't find the adapter:&lt;/P&gt;&lt;P&gt;PS C:\Windows\system32&amp;gt; Get-NetAdapter -InterfaceAlias "embedded flex*" | ft name&lt;/P&gt;&lt;P&gt;name&lt;BR /&gt;----&lt;BR /&gt;Embedded FlexibleLOM 1 Port 2&lt;BR /&gt;Embedded FlexibleLOM 1 Port 1&lt;/P&gt;&lt;P&gt;PS C:\Windows\system32&amp;gt; Enable-NetAdapterQos -InterfaceAlias "Embedded FlexibleLOM 1 Port 2"&lt;BR /&gt;Enable-NetAdapterQos : No MSFT_NetAdapterQosSettingData objects found with property 'Name' equal to 'Embedded FlexibleLOM 1 Port 2'. Verify the value of the property and retry.&lt;BR /&gt;At line:1 char:1&lt;BR /&gt;+ Enable-NetAdapterQos -InterfaceAlias "Embedded FlexibleLOM 1 Port 2"&lt;BR /&gt;+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; + CategoryInfo&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : ObjectNotFound: (Embedded FlexibleLOM 1 Port 2:String) [Enable-NetAdapterQos], CimJobException&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; + FullyQualifiedErrorId : CmdletizationQuery_NotFound_Name,Enable-NetAdapterQos&lt;/P&gt;&lt;P&gt;PS C:\Windows\system32&amp;gt;&lt;/P&gt;</description>
      <pubDate>Thu, 31 Aug 2017 20:13:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6975343#M21025</guid>
      <dc:creator>richsmif76</dc:creator>
      <dc:date>2017-08-31T20:13:22Z</dc:date>
    </item>
    <item>
      <title>Re: Storage Spaces Direct with</title>
      <link>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6975739#M21031</link>
      <description>&lt;P&gt;Having raised the issue with Microsoft directly, it seems our network cards are not on the supported list, even though they support SMB Direct.&lt;/P&gt;&lt;P&gt;Can anyone confirm whether they have tried S2D on the following cards, and what settings they used?&lt;/P&gt;&lt;P&gt;FlexFabric 556FLR-SFP+&lt;/P&gt;&lt;P&gt;FlexFabric 557-SFP+&lt;/P&gt;</description>
      <pubDate>Wed, 06 Sep 2017 10:54:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6975739#M21031</guid>
      <dc:creator>richsmif76</dc:creator>
      <dc:date>2017-09-06T10:54:57Z</dc:date>
    </item>
    <item>
      <title>Re: Storage Spaces Direct with</title>
      <link>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6977664#M21041</link>
      <description>&lt;P&gt;Same error message when I try &lt;SPAN&gt;Enable-NetAdapterQos&lt;/SPAN&gt;.&lt;/P&gt;&lt;P&gt;I'm using the HPE FlexFabric 10Gb 2-port 556FLR-SFP+ adapter, which is RDMA (RoCE) compatible.&lt;/P&gt;&lt;P&gt;Did you get any feedback from Microsoft?&lt;/P&gt;</description>
      <pubDate>Fri, 22 Sep 2017 15:07:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6977664#M21041</guid>
      <dc:creator>skillful</dc:creator>
      <dc:date>2017-09-22T15:07:10Z</dc:date>
    </item>
    <item>
      <title>Re: Storage Spaces Direct with</title>
      <link>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6987164#M21071</link>
      <description>&lt;P&gt;I'm also running into the same issue with the Get-NetAdapterQos and Enable-NetAdapterQos commands, using the same 556FLR-SFP+ adapters.&lt;/P&gt;&lt;P&gt;Did either of you find a solution for this, or a workaround?&lt;/P&gt;&lt;P&gt;I had previously configured my adapters for Storage Spaces using the Emulex OneCommand Manager utility, and between that and the switch I had configured much of the QoS stuff before. I wonder if the issue is just that the driver for this adapter doesn't support configuration with these PowerShell commands, but can be configured properly using other tools.&lt;/P&gt;</description>
      <pubDate>Wed, 01 Nov 2017 16:49:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6987164#M21071</guid>
      <dc:creator>Aurock</dc:creator>
      <dc:date>2017-11-01T16:49:56Z</dc:date>
    </item>
    <item>
      <title>Re: Storage Spaces Direct with</title>
      <link>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6988102#M21072</link>
      <description>&lt;P&gt;I never found a solution to this, and thus the S2D cluster was never stable enough to go into production. I had multiple tickets raised with Microsoft to find a solution. Nodes BSOD randomly; we applied patches (even undistributed patches) as of November 2017. They are saying it's a driver-related issue and are ploughing through blue-screen memory dumps.&lt;/P&gt;&lt;P&gt;Combine that with S2D being pulled from 2016 v1709; erm, no thanks. The plan is to reconverge the hardware in 3 years when the current cluster becomes a DR, and S2D will have been out for a few years by then.&lt;/P&gt;&lt;P&gt;We have since lost all confidence in S2D as a solution and are presently purchasing a 2-node G10 360 SOFS cluster with 3x D3710 chassis directly attached. We are ripping all the storage out of the 4 nodes and putting it into the D3710s, ditching the 556FLRs, and replacing them with 640FLR Mellanox throughout. The 556FLRs will be repurposed elsewhere in less intensive workloads and go towards switch decommissioning.&lt;/P&gt;&lt;P&gt;RIP S2D (for now).&lt;/P&gt;</description>
      <pubDate>Fri, 10 Nov 2017 16:55:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6988102#M21072</guid>
      <dc:creator>richsmif76</dc:creator>
      <dc:date>2017-11-10T16:55:21Z</dc:date>
    </item>
    <item>
      <title>Re: Storage Spaces Direct with</title>
      <link>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6996989#M21129</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;Can you try this command in PowerShell (administrator mode): Get-NetAdapter | fl, and post the output here, if any?&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;M.Fuhr&lt;/P&gt;</description>
      <pubDate>Wed, 14 Feb 2018 13:58:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6996989#M21129</guid>
      <dc:creator>HP-Schrauber</dc:creator>
      <dc:date>2018-02-14T13:58:24Z</dc:date>
    </item>
    <item>
      <title>Re: Storage Spaces Direct with</title>
      <link>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6996990#M21130</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I have a little script for the Storage Spaces Direct configuration on Server 2016 Datacenter nodes.&lt;BR /&gt;In my case the adapter names are 10GBPort1 and 10GBPort2; my hardware is one HP 546FLR-SFP card.&lt;/P&gt;&lt;P&gt;Script to configure QoS etc.:&lt;/P&gt;&lt;P&gt;# Clear previous configurations&lt;BR /&gt;Remove-NetQosTrafficClass&lt;BR /&gt;Remove-NetQosPolicy -Confirm:$False&lt;BR /&gt;&lt;BR /&gt;# Enable DCB, if it's not already done through the wizard&lt;BR /&gt;Install-WindowsFeature Data-Center-Bridging&lt;BR /&gt;&lt;BR /&gt;# Disable the DCBX willing setting&lt;BR /&gt;Set-NetQosDcbxSetting -Willing $False&lt;BR /&gt;&lt;BR /&gt;# Create QoS policies and tag each type of traffic with the relevant priority&lt;BR /&gt;New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3&lt;BR /&gt;New-NetQosPolicy "Cluster" -Cluster -PriorityValue8021Action 5&lt;BR /&gt;New-NetQosPolicy "DEFAULT" -Default -PriorityValue8021Action 3&lt;BR /&gt;New-NetQosPolicy "TCP" -IPProtocolMatchCondition TCP -PriorityValue8021Action 1&lt;BR /&gt;New-NetQosPolicy "UDP" -IPProtocolMatchCondition UDP -PriorityValue8021Action 1&lt;BR /&gt;New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 60 -Algorithm ETS&lt;BR /&gt;# Traffic-class priority matches the "Cluster" policy tag above (5)&lt;BR /&gt;New-NetQosTrafficClass "Cluster" -Priority 5 -BandwidthPercentage 5 -Algorithm ETS&lt;BR /&gt;&lt;BR /&gt;# If VLANs are used, mark the egress traffic with the relevant VLAN ID&lt;BR /&gt;Set-NetAdapterAdvancedProperty -Name &amp;lt;Network Adapter Name&amp;gt; -RegistryKeyword "VlanID" -RegistryValue &amp;lt;ID&amp;gt;&lt;BR /&gt;&lt;BR /&gt;# Enable Priority Flow Control (PFC) on the SMB priority (3); disable it for the others&lt;BR /&gt;Enable-NetQosFlowControl -Priority 3&lt;BR /&gt;Disable-NetQosFlowControl 0,1,2,4,5,6,7&lt;BR /&gt;&lt;BR /&gt;# Enable QoS on the relevant interfaces&lt;BR /&gt;Enable-NetAdapterQos -InterfaceAlias "10GBPort1"&lt;BR /&gt;Enable-NetAdapterQos -InterfaceAlias "10GBPort2"&lt;BR /&gt;&lt;BR /&gt;# Enable jumbo frames on the network adapters with an MTU of 9014&lt;BR /&gt;Set-NetAdapterAdvancedProperty -Name "10GBPort1" -RegistryKeyword '*JumboPacket' -RegistryValue '9014'&lt;BR /&gt;Set-NetAdapterAdvancedProperty -Name "10GBPort2" -RegistryKeyword '*JumboPacket' -RegistryValue '9014'&lt;BR /&gt;ping -f -l 8000 192.168.0.2 # Jumbo frame test&lt;/P&gt;&lt;P&gt;Regards&lt;BR /&gt;M.Fuhr&lt;/P&gt;</description>
      <pubDate>Wed, 14 Feb 2018 14:04:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/storage-spaces-direct-with/m-p/6996990#M21130</guid>
      <dc:creator>HP-Schrauber</dc:creator>
      <dc:date>2018-02-14T14:04:12Z</dc:date>
    </item>
  </channel>
</rss>

