<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Poor network performance with ML 350's in ProLiant Servers (ML,DL,SL)</title>
    <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090455#M71081</link>
    <description>Forum thread: two HP ProLiant ML350 servers (a G4 and a G5) achieve far lower file-copy throughput than expected after moving from a 100 Mbps to a 1 Gbps network; replies discuss TOE, RSS, TCP Chimney and testing methodology.</description>
    <pubDate>Tue, 23 Oct 2007 05:05:12 GMT</pubDate>
    <dc:creator>David B Walsh</dc:creator>
    <dc:date>2007-10-23T05:05:12Z</dc:date>
    <item>
      <title>Poor network performance with ML 350's</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090455#M71081</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I have two HP ML350 servers, as follows:&lt;BR /&gt;&lt;BR /&gt;"S1" is an ML350 G4 with a 3GHz CPU, 2GB RAM, Ultra320 15K disks in RAID 5 and a built-in gigabit NIC, running SBS 2003 Premium with SP2.&lt;BR /&gt;&lt;BR /&gt;"S2" is a brand new ML350 G5 with a quad-core CPU, 2GB RAM, SAS 10K disks in RAID 5 and a built-in gigabit NIC, running Windows Server 2003 Standard R2 SP2.&lt;BR /&gt;&lt;BR /&gt;I have been doing some testing on the network, and the performance is nowhere near what I would hope for and expect.  I used the following configurations and tests:&lt;BR /&gt;&lt;BR /&gt;CONFIG1: Both servers connected to an unbranded 100 Mbps switch at 100 Mbps&lt;BR /&gt;&lt;BR /&gt;CONFIG2: Both servers connected together with a cross-over cable at 1 Gbps&lt;BR /&gt;&lt;BR /&gt;CONFIG3: Both servers connected to a brand new Netgear 1 Gbps switch at 1 Gbps&lt;BR /&gt;&lt;BR /&gt;TEST1: From the console of S1, use Windows Explorer to copy an i386 folder from S2 to S1.&lt;BR /&gt;&lt;BR /&gt;TEST2: From the console of S2, use Windows Explorer to copy an i386 folder from S1 to S2.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;CONFIG1 TEST1 throughput = 26 Mbps&lt;BR /&gt;CONFIG1 TEST2 throughput = 46 Mbps&lt;BR /&gt;&lt;BR /&gt;CONFIG2 TEST1 throughput = 29 Mbps&lt;BR /&gt;CONFIG2 TEST2 throughput = 67 Mbps&lt;BR /&gt;&lt;BR /&gt;CONFIG3 TEST1 throughput = 42 Mbps&lt;BR /&gt;CONFIG3 TEST2 throughput = 72 Mbps&lt;BR /&gt;&lt;BR /&gt;The tests were performed after cold boots on both servers, with no other devices connected to the network.  The TCP offload engine is disabled on S2.  Both NICs and switch ports were set to "auto" for speed and duplex.  With the gigabit switch and the crossover cable, both NICs reported that they were running at 1 Gbps.&lt;BR /&gt;&lt;BR /&gt;I can live with the performance on the 100 Mbps switch, but I can't understand why increasing the speed of the network to 1 Gbps (a tenfold theoretical increase) results in such a miserable performance increase.&lt;BR /&gt;&lt;BR /&gt;Can anyone suggest where to start troubleshooting this?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Dave.</description>
      <pubDate>Tue, 23 Oct 2007 05:05:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090455#M71081</guid>
      <dc:creator>David B Walsh</dc:creator>
      <dc:date>2007-10-23T05:05:12Z</dc:date>
    </item>
    <item>
      <title>Re: Poor network performance with ML 350's</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090456#M71082</link>
      <description>Hello David,&lt;BR /&gt;&lt;BR /&gt;Many are baffled by this; TOE, RSS &amp;amp; NetDMA seem to do more harm than benefit.&lt;BR /&gt;Refer to some of these:&lt;BR /&gt;&lt;A href="http://support.microsoft.com/kb/912222" target="_blank"&gt;http://support.microsoft.com/kb/912222&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://support.microsoft.com/kb/936594" target="_blank"&gt;http://support.microsoft.com/kb/936594&lt;/A&gt; (an important hotfix which is NOT included in SP2)&lt;BR /&gt;&lt;BR /&gt;&amp;amp; very interestingly:&lt;BR /&gt;&lt;A href="http://www.microsoft.com/technet/community/columns/cableguy/cg0606.mspx" target="_blank"&gt;http://www.microsoft.com/technet/community/columns/cableguy/cg0606.mspx&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;which reads:&lt;BR /&gt;*To ensure that TCP Chimney Offload will not reduce the capabilities of existing and future Microsoft Windows® network stacks, TCP Chimney Offload will not offload a connection if the network adapter does not support a needed processing capability, such as Internet Protocol security (IPsec) cryptographic processing.&lt;BR /&gt;&lt;BR /&gt;*If a network adapter supports Receive-side Scaling, the Scalable Networking Pack uses this capability across all TCP connections, including connections that are offloaded through TCP Chimney Offload. (I am not sure how RSS reacts when TOE is MANUALLY disabled.)&lt;BR /&gt;&lt;BR /&gt;*The Scalable Networking Pack invokes NetDMA when it detects supporting hardware. If the Scalable Networking Pack detects that the hardware can support both NetDMA and TCP Chimney Offload, NetDMA is disabled and TCP Chimney Offload remains enabled. (Same here.)&lt;BR /&gt;&lt;BR /&gt;Regards.</description>
      <pubDate>Tue, 23 Oct 2007 13:40:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090456#M71082</guid>
      <dc:creator>James ~ Happy Dude</dc:creator>
      <dc:date>2007-10-23T13:40:00Z</dc:date>
    </item>
    <item>
      <title>Re: Poor network performance with ML 350's</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090457#M71083</link>
      <description>A nice test!&lt;BR /&gt;&lt;BR /&gt;So...&lt;BR /&gt;First of all, please also disable RSS in the NCU (it should be near the TOE setting).&lt;BR /&gt;Could you try copying larger files (I mean a big .iso) to check what the speed is then?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Oct 2007 14:12:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090457#M71083</guid>
      <dc:creator>Mi6t0</dc:creator>
      <dc:date>2007-10-23T14:12:25Z</dc:date>
    </item>
    <item>
      <title>Re: Poor network performance with ML 350's</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090458#M71084</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;TOE, RSS, TCP Chimney: what a combination; it is causing a lot of headaches.&lt;BR /&gt;&lt;BR /&gt;- So you already disabled TOE, good.&lt;BR /&gt;&lt;BR /&gt;- What about RSS?&lt;BR /&gt;&lt;A href="http://support.microsoft.com/default.aspx?scid=kb;EN-US;927695" target="_blank"&gt;http://support.microsoft.com/default.aspx?scid=kb;EN-US;927695&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;- Disable TCP Chimney:&lt;BR /&gt;&lt;BR /&gt;"Netsh int ip set chimney DISABLED"&lt;BR /&gt;&lt;A href="http://msexchangeteam.com/archive/2007/07/18/446400.aspx" target="_blank"&gt;http://msexchangeteam.com/archive/2007/07/18/446400.aspx&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;And check this thread:&lt;BR /&gt;&lt;A href="http://urlao.com/toe" target="_blank"&gt;http://urlao.com/toe&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 24 Oct 2007 01:19:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090458#M71084</guid>
      <dc:creator>KarloChacon</dc:creator>
      <dc:date>2007-10-24T01:19:15Z</dc:date>
    </item>
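    <!-- Editor's note: the disable steps suggested in this thread (the netsh chimney command above plus the SNP registry values EnableTCPChimney, EnableRSS and EnableTCPA from KB 912222 / KB 927695) can be collected into one Windows Server 2003 batch sketch. The registry path and reg add syntax are standard, but treat this as an illustration, not HP or Microsoft guidance: -->

```shell
rem Sketch: disable TCP Chimney, RSS and NetDMA on Windows Server 2003 SP2.
rem Reboot after applying; verify the values afterwards with "reg query".

rem 1. Disable TCP Chimney offload at the stack level
netsh int ip set chimney DISABLED

rem 2. Disable the Scalable Networking Pack features in the registry
set TCPIP=HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
reg add %TCPIP% /v EnableTCPChimney /t REG_DWORD /d 0 /f
reg add %TCPIP% /v EnableRSS        /t REG_DWORD /d 0 /f
reg add %TCPIP% /v EnableTCPA       /t REG_DWORD /d 0 /f

rem 3. Confirm the values took effect
reg query %TCPIP% /v EnableTCPChimney
```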
    <item>
      <title>Re: Poor network performance with ML 350's</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090459#M71085</link>
      <description>Thanks for the replies and help so far.&lt;BR /&gt;&lt;BR /&gt;I am not sure if TOE settings and the Microsoft Scalable Networking Pack are to blame in this situation, although reading through the links provided, this Scalable Networking Pack implementation seems to be an absolute disgrace (heads should roll!!!).&lt;BR /&gt;&lt;BR /&gt;Going back to my original post, S1 is an ML350 G4, so I don't think the NIC in this server has the capability at the hardware level to utilise the TOE and SNP, cough, "enhancements" (there is no mention of things like RSS or TOE in the HP Network Configuration Utility).  Nevertheless, I have done the following on S1:&lt;BR /&gt;&lt;BR /&gt;NETSH INT IP SET CHIMNEY DISABLED&lt;BR /&gt;&lt;BR /&gt;Modified the following registry keys:&lt;BR /&gt;&lt;BR /&gt;EnableTCPChimney=0&lt;BR /&gt;EnableTCPA=0&lt;BR /&gt;EnableRSS=0&lt;BR /&gt;&lt;BR /&gt;The NIC in S1 is an "NC7761 Gigabit Server Adapter" (33MHz, 32-bit, driver version 7.103.0.0).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;S2 is a brand new ML350 G5, so I have done the following:&lt;BR /&gt;&lt;BR /&gt;NETSH INT IP SET CHIMNEY DISABLED&lt;BR /&gt;&lt;BR /&gt;Modified the following registry keys:&lt;BR /&gt;&lt;BR /&gt;EnableTCPChimney=0&lt;BR /&gt;EnableTCPA=0&lt;BR /&gt;EnableRSS=0&lt;BR /&gt;&lt;BR /&gt;The NIC in S2 is an "NC373i Multifunction Gigabit Server Adapter" (133MHz, 64-bit, driver version 3.4.10.0).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;After making these changes I repeated my test on the gigabit switch (the servers are now in production and connected to this switch).  There was no improvement.&lt;BR /&gt;&lt;BR /&gt;I also repeated the test using a circa 500MB ISO file instead of an i386 folder.  The throughput increased to around 200 Mbps for this test.&lt;BR /&gt;&lt;BR /&gt;I would like to know the following:&lt;BR /&gt;&lt;BR /&gt;1. How can I be sure that TOE and Microsoft SNP are fully disabled on these servers, and as such can be ruled out of the equation?&lt;BR /&gt;&lt;BR /&gt;2. What sort of throughput are other forum users seeing on their networks with similar servers running at 1Gbps?&lt;BR /&gt;&lt;BR /&gt;3. Does between 40 Mbps and 70 Mbps throughput for copying an i386 folder between these servers using a 1Gbps managed switch seem acceptable? (My gut feeling is no.)&lt;BR /&gt;&lt;BR /&gt;4. Does around 200 Mbps throughput for copying a 500MB ISO image between these servers using a 1Gbps managed switch seem acceptable? (My gut feeling is still no, but this is getting much closer to what we can probably expect in practice.)&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Dave.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 24 Oct 2007 12:04:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090459#M71085</guid>
      <dc:creator>David B Walsh</dc:creator>
      <dc:date>2007-10-24T12:04:47Z</dc:date>
    </item>
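    <!-- Editor's note: as a sanity check on Dave's questions 3 and 4, the unit conversions behind the reported figures can be sketched in a few lines. This assumes 1 Mbit = 10^6 bits and 1 MB = 10^6 bytes; the 72, 200 and 1000 Mbps figures are the ones reported in the thread: -->

```python
# Back-of-envelope: relating the thread's Mbps figures to copy times.

def mbps_to_mb_per_s(mbps):
    """Convert throughput in Mbit/s to MB/s (1 MB = 10**6 bytes here)."""
    return mbps / 8

# Many small files (i386 folder): per-file SMB round trips dominate.
small_file_copy = mbps_to_mb_per_s(72)        # CONFIG3 TEST2 -> 9.0 MB/s

# One large file (~500 MB ISO) at the ~200 Mbps Dave measured:
iso_mb = 500
iso_seconds = iso_mb / mbps_to_mb_per_s(200)  # -> 20.0 s

# Theoretical GigE floor for the same file, ignoring all protocol overhead:
gige_floor_seconds = iso_mb / mbps_to_mb_per_s(1000)  # -> 4.0 s
```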
    <item>
      <title>Re: Poor network performance with ML 350's</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090460#M71086</link>
      <description>Dave, I like networking very much, and topics like these are among my favourites :)&lt;BR /&gt;&lt;BR /&gt;So... I have to say that, like perhaps most users, I expected better speed with the ISO.&lt;BR /&gt;A point that we didn't discuss is the cables. Did you try different cables? How long are yours, and what model?&lt;BR /&gt;&lt;BR /&gt;About the TOE: to check if it is disabled, you can open the NCU and see the settings. In fact, when it is disabled, the CPU should carry a little more load.&lt;BR /&gt;&lt;BR /&gt;I think that in this case 1 Gbit is not so easy to reach. But a transfer of about 300-400 Mbit is, in my opinion, acceptable.&lt;BR /&gt;&lt;BR /&gt;Regards, Mi6t0</description>
      <pubDate>Thu, 25 Oct 2007 13:44:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/poor-network-performance-with-ml-350-s/m-p/4090460#M71086</guid>
      <dc:creator>Mi6t0</dc:creator>
      <dc:date>2007-10-25T13:44:40Z</dc:date>
    </item>
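    <!-- Editor's note: to separate the network stack from disk and Explorer/SMB copy overhead, a raw TCP throughput probe helps. Below is a minimal self-contained sketch (a stand-in for a tool like iperf, not something used in the thread; the 64 KiB chunk size and 16 MiB total are arbitrary choices). Point the client at the other server's address instead of loopback to test the real link: -->

```python
# Minimal raw-TCP throughput probe, independent of disk and file-copy overhead.
import socket
import threading
import time

CHUNK = 64 * 1024          # bytes sent per sendall() call
TOTAL = 16 * 1024 * 1024   # total bytes to transfer per run

def serve(sock):
    """Accept one connection and drain everything the client sends."""
    conn, _ = sock.accept()
    with conn:
        while conn.recv(CHUNK):
            pass

def measure(host="127.0.0.1"):
    """Send TOTAL bytes to a local sink server; return throughput in Mbit/s."""
    srv = socket.socket()
    srv.bind((host, 0))    # ephemeral port, so repeated runs don't collide
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=serve, args=(srv,))
    t.start()

    payload = b"\x00" * CHUNK
    start = time.perf_counter()
    with socket.create_connection((host, port)) as c:
        sent = 0
        while sent < TOTAL:
            c.sendall(payload)
            sent += CHUNK
    elapsed = time.perf_counter() - start

    t.join()
    srv.close()
    return sent * 8 / elapsed / 1e6   # bits per second -> Mbit/s

if __name__ == "__main__":
    print(f"{measure():.0f} Mbit/s over loopback")
```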
  </channel>
</rss>

