<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Performance of Virtual Connect Ethernet w/LACP - BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097139#M14489</link>
    <description>I'm currently testing network performance pumping data to BL360Cs through a Virtual Connect Ethernet 1.21. The uplink to our network is a 2Gb LACP trunk with a Cisco Catalyst. Both its interfaces are Linked/Active. No teaming is done on the blades yet.&lt;BR /&gt;&lt;BR /&gt;If I pump data to one blade, it tops out at 1Gb/s, which is fine since that's the maximum its NIC can take. But if I start pumping to another blade, then another one and so on, the throughput on all of them goes down, always keeping a total sum of 1Gb/s. It's as if the LACP trunk is not able to go over 1Gb/s.&lt;BR /&gt;&lt;BR /&gt;The Cisco admin swears he did everything suggested in the VC docs. And yes, I'm using multiple servers as data sources, so it's not a bottleneck on the source side.&lt;BR /&gt;&lt;BR /&gt;Any clues or suggestions before I open a call?&lt;BR /&gt;&lt;BR /&gt;Points will be awarded, thanks.</description>
    <pubDate>Tue, 11 Mar 2008 17:01:34 GMT</pubDate>
    <dc:creator>Olivier Masse</dc:creator>
    <dc:date>2008-03-11T17:01:34Z</dc:date>
    <item>
      <title>Performance of Virtual Connect Ethernet w/LACP</title>
      <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097139#M14489</link>
      <description>I'm currently testing network performance pumping data to BL360Cs through a Virtual Connect Ethernet 1.21. The uplink to our network is a 2Gb LACP trunk with a Cisco Catalyst. Both its interfaces are Linked/Active. No teaming is done on the blades yet.&lt;BR /&gt;&lt;BR /&gt;If I pump data to one blade, it tops out at 1Gb/s, which is fine since that's the maximum its NIC can take. But if I start pumping to another blade, then another one and so on, the throughput on all of them goes down, always keeping a total sum of 1Gb/s. It's as if the LACP trunk is not able to go over 1Gb/s.&lt;BR /&gt;&lt;BR /&gt;The Cisco admin swears he did everything suggested in the VC docs. And yes, I'm using multiple servers as data sources, so it's not a bottleneck on the source side.&lt;BR /&gt;&lt;BR /&gt;Any clues or suggestions before I open a call?&lt;BR /&gt;&lt;BR /&gt;Points will be awarded, thanks.</description>
      <pubDate>Tue, 11 Mar 2008 17:01:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097139#M14489</guid>
      <dc:creator>Olivier Masse</dc:creator>
      <dc:date>2008-03-11T17:01:34Z</dc:date>
    </item>
    <item>
      <title>Re: Performance of Virtual Connect Ethernet w/LACP</title>
      <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097140#M14490</link>
      <description>Both of these are very good VC docs; check whether either covers your problem.&lt;BR /&gt;&lt;A href="http://h41267.www4.hp.com/eventpage.aspx?&amp;amp;eventid=NgA4ADkA&amp;amp;cc=uk&amp;amp;lang=en" target="_blank"&gt;http://h41267.www4.hp.com/eventpage.aspx?&amp;amp;eventid=NgA4ADkA&amp;amp;cc=uk&amp;amp;lang=en&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00814156/c00814156.pdf" target="_blank"&gt;http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00814156/c00814156.pdf&lt;/A&gt;</description>
      <pubDate>Tue, 11 Mar 2008 17:06:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097140#M14490</guid>
      <dc:creator>Raghuarch</dc:creator>
      <dc:date>2008-03-11T17:06:10Z</dc:date>
    </item>
    <item>
      <title>Re: Performance of Virtual Connect Ethernet w/LACP</title>
      <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097141#M14491</link>
      <description>Yes, I read them and implemented most of the setup following Mark Harpur's cookbooks and the "VC for the Cisco Admin" whitepaper, but didn't see any reference to my problem.&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Tue, 11 Mar 2008 17:11:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097141#M14491</guid>
      <dc:creator>Olivier Masse</dc:creator>
      <dc:date>2008-03-11T17:11:23Z</dc:date>
    </item>
    <item>
      <title>Re: Performance of Virtual Connect Ethernet w/LACP</title>
      <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097142#M14492</link>
      <description>Hands down, the best document out there:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01386629/c01386629.pdf" target="_blank"&gt;http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01386629/c01386629.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;See the section on VC Uplink Load Balancing starting on page 14.&lt;BR /&gt;&lt;BR /&gt;If both VC links show Active, then you are properly forming an LACP channel. It might just be that the load-balancing algorithm in use is limiting the traffic to one link. This should be rare given the way VC load balances, but it is not outside the realm of possibility.&lt;BR /&gt;&lt;BR /&gt;Hope this helps...&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 12 Mar 2008 21:17:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097142#M14492</guid>
      <dc:creator>HEM_2</dc:creator>
      <dc:date>2008-03-12T21:17:27Z</dc:date>
    </item>
    <item>
      <title>Re: Performance of Virtual Connect Ethernet w/LACP</title>
      <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097143#M14493</link>
      <description>Yep, as mentioned earlier, I also read this doc before posting.&lt;BR /&gt;&lt;BR /&gt;The Cisco admin and I are still looking into this. We've tried different scenarios, and the bottleneck is definitely either in the VC itself or in the LACP trunk.</description>
      <pubDate>Thu, 13 Mar 2008 13:08:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097143#M14493</guid>
      <dc:creator>Olivier Masse</dc:creator>
      <dc:date>2008-03-13T13:08:16Z</dc:date>
    </item>
    <item>
      <title>Re: Performance of Virtual Connect Ethernet w/LACP</title>
      <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097144#M14494</link>
      <description>Olivier:&lt;BR /&gt;&lt;BR /&gt;So if you go to Hardware Overview and into the detailed port statistics screen, do you see traffic on both ports that are in the LACP channel? Do both ports have the same LAG ID?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 13 Mar 2008 13:43:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097144#M14494</guid>
      <dc:creator>HEM_2</dc:creator>
      <dc:date>2008-03-13T13:43:13Z</dc:date>
    </item>
    <item>
      <title>Re: Performance of Virtual Connect Ethernet w/LACP</title>
      <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097145#M14495</link>
      <description>Yes, both ports have the same LAG ID, and I confirmed yesterday that the inOctets values are balanced (not equally, but close) between the two links.</description>
      <pubDate>Thu, 13 Mar 2008 13:45:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097145#M14495</guid>
      <dc:creator>Olivier Masse</dc:creator>
      <dc:date>2008-03-13T13:45:58Z</dc:date>
    </item>
    <item>
      <title>Re: Performance of Virtual Connect Ethernet w/LACP</title>
      <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097146#M14496</link>
      <description>I was able to get a contact at HP, and here is his explanation, which makes a lot of sense: the trunk was using ports in the same port group on the Catalyst, and some models limit each group to 1Gb/s. I will try fixing that tomorrow.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Quote:&lt;BR /&gt;&lt;BR /&gt;I suspect what you are seeing is inherent in the hardware design of the module.&lt;BR /&gt;Certain Cisco switch modules use ASICs that are shared by multiple ports. This particular module uses 6 ASICs of 8 ports each. They are designed for bursty traffic and oversubscribe the bandwidth 8:1.&lt;BR /&gt;&lt;BR /&gt;Each 8-port group has a bandwidth of 1Gb/s.&lt;BR /&gt;&lt;BR /&gt;What you are seeing is exactly what should be expected with the 2 LACP ports. Ports 5/25 and 5/26 are in the same port group.&lt;BR /&gt;</description>
      <pubDate>Thu, 13 Mar 2008 18:14:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097146#M14496</guid>
      <dc:creator>Olivier Masse</dc:creator>
      <dc:date>2008-03-13T18:14:21Z</dc:date>
    </item>
    <item>
      <title>Re: Performance of Virtual Connect Ethernet w/LACP</title>
      <link>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097147#M14497</link>
      <description>Thread closed, thanks.</description>
      <pubDate>Thu, 13 Mar 2008 18:14:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/performance-of-virtual-connect-ethernet-w-lacp/m-p/5097147#M14497</guid>
      <dc:creator>Olivier Masse</dc:creator>
      <dc:date>2008-03-13T18:14:42Z</dc:date>
    </item>
  </channel>
</rss>