<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: VC throughput in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/vc-throughput/m-p/4615402#M9954</link>
    <description>Will you have throughput problems? It depends - what sort of throughput requirements do your systems have?&lt;BR /&gt;&lt;BR /&gt;If you have 16 BL460c G6s, which have 10 Gbit LOMs, is there any particular reason you went with the VC 1/10 Ethernet module rather than the VC Flex-10 module or the ProCurve 6120 switch?</description>
    <pubDate>Mon, 12 Apr 2010 15:41:57 GMT</pubDate>
    <dc:creator>rick jones</dc:creator>
    <dc:date>2010-04-12T15:41:57Z</dc:date>
    <item>
      <title>VC throughput</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-throughput/m-p/4615400#M9952</link>
      <description>Hi all,&lt;BR /&gt;&lt;BR /&gt;I have the following items:&lt;BR /&gt;&lt;BR /&gt;16 units of BL460c G6&lt;BR /&gt;VC 1/10G-F Ethernet module&lt;BR /&gt;1 Gbit switch&lt;BR /&gt;&lt;BR /&gt;If I point NIC1 of all 16 BL460c G6 units to VC 1 Gbit port 1, will I have a throughput problem? How can I solve this so that NIC1 on all 16 BL460c G6 units does not have throughput problems?&lt;BR /&gt;</description>
      <pubDate>Sun, 11 Apr 2010 09:06:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-throughput/m-p/4615400#M9952</guid>
      <dc:creator>Jason Ng Teng Po</dc:creator>
      <dc:date>2010-04-11T09:06:42Z</dc:date>
    </item>
    <item>
      <title>Re: VC throughput</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-throughput/m-p/4615401#M9953</link>
      <description>Create an Uplink Set on your 1/10G-F Ethernet module. An Uplink Set is similar to a trunk; it connects your VC module to the external network.&lt;BR /&gt;&lt;BR /&gt;As for your blades, they each get a dedicated 1 Gbit hardwired path to the Ethernet module.&lt;BR /&gt;&lt;BR /&gt;Dave.</description>
      <pubDate>Sun, 11 Apr 2010 23:12:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-throughput/m-p/4615401#M9953</guid>
      <dc:creator>The Brit</dc:creator>
      <dc:date>2010-04-11T23:12:58Z</dc:date>
    </item>
    <item>
      <title>Re: VC throughput</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-throughput/m-p/4615402#M9954</link>
      <description>Will you have throughput problems? It depends - what sort of throughput requirements do your systems have?&lt;BR /&gt;&lt;BR /&gt;If you have 16 BL460c G6s, which have 10 Gbit LOMs, is there any particular reason you went with the VC 1/10 Ethernet module rather than the VC Flex-10 module or the ProCurve 6120 switch?</description>
      <pubDate>Mon, 12 Apr 2010 15:41:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-throughput/m-p/4615402#M9954</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2010-04-12T15:41:57Z</dc:date>
    </item>
    <item>
      <title>Re: VC throughput</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-throughput/m-p/4615403#M9955</link>
      <description>You can EtherChannel / 802.3ad link-aggregate more than one VC uplink to your switch to get up to a 4 Gbit, or maybe 6 Gbit, link from VC to the switch.&lt;BR /&gt;&lt;BR /&gt;Whether you have a problem or not depends on the network traffic requirements of the blades.</description>
      <pubDate>Wed, 14 Apr 2010 17:10:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-throughput/m-p/4615403#M9955</guid>
      <dc:creator>Adrian Clint</dc:creator>
      <dc:date>2010-04-14T17:10:19Z</dc:date>
    </item>
  </channel>
</rss>