<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Using 10Gb uplinks in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151367#M15299</link>
    <description>We are considering abandoning the 1Gb uplinks from our VC modules to the network and using the CX4 10Gb uplinks instead.&lt;BR /&gt;&lt;BR /&gt;Are any of you out there using this configuration, i.e. CX4 only? We would be interested in any observations you might have. Were there any limitations or considerations which had to be taken into account?&lt;BR /&gt;&lt;BR /&gt;Is anyone mixing VMware with other more general-use servers in the same enclosure? I am given to understand that VMware has very specific requirements regarding VLAN tagging. Is anyone running such an environment who can throw any light on observations or restrictions they have encountered?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Dave.</description>
    <pubDate>Mon, 19 Jan 2009 20:00:59 GMT</pubDate>
    <dc:creator>The Brit</dc:creator>
    <dc:date>2009-01-19T20:00:59Z</dc:date>
    <item>
      <title>Using 10Gb uplinks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151367#M15299</link>
      <description>We are considering abandoning the 1Gb uplinks from our VC modules to the network and using the CX4 10Gb uplinks instead.&lt;BR /&gt;&lt;BR /&gt;Are any of you out there using this configuration, i.e. CX4 only? We would be interested in any observations you might have. Were there any limitations or considerations which had to be taken into account?&lt;BR /&gt;&lt;BR /&gt;Is anyone mixing VMware with other more general-use servers in the same enclosure? I am given to understand that VMware has very specific requirements regarding VLAN tagging. Is anyone running such an environment who can throw any light on observations or restrictions they have encountered?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Dave.</description>
      <pubDate>Mon, 19 Jan 2009 20:00:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151367#M15299</guid>
      <dc:creator>The Brit</dc:creator>
      <dc:date>2009-01-19T20:00:59Z</dc:date>
    </item>
    <item>
      <title>Re: Using 10Gb uplinks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151368#M15300</link>
      <description>Hey Dave,&lt;BR /&gt;I can't speak specifically to the copper CX4 connections, but we use the 10Gb XFP connections and they are only good for about 3.6Gbps maximum under observed load testing. CX4 can cover a maximum of 300 meters, from memory.&lt;BR /&gt;&lt;BR /&gt;We are mixing ESX with general servers in a couple of enclosures, and they do need their own physical interfaces. At a minimum you will need your shared uplinks for your normal app servers and direct bindings to physical interfaces for your ESX hosts - they can't be part of the shared uplink because of the way VMware utilises VLAN tagging.&lt;BR /&gt;&lt;BR /&gt;To be honest, we only run dev/test ESX hosts in our chassis because of the limit on the number of physical NICs you can present to a blade. Add to that the need to separate DATA/VMKERNEL/MGMT onto physically separate chassis NIC interfaces, and it all becomes more complicated.&lt;BR /&gt;&lt;BR /&gt;The new Virtual Connect Flex-10 modules appear to overcome these problems and would be perfect for ESX.&lt;BR /&gt;&lt;BR /&gt;rgds,&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 20 Jan 2009 04:37:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151368#M15300</guid>
      <dc:creator>karim h</dc:creator>
      <dc:date>2009-01-20T04:37:17Z</dc:date>
    </item>
    <item>
      <title>Re: Using 10Gb uplinks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151369#M15301</link>
      <description>The HP QuickSpecs say only lengths of up to 15m are supported for the CX4 cables.&lt;BR /&gt;&lt;BR /&gt;They only provide part numbers for 0.5, 1, 3 and 15m lengths of CX4 cable, even though they previously also listed part numbers for the 5, 7 &amp;amp; 10m lengths. I don't think those parts have been discontinued, so I wonder why they are not listed anymore.&lt;BR /&gt;&lt;BR /&gt;For support, I'd stick with:&lt;BR /&gt;HP 0.5m 10GbE CX4 Cable 444477-B21&lt;BR /&gt;HP 1m 10GbE CX4 Cable 444477-B22&lt;BR /&gt;HP 3m 10GbE CX4 Cable 444477-B23&lt;BR /&gt;HP 15m 10GbE CX4 Cable 444477-B27&lt;BR /&gt;</description>
      <pubDate>Tue, 20 Jan 2009 09:27:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151369#M15301</guid>
      <dc:creator>Adrian Clint</dc:creator>
      <dc:date>2009-01-20T09:27:56Z</dc:date>
    </item>
    <item>
      <title>Re: Using 10Gb uplinks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151370#M15302</link>
      <description>&lt;BR /&gt;I'm using the 10GbE links and have blades running Windows 2003 and VMware ESX in the same IP space.&lt;BR /&gt;&lt;BR /&gt;If I remember correctly, we have VC1 &amp;amp; 2 running a pair of CX4s providing our 'front' interfaces, bundled 1GbE connections in VC5 &amp;amp; 6 that we use for iSCSI/NFS, and another pair of CX4s in VC7 &amp;amp; 8 providing interfaces for virtual machines only. The blades are BL460c's with a quad-port mezzanine card.&lt;BR /&gt;&lt;BR /&gt;Again, if I remember correctly, we use a shared uplink for the VC1/2 connectivity and present a VLAN directly to the VMware servers, so no VLAN tagging is needed server-side.&lt;BR /&gt;</description>
      <pubDate>Tue, 20 Jan 2009 14:52:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151370#M15302</guid>
      <dc:creator>Julian Stenning</dc:creator>
      <dc:date>2009-01-20T14:52:38Z</dc:date>
    </item>
    <item>
      <title>Re: Using 10Gb uplinks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151371#M15303</link>
      <description>Just an FYI: if you want to get past the 3.5Gb/s limit you are seeing, you must enable jumbo frames on the server and the switch. That should get you up into the 7-8Gb/s range.</description>
      <pubDate>Wed, 18 Feb 2009 16:04:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151371#M15303</guid>
      <dc:creator>Bill.Clark</dc:creator>
      <dc:date>2009-02-18T16:04:59Z</dc:date>
    </item>
    <item>
      <title>Re: Using 10Gb uplinks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151372#M15304</link>
      <description>Thanks for the information.</description>
      <pubDate>Thu, 20 Aug 2009 12:32:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/using-10gb-uplinks/m-p/5151372#M15304</guid>
      <dc:creator>The Brit</dc:creator>
      <dc:date>2009-08-20T12:32:17Z</dc:date>
    </item>
  </channel>
</rss>

