<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Blade System Matrix and converged networks in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486447#M7117</link>
    <description>Hi&lt;BR /&gt;&lt;BR /&gt;Does anyone know if the HP BladeSystem Virtual Connect modules (VC-Enet and VC-FC) would work in a seamless fashion with the HP 2408 FCoE Converged Network Switch?&lt;BR /&gt;&lt;BR /&gt;If so, does the HP BladeSystem Matrix support this setup? Essentially bringing it in line with the recent Cisco UCS systems...&lt;BR /&gt;&lt;BR /&gt;ta&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Wed, 26 Aug 2009 13:55:01 GMT</pubDate>
    <dc:creator>Justin Hannan</dc:creator>
    <dc:date>2009-08-26T13:55:01Z</dc:date>
    <item>
      <title>Blade System Matrix and converged networks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486447#M7117</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;Does anyone know if the HP BladeSystem Virtual Connect modules (VC-Enet and VC-FC) would work in a seamless fashion with the HP 2408 FCoE Converged Network Switch?&lt;BR /&gt;&lt;BR /&gt;If so, does the HP BladeSystem Matrix support this setup? Essentially bringing it in line with the recent Cisco UCS systems...&lt;BR /&gt;&lt;BR /&gt;ta&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 26 Aug 2009 13:55:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486447#M7117</guid>
      <dc:creator>Justin Hannan</dc:creator>
      <dc:date>2009-08-26T13:55:01Z</dc:date>
    </item>
    <item>
      <title>Re: Blade System Matrix and converged networks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486448#M7118</link>
      <description>I didn't see why it wouldn't, so I had a quick look at the details. It does not support Access Gateway, which means I would double-check whether it supports NPIV; I can't find a decent spec/doc that says whether it will or won't.</description>
      <pubDate>Wed, 26 Aug 2009 14:37:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486448#M7118</guid>
      <dc:creator>Adrian Clint</dc:creator>
      <dc:date>2009-08-26T14:37:25Z</dc:date>
    </item>
    <item>
      <title>Re: Blade System Matrix and converged networks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486449#M7119</link>
      <description>Hi Justin,&lt;BR /&gt;&lt;BR /&gt;It sounds like you may need some adapters for the switch; the Virtual Connect Ethernet module ports are limited to CX4 connectors, not SFP+.&lt;BR /&gt;&lt;BR /&gt;Other than that I don't see cause for concern.&lt;BR /&gt;&lt;BR /&gt;good luck</description>
      <pubDate>Wed, 26 Aug 2009 15:29:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486449#M7119</guid>
      <dc:creator>WFHC-WI</dc:creator>
      <dc:date>2009-08-26T15:29:39Z</dc:date>
    </item>
    <item>
      <title>Re: Blade System Matrix and converged networks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486450#M7120</link>
      <description>VC Flex-10 has SFP+</description>
      <pubDate>Thu, 27 Aug 2009 09:18:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486450#M7120</guid>
      <dc:creator>c3_1</dc:creator>
      <dc:date>2009-08-27T09:18:28Z</dc:date>
    </item>
    <item>
      <title>Re: Blade System Matrix and converged networks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486451#M7121</link>
      <description>That's correct... are you using VC-Enet 1/10Gb modules or VC Flex-10?</description>
      <pubDate>Thu, 27 Aug 2009 14:07:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486451#M7121</guid>
      <dc:creator>WFHC-WI</dc:creator>
      <dc:date>2009-08-27T14:07:11Z</dc:date>
    </item>
    <item>
      <title>Re: Blade System Matrix and converged networks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486452#M7122</link>
      <description>Thanks for the replies. I'm essentially looking at a data center refresh for the company: an HP BladeSystem using VC Flex-10 modules, as the c7000 would likely be populated with BL495 G6 blades for a fairly large VMware farm.&lt;BR /&gt;&lt;BR /&gt;I like the sound of the converged networking that Cisco UCS offers, so naturally I want to see if the BladeSystem can do a similar thing.&lt;BR /&gt;&lt;BR /&gt;My main task at the moment is to compare traditional rack-mount vs blades vs Cisco UCS for compute power, energy savings, cabling savings, etc.&lt;BR /&gt;&lt;BR /&gt;E.g. at the mo a c7000 with BL495 G6 blades can provide me with 384 virtual CPUs (2 CPUs x 6 cores x 16 blades x 2 for vCPU); if that then amounted to 384 VM guests, would a 10Gb Flex-10 module be enough bandwidth for that many virtual servers?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 27 Aug 2009 14:14:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486452#M7122</guid>
      <dc:creator>Justin Hannan</dc:creator>
      <dc:date>2009-08-27T14:14:16Z</dc:date>
    </item>
    <item>
      <title>Re: Blade System Matrix and converged networks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486453#M7123</link>
      <description>Well, depending on how you configure the network, you could have up to 2 x (6 x 10Gb + 2 x 1Gb) of bandwidth usable out of the enclosure&lt;BR /&gt;= 124Gb = roughly 300Mb per vCPU.&lt;BR /&gt;&lt;BR /&gt;The question is: is that enough for you?&lt;BR /&gt;&lt;BR /&gt;And that's "up to" - you probably would not want to configure it that way, and it would leave the config on a knife edge with no room for a failing VC module.&lt;BR /&gt;So more like 100Mb.</description>
      <pubDate>Thu, 27 Aug 2009 17:28:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486453#M7123</guid>
      <dc:creator>Adrian Clint</dc:creator>
      <dc:date>2009-08-27T17:28:35Z</dc:date>
    </item>
    <item>
      <title>Re: Blade System Matrix and converged networks</title>
      <link>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486454#M7124</link>
      <description>&amp;gt;&amp;gt;if the HP BladeSystem Virtual Connect modules (VC-Enet and VC-FC) would work in a seamless fashion with the HP 2408 FCoE Converged Network Switch?&lt;BR /&gt;&lt;BR /&gt;The uplinks coming out of VC aren't CEE, so while they would connect at 10GbE to the 2408, you wouldn't get CEE.&lt;BR /&gt;&lt;BR /&gt;Quoting from HP's site:&lt;BR /&gt;&amp;gt;&amp;gt;When the CEE standard does emerge - and HP is helping to shape that standard today - you can be assured the Virtual Connect family of products will support it&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I think Adrian's configuration is with one (or two) VC Flex-10 modules (am I right?). With more modules (and more mezzanine cards in the blades), you could boost that bandwidth.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;BladeSystem config: c7000, 6 x VC Flex-10, 16 x BL495 [each w/ 2 dual-port 10GbE mezz]&lt;BR /&gt;&lt;BR /&gt;Uplink bandwidth from the enclosure:&lt;BR /&gt;6 x (6 x 10Gb) = 360Gb (or 180Gb, redundant)&lt;BR /&gt;Using Justin's bandwidth-per-vCPU rule, that's 180Gb / 384 VMs = 468Mb per VM&lt;BR /&gt;&lt;BR /&gt;UCS config: 5100 chassis, 2 x 2100 fabric extenders, 8 x B200 servers&lt;BR /&gt;&lt;BR /&gt;Uplink bandwidth:&lt;BR /&gt;2 x (4 x 10Gb) = 80Gb (40Gb redundant)&lt;BR /&gt;&lt;BR /&gt;Using quad-core Xeon-based B200 M1 blades, you'd see&lt;BR /&gt;2 CPUs x 4 cores x 8 blades x 2 for vCPU = 128 vCPUs&lt;BR /&gt;&lt;BR /&gt;40Gb / 128 VMs = 312Mb per VM&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;BTW, the "convergence" of Matrix isn't converged *fabrics*, it's converged *resources* -- compute, storage, management, etc. At the data center level (especially a new data center -- lucky you, Justin!), whether the connection is FCoE or token ring probably isn't as critical as whether the bandwidth/compute/storage/whatever can be added when needed, carved up and deployed, and done in the most optimal manner within your budget/SLA.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 27 Aug 2009 22:16:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/blade-system-matrix-and-converged-networks/m-p/4486454#M7124</guid>
      <dc:creator>Daniel Bowers</dc:creator>
      <dc:date>2009-08-27T22:16:44Z</dc:date>
    </item>
  </channel>
</rss>

