<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Blade Vs Rack mounted Servers in Integrity Servers</title>
    <link>https://community.hpe.com/t5/integrity-servers/blade-vs-rack-mounted-servers/m-p/4147786#M3315</link>
    <description>I would like to have a few strong reasons on why to choose blade servers instead of rack mounted?&lt;BR /&gt;1) Density: 16 blades, 2 CPUs each, 4 cores per CPU, in 10U of space.&lt;BR /&gt;2) Less cabling: 6 power cables per 16 servers, 10 network cables per 16 servers, etc.&lt;BR /&gt;3) More durable: external high-efficiency power supplies and fans, which are easier to replace.&lt;BR /&gt;&lt;BR /&gt;What are the limitations or demerits of blades instead of rack servers apart from memory expansions?&lt;BR /&gt;c-Class uses SAS disks like other ProLiants; a half-height blade has 8 DIMM slots, so a maximum of 32 GB RAM. You can add 2 mezzanine cards for up to 6 extra network or Fibre Channel ports. I do not see any limitation here.&lt;BR /&gt;&lt;BR /&gt;What sort of resilience or backup plans need to be kept in hand before or after the blade servers' introduction?&lt;BR /&gt;Using RDP software, you can program automatic OS installation and software distribution, and even a "per bay" installation. Remove a blade from a bay, insert another one, and it will be configured in the same manner.&lt;BR /&gt;&lt;BR /&gt;How much power (watts) and cooling is required for a full load (16 * ½ blades or 8 full blades)?&lt;BR /&gt;About 5000 W, depending on how many mezzanines, DIMMs, and switches are on the back.&lt;BR /&gt;&lt;BR /&gt;The discussion about reliability is a tough one; everyone will have arguments, but c-Class is a good design, the cooling system is unique, and the administration is dead easy.&lt;BR /&gt;Also, check the new Virtual Connect modules; they make SAN and LAN administration a lot easier.&lt;BR /&gt;</description>
    <pubDate>Wed, 20 Feb 2008 16:10:22 GMT</pubDate>
    <dc:creator>Víctor Cespón</dc:creator>
    <dc:date>2008-02-20T16:10:22Z</dc:date>
    <item>
      <title>Blade Vs Rack mounted Servers</title>
      <link>https://community.hpe.com/t5/integrity-servers/blade-vs-rack-mounted-servers/m-p/4147784#M3313</link>
      <description>Hi, &lt;BR /&gt;We plan to introduce HP c-Class blade servers. I would like to have a few strong reasons on why to choose blade servers instead of rack mounted. &lt;BR /&gt;What are the limitations or demerits of blades instead of rack servers apart from memory expansions? (Please address the maximum risk, if any.) &lt;BR /&gt;What sort of resilience or backup plans need to be kept in hand before or after the blade servers’ introduction? &lt;BR /&gt;How much power (watts) and cooling is required for a full load (16 * ½ blades or 8 full blades)? &lt;BR /&gt;Any recommendations or best practices for blade servers? &lt;BR /&gt;Any merits or demerits of using them with VMware ESX 3.5? &lt;BR /&gt;Are there any other blade models better than the c-Class blades? &lt;BR /&gt;Could you please provide me the success ratio of using blade servers, and also point me to any blade server forums? &lt;BR /&gt;I have seen a few presentations on YouTube and came to understand that IBM blades are more reliable and that a few incidents can be expected with HP. Could you please provide some recommendations or best practices to overcome those issues? &lt;BR /&gt;        &lt;A href="http://www.youtube.com/watch?v=pkYGQ7KjC7Y&amp;amp;feature=related" target="_blank"&gt;http://www.youtube.com/watch?v=pkYGQ7KjC7Y&amp;amp;feature=related&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 20 Feb 2008 14:15:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/blade-vs-rack-mounted-servers/m-p/4147784#M3313</guid>
      <dc:creator>Kameswar</dc:creator>
      <dc:date>2008-02-20T14:15:35Z</dc:date>
    </item>
    <item>
      <title>Re: Blade Vs Rack mounted Servers</title>
      <link>https://community.hpe.com/t5/integrity-servers/blade-vs-rack-mounted-servers/m-p/4147785#M3314</link>
      <description>Are you talking about blades in general or about Integrities in particular?&lt;BR /&gt;&lt;BR /&gt;(You placed the thread in the Integrity subforum.)</description>
      <pubDate>Wed, 20 Feb 2008 14:58:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/blade-vs-rack-mounted-servers/m-p/4147785#M3314</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2008-02-20T14:58:07Z</dc:date>
    </item>
    <item>
      <title>Re: Blade Vs Rack mounted Servers</title>
      <link>https://community.hpe.com/t5/integrity-servers/blade-vs-rack-mounted-servers/m-p/4147786#M3315</link>
      <description>I would like to have a few strong reasons on why to choose blade servers instead of rack mounted?&lt;BR /&gt;1) Density: 16 blades, 2 CPUs each, 4 cores per CPU, in 10U of space.&lt;BR /&gt;2) Less cabling: 6 power cables per 16 servers, 10 network cables per 16 servers, etc.&lt;BR /&gt;3) More durable: external high-efficiency power supplies and fans, which are easier to replace.&lt;BR /&gt;&lt;BR /&gt;What are the limitations or demerits of blades instead of rack servers apart from memory expansions?&lt;BR /&gt;c-Class uses SAS disks like other ProLiants; a half-height blade has 8 DIMM slots, so a maximum of 32 GB RAM. You can add 2 mezzanine cards for up to 6 extra network or Fibre Channel ports. I do not see any limitation here.&lt;BR /&gt;&lt;BR /&gt;What sort of resilience or backup plans need to be kept in hand before or after the blade servers' introduction?&lt;BR /&gt;Using RDP software, you can program automatic OS installation and software distribution, and even a "per bay" installation. Remove a blade from a bay, insert another one, and it will be configured in the same manner.&lt;BR /&gt;&lt;BR /&gt;How much power (watts) and cooling is required for a full load (16 * ½ blades or 8 full blades)?&lt;BR /&gt;About 5000 W, depending on how many mezzanines, DIMMs, and switches are on the back.&lt;BR /&gt;&lt;BR /&gt;The discussion about reliability is a tough one; everyone will have arguments, but c-Class is a good design, the cooling system is unique, and the administration is dead easy.&lt;BR /&gt;Also, check the new Virtual Connect modules; they make SAN and LAN administration a lot easier.&lt;BR /&gt;</description>
      <pubDate>Wed, 20 Feb 2008 16:10:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/blade-vs-rack-mounted-servers/m-p/4147786#M3315</guid>
      <dc:creator>Víctor Cespón</dc:creator>
      <dc:date>2008-02-20T16:10:22Z</dc:date>
    </item>
    <item>
      <title>Re: Blade Vs Rack mounted Servers</title>
      <link>https://community.hpe.com/t5/integrity-servers/blade-vs-rack-mounted-servers/m-p/4147787#M3316</link>
      <description>Depending on how you decide to configure the I/O modules, those 10 blade servers might not even need more than a handful of network cables coming out of the chassis.</description>
      <pubDate>Thu, 21 Feb 2008 01:50:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/blade-vs-rack-mounted-servers/m-p/4147787#M3316</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2008-02-21T01:50:20Z</dc:date>
    </item>
  </channel>
</rss>

