BladeSystem - General

Discussion - Blade servers or Rack servers better for virtualisation.

Occasional Advisor

Discussion - Blade servers or Rack servers better for virtualisation.

Hi Guys

What are your thoughts about blade servers for virtualisation?
Do you think things like Virtual Connect add an extra layer of complexity?
Are blade servers power hungry?
It would be good to know what the various people here think of blades.
Michal Kapalka (mikap)
Honored Contributor

Re: Discussion - Blade servers or Rack servers better for virtualisation.


If you ask Google "Blades vs. rack servers"
you will get a lot of links.

In my opinion, it depends on the customer's requirements and how much money they have for the solution.

I prefer rack servers instead of blades.


Re: Discussion - Blade servers or Rack servers better for virtualisation.

Here's my opinion,
and it's from a Service Engineer's point of view.

Overall power consumption.
Overall cooling.

Uplink bandwidth.
If you need to replace the midplane, all servers in the enclosure must be shut down during the service window.
Proprietary I/O modules.

Let's compare a half-height blade to a pizza box (1U): BL460c vs. DL360.
The system boards are basically the same:
same chipset, CPUs, memory, 2 embedded NICs, iLO, embedded Smart Array controller.
The DL has 2 PCI slots.
The BL has 2 mezzanine slots (PCIe), but proprietary.
The DL G5/G6 holds 6 HDDs in the front.
The BL holds only 2 HDDs.
The DL has an optional optical drive and floppy.
The BL has USB through the front connector, or shared through the Onboard Administrator (OA).
The BL has an internal USB connector for USB dongles.
DL - don't know.
The DL has 2 PSUs (the 2nd optional) and its own fans.
The BL has shared power supplies and fans.

A C7000 blade enclosure holds a maximum of 16 BL460c's in 10 units.
16 BLs vs. 10 DLs:
6 PSUs vs. 20 PSUs.
The interconnect modules also get their power from the blade enclosure.
2 or 8 power cords vs. 20 power cords,
depending on whether you choose 3-phase or single-phase PDUs in the blade enclosure.
0 LAN cables vs. 30 (2 NICs + iLO).
0 fiber cables vs. 20.

Let's try to configure a rack with 1 EVA4400,
10 servers, 2 SAN fabrics and 2 LAN switches.

The EVA has 2 power cords out of the rack.
There are PDUs in the bottom of the rack and PDU extensions mounted on the sides of the rack: 2 power cords for the controller enclosure, 2 power cords per disk shelf.

DLs + EVA4400:
An EVA4400 with 4 HDD shelves = 10 power cords.
10 DL360's with redundant PSUs = 20 power cords.
2 Brocade SAN switches = 4 power cords.
2 Cisco switches = 4 (2?) power cords.
That's a total of 38 (36) power cords,
plus power for the PDUs and PDU extensions.

C7000 + EVA4400:
EVA: 10 power cords.
C7000: 6 power cords, or 2 x 3-phase power cables.
Brocade: 0 power cords.
Cisco: 0 power cords.
That's a total of 16 power cords max.
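The power-cord arithmetic above can be written out as a quick sketch (counts taken straight from the post, so the same caveats apply, e.g. some Cisco models may need only one cord each):

```python
# Power cords leaving the rack, per the counts above.

# DL (rack server) solution
eva = 2 + 2 * 4         # EVA4400 controller enclosure + 4 disk shelves
dl_servers = 10 * 2     # 10 x DL360 with redundant PSUs
brocade = 2 * 2         # 2 Brocade SAN switches
cisco = 2 * 2           # 2 Cisco LAN switches
dl_total = eva + dl_servers + brocade + cisco
print(dl_total)         # 38

# C7000 blade solution: interconnects draw power from the enclosure
c7000 = 6               # 6 shared PSUs (or 2 x 3-phase feeds)
bl_total = eva + c7000
print(bl_total)         # 16
```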

LAN cables (2 embedded NICs + iLO):
DL = 30 LAN cables + 2 for the Brocade switches.
BL = 0 LAN cables + 2 for the Brocade and 2 for the OAs.
That's 4 vs. 32, + 1 for the EVA in both cases.
Fiber cables:
DLs: 20 + 4 for the EVA.
BLs: 0 + 4 for the EVA.

Space used for 10 servers + EVA, including LAN + SAN switches:
10 DLs = 10 units.
EVA = 10 units.
2 Brocade = 2 x 1 unit.
2 Cisco = 2 x 1 unit?
+ 2 x 2 units for PDUs, at least for the EVA.
Total 28 units.

C7000 enclosure = 10 units,
including 10 servers + 2 Cisco + 2 Brocade.
EVA = 10 units.
PDUs = 2 units.
= 22 units.

With the blades you can add 6 more servers without using more space or cables.

Let's take another solution.
You want as many servers as possible in a 42-unit rack.
Your LAN and SAN switches are outside the rack.

42 DLs in 42 units
= 84 power cords
= 126 LAN cables
= 84 fiber cables.
A total of 294 cables,
which you need to install, document and maintain.
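Written out as a sketch, using the per-server counts from the post (the blade-side LAN minimum of 18 is taken as stated rather than re-derived):

```python
# 42U rack of DL360s vs. four C7000 enclosures, cables out of the rack.

# 42 x DL360, LAN and SAN switches outside the rack
dl_power = 42 * 2        # redundant PSUs
dl_lan = 42 * 3          # 2 NICs + iLO per server
dl_fiber = 42 * 2        # two SAN fabrics per server
dl_cables = dl_power + dl_lan + dl_fiber
print(dl_cables)         # 294 cables to install, document and maintain

# 4 x C7000 = 64 half-height blades
bl_power = 4 * 2         # 2 x 3-phase feeds per enclosure
bl_lan = 18              # stated minimum: switch uplinks + OA connections
bl_fiber = 4 * 2         # 1 uplink per Brocade module, 2 modules per enclosure
bl_cables = bl_power + bl_lan + bl_fiber
print(bl_cables)         # 34 cables out of the rack
```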

In 42 units you can put 4 C7000s (not recommended, I believe).
4 x 16 blades = 64 servers!
Power = 8 x 3-phase cables.
LAN cables: minimum 18 out of the rack
(1 per LAN switch + 1 per Brocade + 2 for the OAs).
Minimum 8 fiber cables.
That's a minimum of 34 cables coming out of the rack.
If you choose Virtual Connect for the LAN, you cut the minimum by 8 cables,
and you will need 3 x 1-meter LAN cables for linking the enclosures.

When I say minimum, I limit the bandwidth on the switches to 1 uplink per module.
If you use 1 more uplink per switch, you increase the number of cables by 8.
And if you are not happy with the bandwidth of the uplinks, you can choose pass-through modules, but then you only save the 40 cables for iLO, plus the power cords.

Also, if you want more NICs: in the DL you can put 2 x 4-port NICs = 10 NICs total;
in the BL you can put 1 x 2-port + 1 x 4-port = a total of 8 NICs.

If the uplink bandwidth on the switches / Virtual Connect modules is sufficient, it can save you a lot of cabling, documentation and money.

I will claim that VC makes it simpler and more flexible.


Adrian Clint
Honored Contributor

Re: Discussion - Blade servers or Rack servers better for virtualisation.

Blade servers for virtualization... well, nearly everyone who buys blades is virtualizing. But people who are virtualizing are not always buying blades, usually because they are not comfortable with the format,
or because they have other preconceptions, like that blades consume more power and are "difficult" to manage.
Blades consume less power than equivalent rack-mount servers - fact!
Yes, Virtual Connect will add a layer of complexity to a design, but what it adds in ease of management and server recovery can quickly outweigh this.
Occasional Advisor

Re: Discussion - Blade servers or Rack servers better for virtualisation.

Hi Guys,
really good, informative and comprehensive answers.
Thanks a lot.
David Claypool
Honored Contributor

Re: Discussion - Blade servers or Rack servers better for virtualisation.

Since, almost by definition, a virtualization host needs the utmost in reliability, the redundancy features of a c7000 enclosure for power and cooling are ideal - beyond what simple redundant power supplies can do for you in a rack server.

Additionally, since also almost by definition a VM host will be connected to a SAN, you can save a LOT of money when using a BladeSystem Fibre Channel interconnect because you don't have to buy transceivers for every server, just for the uplinks from the interconnect.

Virtual Connect Flex-10 provides additional savings and flexibility. Because of the multiple networks needed in a VM scenario (production, vCenter, vMotion, etc), Virtual Connect Flex-10 lets you define up to 8 distinct networks using 2 interconnects; in order to get that same capability you would need 8 separate interconnects using traditional switches. Finally, each of those networks can utilize a custom allocation of bandwidth up to a 10Gb total in increments of 100Mb, allowing you to allocate more or less bandwidth depending on the need of the interface (e.g. 500Mb for the vCenter network). Using traditional switching means all interfaces have to be at the same speed.
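The bandwidth-carving rule described above (per-network allocations in 100Mb increments, fitting within a 10Gb port) can be illustrated with a small sanity check; the network names and allocations here are made up for the example, not taken from any real configuration:

```python
# Hypothetical Flex-10-style bandwidth split for one 10Gb server port.
# Each network gets an allocation in 100Mb increments; the sum must
# fit within the 10Gb (10,000Mb) physical port.
allocations_mb = {
    "production": 5000,   # example values only
    "vMotion": 3000,
    "management": 1500,
    "vCenter": 500,
}

assert all(mb % 100 == 0 for mb in allocations_mb.values()), "100Mb increments"
total_mb = sum(allocations_mb.values())
assert total_mb <= 10_000, "must fit in the 10Gb port"
print(total_mb)  # 10000
```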

As far as uplink bandwidth goes, all servers get connected to an interconnect of some sort, whether elsewhere in some other rack or in the enclosure. That first connection is not going to have an equal number of gozintos to gozoutofs, so you always are going to have an oversubscription situation, regardless of where the interconnect lives. However, with VC Flex-10, there is a downlink to the server running at 10Gb for each of the up to 16 servers, and the interconnect module has 5 (or 6 if you don't use the internal bridge between modules) uplinks of 10Gb each; that lets you minimize possible oversubscription issues to approximately a 3:1 ratio, much better than most rack-based switches that might have 24 or 48 ports and only 1 or 2 uplinks running from them.
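The "approximately 3:1" figure falls out of the port counts given above; a minimal sketch:

```python
# Rough oversubscription estimate for a VC Flex-10 module in a C7000.
downlink_gb = 16 * 10   # one 10Gb downlink per half-height blade, 16 blades
uplink_gb = 5 * 10      # 5 external 10Gb uplinks (6 if the internal bridge is unused)
ratio = downlink_gb / uplink_gb
print(ratio)            # 3.2, i.e. roughly the 3:1 mentioned above
```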

BladeSystem is ideal for virtualization:

- reliability
- cost savings
- flexibility designed for multiple networks