BladeSystem - General

BL460 Gen8 / BL420 Gen8 - Mezz2 needs a second CPU installed in order to be used?

 
chuckk281
Trusted Contributor


Dimitris questioned the need for a second CPU for certain use cases in Gen8 blades:

 

***************

 

Please verify whether this is something the competition also has, or whether it is HP only.

As far as I know, neither IBM nor Dell requires two CPUs in order to use the second mezz slot.

If it is only HP, then we have a handicap against the competition...

 

***********

 

Pedrag noted:

 

**************

 

Information taken from the QuickSpecs for the BL460c Gen8:

 

Step 2: Choose Required Options (one of the following from each list unless otherwise noted)

HP Processors

Core Processor Option Kits
NOTE: The BL460c Gen8 supports one or two processors.
NOTE: All configure-to-order processor kits (i.e. xxxxxx-L21) contain one (1) processor.
NOTE: If two processors are desired, select one xxxxxx-L21 here in Step 2 and one xxxxxx-B21 in Step 3.
NOTE: The BL460c Gen8 includes two I/O mezzanine expansion slots. A processor must be installed in processor slot 1 for access to the first mezzanine expansion slot (expansion slot 1). A processor must be installed in processor slot 2 for access to the second mezzanine expansion slot (expansion slot 2).
NOTE: All processors within the server must be identical.
NOTE: All processors support Intel Hyper-Threading and Intel Turbo Boost Technologies except the E5-2603 and E5-2609.
NOTE: The letter "L" following the model number denotes lower wattage.
NOTE: The processor model as well as the memory configuration determines the maximum speed at which memory can operate. Please see the "Memory" section later in this document.
NOTE: The Intel Xeon E5-2620 processor does not support DIMMs at 1.35V. Using the HP RBSU, 1.35V DIMMs can be changed to operate at 1.5V.
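
To make the mezzanine note above concrete, here is a minimal sketch (plain Python, not any HP tool; the function name is made up) of the dependency it describes: each mezzanine expansion slot is usable only when the matching processor socket is populated.

# Illustration only: CPU-to-mezzanine dependency described in the QuickSpecs note.
def usable_mezz_slots(installed_cpus):
    """Return which mezzanine expansion slots are usable, given the set of
    populated processor sockets (1 and/or 2)."""
    required_cpu = {1: 1, 2: 2}  # mezz slot -> CPU socket it depends on (per the note)
    return sorted(slot for slot, cpu in required_cpu.items() if cpu in installed_cpus)

print(usable_mezz_slots({1}))     # [1]    single-CPU config: only Mezz1 is usable
print(usable_mezz_slots({1, 2}))  # [1, 2] dual-CPU config: both Mezz slots usable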

 

Discussion from Vincent:

 

***************

 

The Sandy Bridge architecture puts the PCIe controller inside the CPU, but nothing prevents a system designer from routing 2 PCIe slots to a single CPU. We at HP do that in DL servers, for example. It appears that Dell and IBM have chosen that route for their blade servers as well (and, I assume, do not use the PCIe lanes off the second CPU), while we have made the opposite choice. I guess that could give us a performance advantage in fully loaded configs, but someone closer to Houston would have to explain why that choice was made.

 

******************

 

Explanation from James:

 

***************

 

Each Sandy Bridge processor has 40 PCIe Gen3 lanes available for use.  For the BL460c, they were allocated in the following manner for CPU0:

Mezz1 – x16

LOM – x8

ROC (RAID Controller) – x4

PCH-A (South Bridge) – x4

iLO 4 – x4

Total: x36 lanes.  That left only x4 lanes available for Mezz2, which is woefully inadequate for I/O performance, so the best trade-off was to route x16 lanes to Mezz2 from CPU1 instead.  The same trade-off was made for the DL380 and DL360e, but NOT the DL360p; so if this is a deal breaker, the DL360p connects all of its PCIe slots to CPU0.
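
For anyone checking the arithmetic, here is a quick tally of the CPU0 lane budget quoted above (illustration only, using James's numbers):

# Tally of the CPU0 PCIe lane budget quoted above (illustration only).
TOTAL_LANES = 40  # PCIe Gen3 lanes per Sandy Bridge processor, per James

cpu0_allocation = {
    "Mezz1": 16,
    "LOM": 8,
    "ROC (RAID controller)": 4,
    "PCH-A (south bridge)": 4,
    "iLO 4": 4,
}

used = sum(cpu0_allocation.values())
print(f"CPU0 lanes used: x{used}")                     # x36
print(f"Lanes left for Mezz2: x{TOTAL_LANES - used}")  # x4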

 

For the BL460c CPU1:

Mezz2 – x16

Total: x16

 

As Vincent mentioned above, the PCIe controller has been integrated into the processor, and IBM and Dell are claiming they can access both expansion connectors from one processor.  Kudos to both.

Dell was under similar connector restrictions as HP when Sandy Bridge came out, but they decided to route ONLY x8 PCIe to both Mezz1 and Mezz2.

IBM routed an x16 PCIe bus from CPU0 and an x8 from CPU1 to Mezz1, and an x16 from CPU1 and an x8 from CPU0 to Mezz2.  Very clever, but it makes for potentially confusing Mezz card options.

HP engineering decided to create a balanced architecture for the more likely dual-processor SKU by connecting an x16 PCIe bus from each CPU to its respective Mezz connector.  For I/O-intensive applications (think IB and Fusion I/O), x16 PCIe will outperform x8 PCIe.  x16 PCIe connectivity will also outperform x8 PCIe when 20Gb and 40Gb Ethernet arrive.
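
To put the three routing choices side by side, the lane counts as James describes them could be summarized like this (a sketch only, not vendor documentation; the CPU source for Dell's x8 links is an assumption based on the one-processor claim above):

# Per-vendor Mezz routing as described above (illustrative summary only).
# Dell's CPU assignment is assumed (CPU0) from the "one processor" claim.
routing = {
    "HP BL460c Gen8": {"Mezz1": {"CPU0": 16}, "Mezz2": {"CPU1": 16}},
    "Dell":           {"Mezz1": {"CPU0": 8},  "Mezz2": {"CPU0": 8}},
    "IBM":            {"Mezz1": {"CPU0": 16, "CPU1": 8},
                       "Mezz2": {"CPU1": 16, "CPU0": 8}},
}

for vendor, slots in routing.items():
    summary = ", ".join(
        f"{slot} x{sum(links.values())} ({'+'.join(links)})"
        for slot, links in slots.items()
    )
    print(f"{vendor}: {summary}")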

 

Final note: if a customer is going with only one socket, the assumption is that they don't need all 16 cores. We do have 4-core HPC SKUs that would give the customer 8 cores across two sockets.

 

Hope this helps.

 

****************

 

Comments? Which way do you prefer?