HPE Community > Servers and Operating Systems > HPE BladeSystem > BladeSystem - General
11-16-2012 07:03 AM
BL460 Gen8 / BL420 Gen8 - Mezz2 needs a second CPU installed in order to be used?
Dimitris questioned the need for a second CPU for certain use cases in Gen8 blades:
***************
Please verify whether the competition has this limitation as well, or whether it is HP only.
As far as I know, neither IBM nor Dell requires two CPUs in order to use the second mezzanine slot.
If this is HP only, then we have a handicap against the competition...
***********
Pedrag noted:
**************
Information taken from the QuickSpecs for the BL460c Gen8:
Step 2: Choose Required Options (one of the following from each list unless otherwise noted)
- HP Processors
- Core Processor Option Kits
**************
Discussion from Vincent:
***************
The Sandy Bridge architecture puts the PCIe controller inside the CPU, but nothing prevents a system designer from routing 2 PCIe slots to a single CPU. We at HP do that in DL servers, for example. It appears that Dell and IBM have chosen that approach for their blade servers as well (and I assume they do not use the PCIe lanes off the second CPU), while we have made the opposite choice. I guess that could give us a performance advantage in fully-loaded configs, but someone closer to Houston would have to explain why that choice was made.
******************
Explanation from James:
***************
Each Sandy Bridge processor has 40 PCIe Gen3 lanes available for use. For the BL460c, they were allocated in the following manner for CPU0:
Mezz1 – x16
LOM – x8
ROC (RAID controller) – x4
PCH-A (south bridge) – x4
iLO 4 – x4
Total: x36 lanes. That left only x4 lanes available for Mezz2, which is woefully inadequate for I/O performance, so the best trade-off was to route x16 lanes to Mezz2 from CPU1 instead. The same trade-off was made for the DL380 and DL360e, but NOT the DL360p, so if this is a deal-breaker, the DL360p connects all the PCIe slots to CPU0.
For the BL460c CPU1:
Mezz2 – x16
Total: x16
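James's lane budget can be double-checked with a quick sketch. The x40 total and the per-device allocations are taken directly from his post; the script itself is just illustrative arithmetic:

```python
# Lane budget for the BL460c Gen8 as described in James's post.
# Each Sandy Bridge-EP CPU provides 40 PCIe Gen3 lanes.
LANES_PER_CPU = 40

cpu0 = {
    "Mezz1": 16,
    "LOM": 8,
    "ROC (RAID controller)": 4,
    "PCH-A (south bridge)": 4,
    "iLO 4": 4,
}
cpu1 = {"Mezz2": 16}

used = sum(cpu0.values())
remaining = LANES_PER_CPU - used
print(f"CPU0: x{used} used, x{remaining} left over")  # x36 used, x4 left
print(f"CPU1: x{sum(cpu1.values())} used")            # x16 used
```

The x4 left over on CPU0 is exactly why Mezz2 had to be fed from CPU1 instead.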
As Vincent mentioned above, the PCIe controller has been integrated into the processor, and IBM and Dell are claiming they can access both expansion connectors from one processor. Kudos to both.
Dell was under similar connector restrictions as HP when Sandy Bridge came out, but they decided to route ONLY x8 PCIe to each of Mezz1 and Mezz2.
IBM routed a x16 PCIe bus from CPU0 and a x8 from CPU1 to Mezz1, and a x16 from CPU1 and a x8 from CPU0 to Mezz2. Very clever, but it makes the Mezz card options potentially confusing.
HP engineering decided to create a balanced architecture for the more likely dual-processor SKU by connecting a x16 PCIe bus from each CPU to its respective Mezz connector. For I/O-intensive applications (think InfiniBand and Fusion-io), x16 PCIe will outperform x8 PCIe. x16 connectivity will also outperform x8 when 20Gb and 40Gb Ethernet arrive.
Final note: if a customer is going with only 1 socket, the assumption is that they don't need all 16 cores. We do have 4-core HPC SKUs that would give the customer 8 cores across two sockets.
Hope this helps.
****************
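To put the three routings James describes side by side, here is a rough bandwidth comparison. The lane counts come from his post; the per-lane figure assumes PCIe Gen3's 8 GT/s with 128b/130b encoding (~0.985 GB/s per lane, per direction), and real-world throughput will be somewhat lower:

```python
# Approximate one-direction PCIe Gen3 bandwidth per mezzanine slot for the
# three vendor routings described above. Lane counts are per James's post.
GEN3_GB_PER_LANE = 0.9846  # 8 GT/s * 128/130 encoding, in GB/s

routings = {
    "HP (x16 per CPU)":      {"Mezz1": 16, "Mezz2": 16},
    "Dell (x8 each)":        {"Mezz1": 8, "Mezz2": 8},
    "IBM (x16 + x8 split)":  {"Mezz1": 16 + 8, "Mezz2": 16 + 8},
}

for vendor, slots in routings.items():
    summary = ", ".join(f"{s}: ~{n * GEN3_GB_PER_LANE:.1f} GB/s" for s, n in slots.items())
    print(f"{vendor}: {summary}")
```

A x16 slot tops out around ~15.8 GB/s versus ~7.9 GB/s for x8, which is the gap James points to for InfiniBand and Fusion-io workloads.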
Comments? Which way do you prefer?