BladeSystem Server Blades

BL870C i2 & memory sizing/interleaving?

Trusted Contributor


Doug had a question regarding the Integrity blades:




I’m hoping someone on this list can help me. I’ve recently had some questions regarding the internal workings of the HP BL870c i2 hardware, and they have me a bit stumped. I’m finding it difficult to find someone who can answer them appropriately.


The hardware questions all revolve around the amount of memory. A change before my time brought the RAM down to 24 GB, primarily for cost reasons. The comment that came back is that we can’t build a BL870c i2 with 24 GB using memory DIMMs of the same size, which will cause the memory interleaving to perform poorly and result in reduced performance. The memory size should be either 16 GB or 32 GB, not 24.


While attempting to research this, I found out about two different memory usage paradigms:

  • Socket-local memory (SLM), in which processes are locked to specific cores/processors so they can access the memory local to that processor faster
  • Interleaved memory (ILM), in which the processor is configured to treat all memory equally regardless of location
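To make the contrast concrete, here is a minimal sketch of the two paradigms. The interleave granule size is an assumption for illustration only (not a BL870c i2 figure); the four sockets match the BL870c i2's four-socket design. It shows why a process pinned to one socket sees mostly remote accesses under interleaving but all-local accesses when its data lives in socket-local memory:

```python
# Illustrative model only: how ILM vs. SLM map physical addresses to
# sockets. LINE (the interleave granule) is a hypothetical value.
LINE = 64          # bytes per interleave granule (assumption)
SOCKETS = 4        # the BL870c i2 is a four-socket blade

def ilm_home(addr):
    """Interleaved: consecutive granules rotate across all sockets."""
    return (addr // LINE) % SOCKETS

def slm_home(addr, region_size):
    """Socket-local: each socket owns one contiguous region."""
    return addr // region_size

# A process pinned to socket 0, touching 1 MiB of its own data:
addrs = range(0, 1 << 20, LINE)
ilm_remote = sum(ilm_home(a) != 0 for a in addrs) / len(addrs)
slm_remote = sum(slm_home(a, 1 << 20) != 0 for a in addrs) / len(addrs)
print(f"remote accesses under ILM: {ilm_remote:.0%}")  # 75% with 4 sockets
print(f"remote accesses under SLM: {slm_remote:.0%}")  # 0% if data is local
```

With four sockets, interleaving sends three quarters of a pinned process's accesses to remote sockets, which is the trade SLM is meant to avoid.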


The design doc is for generic blades; however, the vast majority of the systems that will be built will be large database and/or SAP-related systems. Based on that, I suspect the interleaved model will be better overall, as socket-local memory seems designed for specific configurations and applications.


So, to the questions:

  1. Is the statement regarding memory interleaving and memory size on a BL870c i2 accurate?
  2. How can I tell which memory paradigm is in use on a BL870c i2, and how can I switch it if desired?
  3. Is my suspicion that the ILM model is better for a generic build accurate?


Thanks for your time; I appreciate any help/tips/suggestions anyone can send my way.




Lidia replied:




The BL870c i2 requires loading memory in quads (groups of four identical DIMMs). A 24 GB memory configuration cannot be achieved with the supported 4 GB or 8 GB DIMMs.
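The quad rule can be checked with simple arithmetic: a quad of 4 GB DIMMs adds 16 GB and a quad of 8 GB DIMMs adds 32 GB, so every reachable capacity is a sum of 16s and 32s. A short sketch, using only the quad rule and DIMM sizes stated above:

```python
# Enumerate total capacities reachable by loading whole quads of the
# supported DIMM sizes (4 GB and 8 GB, per the reply above).
DIMMS_PER_QUAD = 4

def reachable(max_gb=64):
    caps = set()
    for q4 in range(max_gb // 16 + 1):        # quads of 4 GB DIMMs
        for q8 in range(max_gb // 32 + 1):    # quads of 8 GB DIMMs
            total = q4 * 4 * DIMMS_PER_QUAD + q8 * 8 * DIMMS_PER_QUAD
            if 0 < total <= max_gb:
                caps.add(total)
    return sorted(caps)

print(reachable())        # [16, 32, 48, 64]
print(24 in reachable())  # False: 24 GB is not a whole number of quads
```

Every capacity is a multiple of 16 GB, so 24 GB falls between the valid 16 GB and 32 GB points, exactly as the original comment said.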


The memory subsystem architecture is ccNUMA, which means that loading processors without local memory leaves memory resources unused: the bandwidth of memory controllers with no DIMMs loaded is wasted, and processors with no local memory see increased memory latency, since they must reach memory attached to another processor socket across QPI hops.
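A back-of-envelope model of those two costs, with hypothetical bandwidth and latency numbers chosen only to show the shape of the effect (they are not BL870c i2 figures):

```python
# Hypothetical numbers to illustrate the ccNUMA point: empty memory
# controllers contribute no bandwidth, and threads on sockets without
# DIMMs pay remote (QPI-hop) latency on every access.
SOCKETS = 4
BW_PER_CTRL_GBS = 10             # assumed per-controller bandwidth
LOCAL_NS, REMOTE_NS = 100, 180   # assumed local vs. one-hop latencies

def system_bandwidth(populated):
    """Aggregate memory bandwidth with `populated` controllers loaded."""
    return populated * BW_PER_CTRL_GBS

def thread_latency(on_populated_socket, populated):
    """Average latency for one thread, with memory interleaved across
    the populated sockets only."""
    if not on_populated_socket:
        return REMOTE_NS                 # every access crosses QPI
    local_frac = 1 / populated           # share of granules homed locally
    return local_frac * LOCAL_NS + (1 - local_frac) * REMOTE_NS

print(system_bandwidth(2), "GB/s of a possible", system_bandwidth(4))
print(f"{thread_latency(False, 2):.0f} ns on an empty socket")
print(f"{thread_latency(True, 2):.0f} ns on a populated one")
```

With only two of four controllers loaded, half the aggregate bandwidth is simply gone, and threads on the empty sockets pay the full remote latency on every access.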


SLM will have the lowest memory latency. It works best when the applications are NUMA-aware.




Any experience in this area and suggestions for Doug?