
Is there a chance 2 NVMe SSDs limit per 2000 Gen10+ node will be eliminated?

 
SOLVED
Almantas Klimas
Occasional Advisor


The Apollo 2000 Gen10+ with 4 nodes, 6 disk slots each, and 2x AMD CPUs would be ideal, compact hardware to run as CDN nodes for video streaming and as app database servers in the data center. But there is one blocking limitation: the 6 disk slots per node can only take legacy SAS/SATA drives, which are no longer used in modern data centers. For NVMe, unfortunately, only 2 SSDs per node are supported; the other 4 slots stay empty. For a CDN node that delivers 200 Gbps of streaming output, 6 PCIe 4.0 NVMe SSDs are the minimum needed to keep that outgoing data flow sustained. The same applies to using the nodes for app databases, etc. Allowing only 2 NVMe disks out of 6 slots blocks the Apollo platform from much wider usage scenarios.
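As a rough sanity check on that sizing, here is a sketch in Python; the 5 GB/s sustained-read figure per PCIe 4.0 x4 SSD and the 20% headroom are my assumptions, and real drives and workloads will vary:

import math

egress_gbps = 200                 # target outgoing stream rate
egress_gB_s = egress_gbps / 8     # = 25 GB/s of data read from disk
per_drive_gB_s = 5                # assumed sustained read per PCIe 4.0 x4 SSD
headroom = 1.2                    # assumed margin for mixed I/O and cache misses

drives = math.ceil(egress_gB_s * headroom / per_drive_gB_s)
print(f"~{drives} NVMe drives needed for {egress_gbps} Gbps")  # ~6

Under those assumptions the count lands right at 6 drives, which is why 2 slots is not workable for this use case.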

Is there any chance the 2-disk NVMe restriction will be lifted? It is a great platform, but this is a blocking limitation. Technically there should be no such restriction, as the 2 CPUs have plenty of PCIe 4.0 lanes to support 6 NVMe SSDs. We unfortunately bought DL325 Gen10+ servers instead and spent twice the rack units per server instance, just because the Apollo 2000 Gen10+ cannot support more than 2 NVMe SSDs per node. Why was this restriction to only 2 NVMe SSDs introduced on a server with 6 physical slots, and can we expect it to be eliminated so that 6 NVMe disks are supported?

3 REPLIES
sudhirsingh
HPE Pro

Re: Is there a chance 2 NVMe SSDs limit per 2000 Gen10+ node will be eliminated?

This is by design.

I am not sure, but the Gen10 Plus should have support for more than 2 NVMe drives, since the R2800 Gen10 has a 16 SFF NVMe backplane.

While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company


Almantas Klimas
Occasional Advisor

Re: Is there a chance 2 NVMe SSDs limit per 2000 Gen10+ node will be eliminated?

Why is this blocking limitation "by design"? It eliminates the majority of possible Apollo 2000 G10+ use cases (blocking usage as CDN servers, as DB servers, or as very high volume web app servers), while plenty of PCIe 4.0 lanes go unused. 128 PCIe lanes are available from the AMD CPUs; 3x16 go to the OCP and PCIe slots, leaving 80 still available. The 6 SSDs need only 24 lanes (4 lanes per SSD), so 56 would still remain.
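For reference, here is that lane arithmetic spelled out (a sketch; the 3x16 slot allocation reflects my reading of the platform, not an official lane map):

total_lanes = 128        # usable PCIe 4.0 lanes on a 2P AMD EPYC node
slot_lanes = 3 * 16      # assumed: OCP mezzanine + PCIe slots at x16 each
nvme_lanes = 6 * 4       # six NVMe SSDs at x4 each

after_slots = total_lanes - slot_lanes    # 80 lanes left after the slots
after_nvme = after_slots - nvme_lanes     # 56 lanes still spare
print(f"after slots: {after_slots}, after 6 NVMe SSDs: {after_nvme}")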

If 6 PCIe 4.0 NVMe SSDs were supported instead of 2 on the Apollo 2000 G10+, that would solve all the rack unit waste we observe with the DL325 G10+ and DL385 G10+, which is why we started searching for an alternative in the first place. It should have been Apollo 2000 G10+ nodes, using 0.5 RU per physical server. But 2 NVMe disks per server are not enough, so 0.5 RU is wasted per server instance with the single-CPU DL325 G10+, and 1.5 RU is wasted per 2-CPU DL385 G10+ database server, where the only need beyond the DL325 G10+ was the second CPU; 6 disks are enough for the majority of DBs, since NVMe SSDs are very fast. If you can pass this to the engineer responsible for the Apollo 2000 G10+ design mistake of providing 6 disk slots per server but allowing only 2 of them to be used for NVMe SSDs, please do. I hope the PCIe cabling updates can be made so that in the next hardware revision all 6 NVMe SSDs work.

In the blade system era, the C7000 enclosure gave us 16 servers per 10 RU, or 0.63 RU per server. The Apollo 2000 G10+, with 4 nodes per 2 RU, effectively closes out the blade era at 0.5 RU per server, and with space for 6 SSDs rather than the 2 per server we had with blades (see the comparison below). We would get much more universal servers in a very compact configuration. The Apollo 2000 G10+ has everything it needs to be the major DC server, if all 6 slots were supported per node; the PCIe slots already give the flexibility to use 200 Gbps network cards, etc. Great server, and I like the n2600 chassis innovation, but it cannot fit most DC use scenarios until all 6 SSDs can be used per server. Thanks.
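To put the density comparison in numbers (a sketch using the form factors discussed above):

platforms = {
    "c7000 blades (16 servers / 10U)":   10 / 16,
    "Apollo 2000 Gen10+ (4 nodes / 2U)":  2 / 4,
    "DL325 Gen10+ (1 server / 1U)":       1 / 1,
    "DL385 Gen10+ (1 server / 2U)":       2 / 1,
}
for name, ru_per_server in platforms.items():
    print(f"{name}: {ru_per_server:.2f} RU per server")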

sudhirsingh
HPE Pro
Solution

Re: Is there a chance 2 NVMe SSDs limit per 2000 Gen10+ node will be eliminated?

Noted. Many thanks for this update.

While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company
