07-17-2019 02:25 AM
DL380Gen10 with all-NVMe for SuSE OpenStack Cloud, NVMe handling, limits, bottlenecks
Hi,
we plan to use DL380 Gen10 servers for our new SuSE OpenStack Cloud / SuSE Enterprise Storage (Ceph) deployment. As NVMe drives are getting cheaper and would eliminate the need for array controllers, we are thinking about an all-NVMe setup with 2 NVMe drives for boot (SLES) and 2 or more NVMe drives for the Ceph OSDs.
I found this HPE document:
https://h20195.www2.hpe.com/v2/getpdf.aspx/4aa6-3464enw.pdf
Q: Can NVMe drives be used for operating system boot purposes?
A: NVMe 2.5" SSDs work in UEFI and legacy modes, but there is no boot support at this time. The drive performance would be best used for workloads that demand faster data access.
Also, Hot Swap seems to be supported, but not Hot Add.
Questions:
- with NVMe drives, is only software RAID possible?
- is it now possible to boot SLES from NVMe with a software RAID 1? I heard that this should be working with SES.
- Hot Swap is nice, but a downtime for Hot Add (expansion) is not so nice; is this still the case?
- what about the bandwidth needed for a large number of NVMe devices? What is the bottleneck if we want to put 10+ NVMe drives in a server? The PCI bus?
Is anyone here using an all-NVMe setup with SLES or any other Linux, and how is this working in operations? Anything else besides the above that I should think about?
07-30-2019 02:02 AM
Re: DL380Gen10 with all-NVMe for SuSE OpenStack Cloud, NVMe handling, limits, bottlenecks
Hi,
- with NVMe drives, is only software RAID possible?
If your server is using the S100i controller, it is essentially software RAID, as the RAID function is handled by a software driver. The RAID feature of this controller is not supported for Linux. You can use either an "H"- or "P"-series hardware-based Smart Array controller instead.
Another option is the LSRRB (Linux Software RAID Redundant Boot) solution with the S100i in SATA AHCI mode. LSRRB uses the RAID 1 functionality built into the OS, with the addition of making both drives bootable: https://downloads.linux.hpe.com/SDR/project/lsrrb/
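Once an OS-level RAID 1 mirror like this is in place, its health is visible in /proc/mdstat. As a minimal illustration (a hypothetical helper, not part of LSRRB or any HPE tool), a script like the following can parse that file's format and flag degraded md arrays:

```python
import re

def degraded_md_arrays(mdstat_text):
    """Return names of md arrays whose member-status string
    ([UU], [U_], ...) shows a missing member ('_')."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
            continue
        # The status such as "[2/2] [UU]" appears on the line after the
        # device list; device entries like "nvme0n1p2[0]" do not match
        # the [U_]-only character class.
        if current:
            status = re.search(r"\[([U_]+)\]", line)
            if status:
                if "_" in status.group(1):
                    degraded.append(current)
                current = None
    return degraded

# Example /proc/mdstat content for two md RAID 1 sets on NVMe,
# one healthy and one with a failed member:
sample = """\
Personalities : [raid1]
md0 : active raid1 nvme0n1p2[0] nvme1n1p2[1]
      234879488 blocks super 1.2 [2/2] [UU]

md1 : active raid1 nvme0n1p3[0]
      117439744 blocks super 1.2 [2/1] [U_]

unused devices: <none>
"""
print(degraded_md_arrays(sample))  # → ['md1']
```

In production you would read the real /proc/mdstat (or use `mdadm --detail`) instead of a sample string; this only shows what a degraded mirror looks like.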
This document explains the disk controller options for Gen10 ProLiant servers:
HPE Smart Array SR Gen10 User Guide
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00019059en_us
- is it now possible to boot SLES from NVMe with a software RAID 1? I heard that this should be working with SES.
See the LSRRB project: https://downloads.linux.hpe.com/SDR/project/lsrrb/
- Hot Swap is nice, but a downtime for Hot Add (expansion) is not so nice; is this still the case?
Yes, Hot Add is still not supported.
- what about the bandwidth needed for a large number of NVMe devices? What is the bottleneck if we want to put 10+ NVMe drives in a server? The PCI bus?
https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=a00008180enw
Please refer to page 31 of the document linked above for the maximum number of supported NVMe drives.
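On the bandwidth question, a rough sanity check helps: each U.2 NVMe drive is a PCIe 3.0 x4 device (~0.985 GB/s usable per lane, so ~3.94 GB/s per drive), and the aggregate for 10+ drives quickly approaches the lane budget of a single CPU. The figures below are generic PCIe 3.0 numbers and an assumed 48 lanes per Xeon Scalable socket, not HPE-specific limits:

```python
# Rough PCIe 3.0 lane/bandwidth math for an all-NVMe layout.
# Generic figures (assumptions): adjust for your actual CPU and riser config.
PCIE3_GBPS_PER_LANE = 0.985   # GB/s per PCIe 3.0 lane after 128b/130b encoding
LANES_PER_NVME = 4            # a U.2 NVMe SSD is a PCIe x4 device

def nvme_aggregate_gbps(drive_count):
    """Theoretical aggregate NVMe bandwidth in GB/s if all drives stream at once."""
    return drive_count * LANES_PER_NVME * PCIE3_GBPS_PER_LANE

def cpu_lane_headroom(drive_count, cpu_lanes=48):
    """Lanes left for NICs/HBAs after direct-attaching the drives.
    48 lanes per Xeon Scalable CPU is an assumption; check your CPU spec."""
    return cpu_lanes - drive_count * LANES_PER_NVME

for n in (2, 10, 20):
    print(n, "drives:", round(nvme_aggregate_gbps(n), 1),
          "GB/s aggregate,", cpu_lane_headroom(n), "lanes left")
```

With 10 drives the theoretical aggregate is about 39 GB/s and only 8 lanes remain on one socket, which is why dense NVMe configurations rely on PCIe switches or a second CPU; in practice the network (e.g. 25 GbE ≈ 3 GB/s) is often the bottleneck for Ceph long before the PCIe bus is.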
Is anyone here using an all-NVMe setup with SLES or any other Linux, and how is this working in operations? Anything else besides the above that I should think about?
I am an HPE Employee