DL360 G9 and the Turbo Z Drive compatibility?
05-23-2022 11:59 AM - last edited on 05-24-2022 12:39 AM by support_s
We have a bank of 5 x DL360 G9 servers running a NoSQL DB cluster. We have learned that direct I/O is much faster and were advised to move to SSD/NVMe. I picked up 2 of the Z Turbo G2 PCIe adapters and have both a Samsung 983 NVMe and a Sabrent NVMe. Running Ubuntu 18.04, and neither the BIOS nor the OS sees either drive in either of the PCIe slots on the riser (tried both slots individually on 2 separate machines). Is there a way to utilize the Turbo Z, or should I go with a 3rd-party PCIe-to-NVMe adapter?
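For anyone checking the same symptom, a minimal way to confirm from the Ubuntu side whether the adapter and drive are being enumerated at all (a generic sketch using standard tools, not the exact commands used in this thread; the nvme command assumes the nvme-cli package is installed):

# Check whether the PCIe adapter itself shows up on the bus
lspci -nn | grep -i -e nvme -e "non-volatile"

# List any NVMe namespaces the kernel has registered (requires nvme-cli)
sudo nvme list

# Show block devices; an enumerated NVMe drive appears as /dev/nvme0n1, etc.
lsblk

If the card does not appear in lspci output at all, the problem is at the BIOS/PCIe enumeration level rather than a missing driver.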
05-23-2022 09:08 PM
Re: DL360 G9 and the Turbo Z Drive compatibility?
Hi,
Please check the ProLiant DL360 Gen9 Server QuickSpecs for NVMe requirements and compatibility.
Similarly, check the vendor's website to confirm whether the Z Turbo drive is supported on ProLiant models.
There are a couple of earlier posts on the same question:
https://community.hpe.com/t5/ProLiant-Servers-ML-DL-SL/Will-HP-Turbo-Z-Drive-work-on-Proliant-Server/m-p/6746484
https://community.hpe.com/t5/ProLiant-Servers-ML-DL-SL/HP-Turbo-Z-Drive-on-ProLaint-ML350/m-p/6549648
Thank You!
I work with HPE but opinions expressed here are mine.

05-26-2022 11:35 AM
Solution
If someone comes across this post, here is what we have found. The articles the rep linked are just the same question with no responses. We found that the DL360 and DL380 G9 servers do NOT recognize the HP Z Turbo G2 PCIe card regardless of the NVMe installed. However, a simple generic PCIe NVMe adapter from Amazon worked like a charm. It even shows as a bootable device.
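As a quick sanity check once the generic adapter is installed (a sketch using standard Ubuntu tools; device names are illustrative):

# Confirm the drive enumerated as a block device
lsblk -d -o NAME,MODEL,SIZE | grep -i nvme

# On a UEFI system, confirm whether it appears among the boot entries
sudo efibootmgr -v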
Also, we ran 4k fragmented-file I/O tests; here are the results:
RAID 6 - SAS (8 x 600 GB 15K RPM 12G SAS drives)
Run status group 0 (all jobs):
READ: bw=99.8KiB/s (102kB/s), 99.8KiB/s-99.8KiB/s (102kB/s-102kB/s), io=5992KiB (6136kB), run=60019-60019msec
WRITE: bw=105KiB/s (107kB/s), 105KiB/s-105KiB/s (107kB/s-107kB/s), io=6288KiB (6439kB), run=60019-60019msec
NVME - Non-Direct I/O
Run status group 0 (all jobs):
READ: bw=3182KiB/s (3259kB/s), 3182KiB/s-3182KiB/s (3259kB/s-3259kB/s), io=186MiB (196MB), run=60001-60001msec
WRITE: bw=3162KiB/s (3238kB/s), 3162KiB/s-3162KiB/s (3238kB/s-3238kB/s), io=185MiB (194MB), run=60001-60001msec
NVME - Direct I/O
Run status group 0 (all jobs):
READ: bw=3486KiB/s (3570kB/s), 3486KiB/s-3486KiB/s (3570kB/s-3570kB/s), io=204MiB (214MB), run=60001-60001msec
WRITE: bw=3472KiB/s (3555kB/s), 3472KiB/s-3472KiB/s (3555kB/s-3555kB/s), io=203MiB (213MB), run=60001-60001msec
These are fio worst-case read/write 4k fragmented-file I/O load tests (60-second runs). The NVMe resulted in roughly a 34x increase in I/O performance over the RAID 6 SAS array.
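For reference, a fio invocation along these lines reproduces this kind of worst-case small-block test (a sketch only; the exact job parameters were not posted, so the file size, queue depth, and read/write mix below are assumptions):

# Worst-case 4k random read/write test against a test file
# --direct=1 is the "Direct I/O" run above; set --direct=0 for the buffered run
fio --name=4k-worstcase \
    --filename=fio-testfile --size=1G \
    --rw=randrw --rwmixread=50 \
    --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=1 --numjobs=1 \
    --time_based --runtime=60 --group_reporting

The "Run status group 0 (all jobs)" blocks above are the standard fio summary output from runs like this.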