ProLiant Servers (ML,DL,SL)

DL360 G9 and the Turbo Z Drive compatibility?

 
SOLVED
GTSupport
Collector

DL360 G9 and the Turbo Z Drive compatibility?

We have a bank of 5 x DL360 G9 servers running a NoSQL DB cluster.  We have learned that direct I/O is much faster and were advised to move to SSD/NVMe.  I picked up 2 of the Z Turbo G2 PCIe adapters and have both a Samsung 983 NVMe and a Sabrent NVMe.  Running Ubuntu 18.04, neither the BIOS nor the OS sees either drive in either of the PCIe slots on the riser (tried both slots individually on 2 separate machines).  Is there a way to utilize the Z Turbo, or should I go with a 3rd-party PCIe-to-NVMe adapter?

Suman_1978
HPE Pro

Re: DL360 G9 and the Turbo Z Drive compatibility?

Hi,

Please check the ProLiant DL360 Gen9 Server QuickSpecs for NVMe requirements and compatibility.
Likewise, check the vendor's website to confirm whether the Z Turbo Drive is listed as compatible with ProLiant models.

There are a couple of earlier posts on the same topic:
https://community.hpe.com/t5/ProLiant-Servers-ML-DL-SL/Will-HP-Turbo-Z-Drive-work-on-Proliant-Server/m-p/6746484
https://community.hpe.com/t5/ProLiant-Servers-ML-DL-SL/HP-Turbo-Z-Drive-on-ProLaint-ML350/m-p/6549648

Thank You!
I work with HPE but opinions expressed here are mine.
GTSupport
Collector
Solution

Re: DL360 G9 and the Turbo Z Drive compatibility?

If someone comes across this post, here is what we have found.  The articles linked by the rep are nothing more than the same question with no responses.  We have found that the DL360 and DL380 Gen9 servers do NOT recognize the HP Z Turbo Drive G2 PCIe card regardless of the NVMe installed.  However, a simple generic PCIe-to-NVMe adapter from Amazon worked like a charm.  It even shows up as a bootable device.
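For anyone debugging the same symptom, a quick sketch of how to check from Linux whether a drive on such an adapter is actually visible. These are generic sysfs/pciutils checks, not commands from the original posts:

```shell
# Check whether the kernel's nvme driver has bound to any controller.
if ls /sys/class/nvme/nvme* >/dev/null 2>&1; then
    nvme_status="detected"
else
    nvme_status="not visible to the kernel"
fi
echo "NVMe controller: $nvme_status"

# PCI-level check (useful even if the nvme driver has not bound);
# NVMe controllers enumerate under the "Non-Volatile memory" class.
lspci -nn 2>/dev/null | grep -i 'non-volatile' || true
```

If the card shows up in `lspci` but not under `/sys/class/nvme`, the slot sees the card but the driver is not binding; if it appears in neither, the platform is not enumerating the device at all, which matches what we saw with the Z Turbo.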

Also, we ran 4k fragmented file IO tests and here are the results:

RAID 6 - SAS (8 x 600GB 15K RPM 12Gb/s SAS drives)
Run status group 0 (all jobs):
READ: bw=99.8KiB/s (102kB/s), 99.8KiB/s-99.8KiB/s (102kB/s-102kB/s), io=5992KiB (6136kB), run=60019-60019msec
WRITE: bw=105KiB/s (107kB/s), 105KiB/s-105KiB/s (107kB/s-107kB/s), io=6288KiB (6439kB), run=60019-60019msec

NVME - Non-Direct I/O
Run status group 0 (all jobs):
READ: bw=3182KiB/s (3259kB/s), 3182KiB/s-3182KiB/s (3259kB/s-3259kB/s), io=186MiB (196MB), run=60001-60001msec
WRITE: bw=3162KiB/s (3238kB/s), 3162KiB/s-3162KiB/s (3238kB/s-3238kB/s), io=185MiB (194MB), run=60001-60001msec

NVME - Direct I/O
Run status group 0 (all jobs):
READ: bw=3486KiB/s (3570kB/s), 3486KiB/s-3486KiB/s (3570kB/s-3570kB/s), io=204MiB (214MB), run=60001-60001msec
WRITE: bw=3472KiB/s (3555kB/s), 3472KiB/s-3472KiB/s (3555kB/s-3555kB/s), io=203MiB (213MB), run=60001-60001msec

Tests were run with fio using a worst-case read/write I/O load on 4k fragmented files.  The NVMe drive delivered roughly a 34x increase in I/O performance over the SAS RAID 6 array.
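For reference, the kind of fio invocation described above might look like the commented sketch below; the exact job parameters used in the original tests were not given, so the filename, size, and mix are assumptions. The live awk line just recomputes the speedup implied by the numbers posted:

```shell
# Hedged sketch of a 4k random read/write fio job with direct I/O
# (path, size, and rwmix are illustrative, not from the original test):
#   fio --name=4k-test --filename=/mnt/nvme/testfile --size=1G \
#       --bs=4k --rw=randrw --direct=1 --runtime=60 --time_based

# Speedup implied by the posted results (NVMe direct I/O vs RAID-6 SAS):
awk 'BEGIN { printf "read: %.1fx  write: %.1fx\n", 3486/99.8, 3472/105 }'
```

Read and write land near 35x and 33x respectively, consistent with the roughly 34x figure quoted above.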