ProLiant Servers (ML,DL,SL)

P440ar RAID 1+0 new SSD failed twice

 
MadPat
Occasional Visitor

P440ar RAID 1+0 new SSD failed twice

Hi folks

We have a ProLiant DL360 Gen9 with a P440ar in it.
Our RAID 1+0 failed recently because one SSD died, so we ordered a new one.
First problem: the disks we used before are not available anymore, so our seller got us a newer model:

Old one: Model: ATA LK0200GEYMR (200GB)

New one: Model: ATA VK000240GWSRQ (240GB)

We swapped the disks, and after a rebuild everything was fine until we switched the load back to this server; then the new disk failed. After a DOA request we got a replacement, and the same thing happened again.

What is it with this disk? When we reboot the server, the disk rebuilds and works until we switch our load balancer back, and then it dies again.

Does anyone know what is causing this?

Best regards,
MadPat

4 REPLIES
Suman_1978
HPE Pro

Re: P440ar RAID 1+0 new SSD failed twice

Hi,

I would recommend running the SSA SmartSSD Wear Gauge report and sharing it with HPE Support.

SmartSSD Wear Gauge
https://support.hpe.com/hpesc/public/docDisplay?docId=a00097137en_us
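In case it helps, here is a minimal sketch of generating that report from the OS with HPE's `ssaducli` diagnostics utility (part of the Smart Storage Administrator package). The output path is just an example, and the guard around the call is only there so the snippet degrades gracefully on a machine where the tool is not installed:

```shell
# Sketch: generate the SmartSSD Wear Gauge report with HPE's ssaducli
# (Smart Storage Administrator Diagnostics Utility CLI).
# The output file name below is an arbitrary example.
if command -v ssaducli >/dev/null 2>&1; then
    # -ssd limits the report to SmartSSD Wear Gauge data
    ssaducli -ssd -f /tmp/smartssd-wear-gauge.zip
else
    echo "ssaducli not installed"
fi
```

You can then attach the resulting zip file to your HPE Support case.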

Alternatively, you may update the firmware of the controller and the SSDs.

Thank You!
https://support.hpe.com/hpesc/public/home


I work for HPE


MadPat
Occasional Visitor

Re: P440ar RAID 1+0 new SSD failed twice

We already updated the firmware of all SSDs because of this.
What would the SmartSSD report tell me about a brand-new SSD?

 

Suman_1978
HPE Pro

Re: P440ar RAID 1+0 new SSD failed twice

Hi,

The SmartSSD Wear Gauge report helps identify issues with the storage system.

https://techlibrary.hpe.com/docs/synergy/shared/ts/GUID-CD9B90EE-E5C1-4948-A85F-CEA5A03BF960.html

Thank You!
https://support.hpe.com/hpesc/public/home


I work for HPE


MadPat
Occasional Visitor

Re: P440ar RAID 1+0 new SSD failed twice

Okay, so we just updated the firmware of every device involved to the latest version. The Wear Gauge software and the diagnostic tool did not report anything unusual, but still, as soon as we switch load to that server, the disk fails within 2-3 days.