StoreVirtual Storage

Lefthand 4300 - storage system Inoperable after disk crash

 
thrasher
New Member

Lefthand 4300 - storage system Inoperable after disk crash

Hello.

 

We have a LeftHand P4300 G2 7.2 TB SAS Starter SAN Solution, configured as a cluster with two storage nodes and a FOM. Both nodes are configured with RAID 6. After one disk crashed, the storage node where that disk was mounted had its RAID degraded and went into Inoperable mode. After the disk was replaced and the RAID rebuilt, the node remained in Inoperable mode until we rebooted it.

 

Now I have some questions; please answer if you can:

 

1) Why was the RAID degraded, if RAID 6 should remain available with two failed disks?

2) Why didn't the storage node return to Normal state after the rebuild?

3) If in the future two disks fail on different storage nodes, will the whole SAN stop working? Where is the reliability and data safety in this solution, then?

4) Can I configure RAID with spare disks to avoid the same problem in the future? (I could not find this option in the management console.)

 

Thanks in advance.

Sorry for my English.

Best regards,

Alex

3 REPLIES
Welton
New Member

Re: Lefthand 4300 - storage system Inoperable after disk crash

Hello,

Did you ever solve your issue? We have the same problem with our units.

oikjn
Honored Contributor

Re: Lefthand 4300 - storage system Inoperable after disk crash

well, "degraded" is correct for losing one disk.  You are degraded any time you don't have double pairity...  In theory you could lose one more disk and remain "degraded", but if you lost a 3rd disk, the status woudl be "failed". 

 

Rebuilding a RAID 6 group takes a long time, so are you sure it actually completed?
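
To make that concrete, here is a minimal sketch (illustrative only, not LeftHand/StoreVirtual code) of the RAID 6 health states described above:

```python
# Illustrative model of RAID 6 array health as a function of failed drives.
# Not LeftHand/StoreVirtual code -- just the state logic described above:
# RAID 6 tolerates up to two simultaneous drive failures.

def raid6_status(failed_drives: int) -> str:
    """Return the array status for a RAID 6 set given the number of failed drives."""
    if failed_drives == 0:
        return "normal"      # full double parity available
    if failed_drives <= 2:
        return "degraded"    # data still readable, parity protection reduced or gone
    return "failed"          # more failures than RAID 6 can tolerate

for n in range(4):
    print(n, "failed drive(s):", raid6_status(n))
```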

Ravi_K
HPE Pro

Re: Lefthand 4300 - storage system Inoperable after disk crash

Hi Alex

 

With reference to your queries below:

 

1) Why was the RAID degraded, if RAID 6 should remain available with two failed disks?

=> Already answered above: "degraded" is correct for losing one disk. You are degraded any time you don't have double parity. In theory you could lose one more disk and remain "degraded", but if you lost a third disk, the status would be "failed".

Rebuilding a RAID 6 group takes a long time, so are you sure it actually completed?

 

2) Why didn't the storage node return to Normal state after the rebuild?

=> This needs more troubleshooting to understand why it happened if the rebuild did complete. In some cases, even though only one disk is reported faulty, other disks may have many medium errors and be close to failure without being reported in the CMC. The side effect is that when only the visibly faulty drive is replaced, the rebuild does not complete properly, and there is a good chance the newly replaced drive will be reported bad immediately after the rebuild. It is therefore always better to engage HP support for detailed log analysis, so that all drives with errors can be replaced at the same time and the problem avoided. (Replacing multiple drives requires the node to be placed in repair mode and the RAID reconfigured; this is feasible only if the volumes are Network RAID protected. During this activity, Network RAID protected volumes remain available, but unprotected, while the node is in repair mode.) The issue of drives having errors but not being reported is currently being looked at by engineering and will be fixed in LeftHand OS version 11.0.
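
As a rough illustration of that precondition (a hypothetical sketch, not the CMC's or LeftHand OS's actual logic; the names and structures here are made up), a node should only be placed in repair mode when every volume on the cluster carries a protected Network RAID level:

```python
# Hypothetical sketch of the rule described above: replacing multiple drives
# requires the node to enter repair mode, which only keeps data available when
# all volumes on the cluster are Network RAID protected (e.g. Network RAID-10).
# These names are illustrative, not a real LeftHand API.

from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    network_raid_level: int  # 0 = Network RAID-0 (unprotected), 10 = Network RAID-10, ...

def safe_to_enter_repair_mode(volumes: list[Volume]) -> bool:
    """Repair mode keeps all volumes online only if every volume is protected."""
    return all(v.network_raid_level > 0 for v in volumes)

volumes = [Volume("vmstore01", 10), Volume("scratch", 0)]
if safe_to_enter_repair_mode(volumes):
    print("OK to place the node in repair mode and reconfigure RAID.")
else:
    print("Unprotected volumes present -- they would go offline during repair.")
```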

 

3) If in the future two disks fail on different storage nodes, will the whole SAN stop working? Where is the reliability and data safety in this solution, then?

=> As already explained, RAID Degraded does not mean offline; the array keeps working, just in degraded mode because of the single drive failure.

 

4) Can I configure RAID with spare disks to avoid the same problem in the future? (I could not find this option in the management console.)

=> This option is not available on P4000 G2 systems; it was introduced with selected G3 models.

 

I hope this helps.

 

Regards

Ravi
