StoreVirtual 4730 RAID OFF

 
Frequent Visitor

StoreVirtual 4730 RAID OFF

Yesterday morning we lost two 4730 nodes from a cluster of four. One of them just stopped working; after a reboot it came back. The second went down due to two drive failures that happened almost at the same time, and we didn't find out about this until several hours after the failure.

All the LUNs show a replication status of offline and critical, and the node with the failed drives has its RAID off. I can't remove it from the group to reconfigure the RAID because a LUN is replicating or being moved.

We don't have support anymore; we used to, but not any longer.

Any assistance will be highly appreciated.

2 REPLIES
HPE Pro

Re: StoreVirtual 4730 RAID OFF

@VictorHugo Ortiz 

We would need more information to understand the configuration, such as the RAID configured on the Node and the Volumes configuration. We would also need to know the Storage System status: is it just the RAID which is Off, or is the Storage System Offline as well?
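If it helps with collecting that, below is a rough Python sketch that shells out to the StoreVirtual command-line interface (CLiQ) to dump cluster and volume status for the management group. The command names (getClusterInfo, getVolumeInfo) and the login/userName/passWord parameters are written from memory, so please verify them against the CLiQ reference for your LeftHand OS version, and replace the placeholder IP and credentials with your own.

# Rough sketch only: assumes the CLiQ tool ("cliq") is installed and on PATH.
# Command names and parameters are from memory; check the CLiQ documentation
# for your LeftHand OS version before running.
import subprocess

GROUP_IP = "10.0.0.10"   # management group / virtual IP (placeholder)
USER = "admin"           # management group user (placeholder)
PASSWORD = "password"    # placeholder

def cliq(command):
    # Run one CLiQ command against the management group and return its text output.
    result = subprocess.run(
        ["cliq", command,
         "login=" + GROUP_IP,
         "userName=" + USER,
         "passWord=" + PASSWORD],
        capture_output=True, text=True)
    return result.stdout

# Cluster and volume status -- roughly the configuration details asked for above.
print(cliq("getClusterInfo"))
print(cliq("getVolumeInfo"))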

We can reboot the Node and check the Logical Drive status during POST; it might have been disabled due to the two drive failures.


Regards,
Rachna K
I am an HPE Employee


Frequent Visitor

Re: StoreVirtual 4730 RAID OFF

Here is some additional information with screenshots. If more information is required from a specific log or node, please let me know.

Last Friday two nodes, ESC-SV-28 and ESC-SV-25, went down almost at the same time. Node ESC-SV-28 lost two drives at that point. By the time I found out the two nodes had failed, about 10 hours had passed, and neither the nodes nor the cluster were recovering. I replaced the two drives and node ESC-SV-28 began to recover. While I was looking through the logs, I found that a third drive was about to fail, but the GUI wasn't showing anything yet.

2 nodes.png

 

I couldn't find anything in the logs that would tell me how long the recovery would take, or whether it was going to recover at all. The nodes stayed in this state for about 36 hours, until I removed ESC-SV-28 from the cluster. When I then tried to remove it from the management group, I was unable to because it was migrating data, and it has been migrating data since last Tuesday.

Fig-3.jpg

Fig-5.jpg

On Wednesday or Thursday the state of the LUNs changed to the following:

Fig-3.jpg

The LUNs are RAID-10 (2-way mirror).

For some reason, although all the nodes are licensed, ESC-SV-28 shows as unlicensed.

Fig-4.jpg

Node ESC-SV-28 has shown migration in progress since Wednesday night / Thursday morning, up until today, 8/8/2020.