StoreVirtual Storage

HP VSA - Important data

 
kabhah
New Member


Hello

We have a Dell R730 running VMware ESXi 6 with six 1.2 TB Dell disks in RAID 5 (Volume A).

On this RAID 5 datastore I installed the VSA, which presents a volume accessible over iSCSI (Volume B).

 

It worked fine, but one day the hosts suddenly stopped seeing the iSCSI volume presented by the VSA.

However, I can still manage the volume in the HP management console, and it shows all disks as healthy and everything as normal.

 

Recently I got a warning status showing "unrecoverable I/O". The volume then appeared on one ESXi host but not on the other, and after about two minutes it stopped appearing on both hosts.

I rebooted the VSA management appliance; it started up again and reported healthy status with the same usage data, but the ESXi hosts still did not recognize the iSCSI volume.
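For reference, a minimal sketch of the ESXi-side checks that apply in this situation (the IP address is a placeholder for the VSA's iSCSI portal; run these in an ESXi shell or over SSH on each host):

```shell
# Confirm the host can still reach the VSA's iSCSI portal (placeholder IP).
vmkping 10.0.0.50

# Rescan all storage adapters so ESXi re-probes its iSCSI targets.
esxcli storage core adapter rescan --all

# List active iSCSI sessions; the VSA target should appear here
# if the host logged in successfully.
esxcli iscsi session list

# List detected storage devices; the VSA LUN should show up here
# if the target is still presenting it.
esxcli storage core device list
```

If the sessions list is empty while the portal still responds to ping, the problem is on the target (VSA) side rather than the network.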

 

The disks in the server are healthy and normal.

 

I can access all the data on Volume A.

The VSA virtual machine also boots up normally and quickly, and I can access it.

 

I contacted HP support, and they made me a full snapshot (copy) of the VSA on another set of disks.

When I tried to mount the new volume, the mount failed.

 

Importantly, I have not touched the volume or written anything to it since this happened.

 

I have very important data on it.

 

What can I do?

Is there a way to recover the data, or at least one VMDK?

1 REPLY
oikjn
Honored Contributor

Re: HP VSA - Important data

Short of support pulling off some miracle, the only alternative is a very expensive data recovery service, which might be able to read the HDDs and recover the data from them.

 

Unfortunately this is a very hard lesson, but one everybody should pay attention to. If you have important data, you MUST store it in Network RAID 10 so it spans TWO nodes. A single node is a point of failure, and you likely just experienced that failure. It's rare, but it can happen. If the system had two nodes with NR10 set up, this would have been a non-issue: you could replace the disks, reinstall a single node, and regain data redundancy, all without losing any production availability.
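For anyone setting this up fresh, a rough sketch of creating a Network RAID 10 volume with the StoreVirtual CLiQ command line (the management-group IP, credentials, cluster and volume names are all placeholders; `replication=2` is the setting that corresponds to Network RAID 10 and requires at least two nodes in the cluster):

```shell
# Create a volume protected by Network RAID 10 (two mirrored copies
# across nodes). All names and the IP are placeholders for your own
# environment.
cliq createVolume volumeName=prod-vol1 clusterName=cluster1 \
     size=500GB replication=2 \
     login=10.0.0.10 userName=admin passWord=secret
```

With `replication=2`, the loss of a single node leaves a full copy of the data on the surviving node, which is exactly the protection missing in the single-node setup described above.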