ProLiant Servers (ML,DL,SL)
Logical drive fails in RAID 1 array when physical drive fails or is removed

SOLVED
JesperCederberg
Occasional Contributor

Logical drive fails in RAID 1 array when physical drive fails or is removed

Hi

I'm having a problem with a server that has its C: drive on a RAID 1 array controlled by a Smart Array 410 controller. It started with the server suddenly dying and being unable to start up again due to a "No boot device.." (or something like that) error. Via the iLO card I could see that one of the disks had failed, but that should be no problem on a RAID 1 array.

I powered down the server and booted it from a SmartStart CD, from where I could start the Array Configuration Utility. There I could see that the logical drive in the array had been disabled, but now both physical disks were reporting OK status. I enabled the logical drive again, restarted the server, and it came up just fine. From the array diagnostics utility all seemed fine; both the logical drive and the two physical drives were reporting no errors.

The plan now was to replace the physical disk that had originally failed, but as soon as I pulled out the disk the OS lost the connection to the logical drive again, and soon after Windows crashed.

I powered down the server, reinserted the physical disk that had originally reported a failure, re-enabled the logical drive, and rebooted the server; it came up just fine again.

But now what?

Everywhere I look, the logical drive and the two physical drives seem OK, but if one of the physical disks fails or is pulled out, the logical drive fails. That makes it impossible for me to replace the physical disks, and it leaves the logical drive vulnerable if a physical drive fails again.

Any ideas of what to do to resolve this?

The server in question is an HP DL380 G7 with two 146 GB 15K SAS disks installed.

Thank you in advance for any input.

/Jesper

5 REPLIES
JesperCederberg
Occasional Contributor
Solution

Re: Logical drive fails in RAID 1 array when physical drive fails or is removed

I found the problem. For some weird reason the server was set up with RAID 0 (striping) instead of RAID 1 (mirroring). No wonder the system crashed when one of the disks failed ;-)
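For anyone hitting the same thing, the configured RAID level can be checked from the OS with the hpacucli tool. A quick sketch, assuming hpacucli is installed and the controller sits in slot 0 (adjust the slot and drive numbers to your system):

```shell
# List all logical drives on the controller in slot 0,
# including their fault tolerance (RAID) level and status.
hpacucli ctrl slot=0 ld all show

# Show full details for logical drive 1, including a line
# such as "Fault Tolerance: RAID 0" or "Fault Tolerance: RAID 1".
hpacucli ctrl slot=0 ld 1 show
```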

 

Now I face a completely different problem: how do I migrate a RAID 0 drive to a RAID 1 drive?

 

Would it be possible to purchase two 300 GB disks, configure them in a RAID 1 array, and somehow transfer the boot/OS drive to the new logical drive using an offline tool of some kind?

BTW, the controller is a P410i, not a P410 as I wrote in my first post.

 

Best regards,

Jesper

Jimmy Vance
HPE Pro

Re: Logical drive fails in RAID 1 array when physical drive fails or is removed

This thread explains how to migrate from RAID 0 to RAID 1. Once you have migrated to RAID 1 you can then work on swapping in the 300 GB drives in place of the 146 GB drives. There is a separate procedure for that.

 

http://community.hpe.com/t5/System-Administration/Migrate-RAID0-to-RAID1-online-with-hpacucli/td-p/4756838
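In outline, the online migration discussed in that thread can be sketched with hpacucli. This is a sketch only, not a verified procedure: it assumes the controller is in slot 0, the logical drive is number 1, and the new unassigned drive sits in bay 2I:1:2 — substitute your own slot, drive number, and bay:

```shell
# Add the unassigned physical drive to the existing array (here, array A).
hpacucli ctrl slot=0 array A add drives=2I:1:2

# Change the logical drive's fault tolerance from RAID 0 to RAID 1.
# The controller transforms the data in the background while the OS runs.
hpacucli ctrl slot=0 ld 1 modify raid=1

# Re-run this to watch the transformation progress until it completes.
hpacucli ctrl slot=0 ld 1 show
```

A battery- or flash-backed write cache is generally required on the P410i for this kind of online transformation, so check the controller's cache status first.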

 

 




__________________________________________________
No support by private messages. Please ask the forum!      I work for HPE

If you feel this was helpful please click the KUDOS! thumb below!   
JesperCederberg
Occasional Contributor

Re: Logical drive fails in RAID 1 array when physical drive fails or is removed

While waiting for input I built a setup similar to the one in question using an old HP server, a Smart Array controller, and two 36 GB SAS disks in a RAID 0 configuration. I installed the OS on the 72 GB logical drive, configured it, and powered down the server. Then I connected a USB disk and booted from a CloneZilla CD. I created an image of the existing logical drive on the USB disk and powered down the server again. Then I removed the two 36 GB disks, inserted two 72 GB SAS disks, powered up the server, and created a RAID 1 array using the two 72 GB disks. Then I booted from the CloneZilla CD again, restored the image to my new 72 GB logical drive, removed the CD and USB disk, and restarted the server.
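Clonezilla itself is menu-driven, but the image-and-restore idea above can be sketched with plain dd from any live CD. A sketch only, under the assumption that the logical drive shows up as /dev/sda and the USB disk is mounted at /mnt/usb — verify your own device names first, since dd to the wrong device destroys data:

```shell
# Step 1: from the live CD, image the old logical drive to the USB disk.
dd if=/dev/sda of=/mnt/usb/logical-drive.img bs=4M status=progress
sync

# Step 2: after swapping the disks and creating the new RAID 1 logical
# drive, boot the live CD again and restore the image onto it.
dd if=/mnt/usb/logical-drive.img of=/dev/sda bs=4M status=progress
sync
```

Clonezilla's partclone-based images are usually much smaller and faster than this, because they copy only the used blocks rather than the whole drive.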


It booted without any problems :-)


To make sure I could always go back to my original setup, I powered down the server, removed the two 72 GB drives, and inserted the two 36 GB drives before powering up the server again. As expected, it booted with no problems.

But I will look into the thread you provided and maybe do it that way instead.

Thank you for your input.

Best regards,
Jesper

Jimmy Vance
HPE Pro

Re: Logical drive fails in RAID 1 array when physical drive fails or is removed

If your Clonezilla method is working, I'd probably go that route, as it will be a lot quicker and you won't have to mess with the partition tables in Windows to reclaim the new space.




JesperCederberg
Occasional Contributor

Re: Logical drive fails in RAID 1 array when physical drive fails or is removed

The CloneZilla method worked just fine. That way I also had the existing disks out of the server, so I always had a working configuration to revert to if something went wrong.

 

I made the image, removed the existing drives, then inserted the new drives and configured them as one logical drive in a RAID 1 array. I restored the CloneZilla image to the logical drive and restarted the server, and it started up just fine. I then tested the RAID by pulling one of the disks. Everything continued to work, but with a degraded logical disk, as expected. After reinserting the disk the RAID rebuilt as it should.
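For anyone wanting to watch that rebuild from the OS rather than just waiting, hpacucli can report it. A sketch, again assuming the controller is in slot 0:

```shell
# Physical drive status: a rebuilding mirror member shows "Rebuilding".
hpacucli ctrl slot=0 pd all show

# Logical drive status: shows "Recovering" with a percent complete
# while the mirror is rebuilt, then "OK" when it has finished.
hpacucli ctrl slot=0 ld all show status
```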

 

I'm happy now ;-)

 

Best regards,

Jesper