Logical drive fails in RAID 1 array when physical drive fails or is removed
06-16-2016 12:59 PM
Hi
I'm having a problem with a server whose C: drive is on a RAID 1 array controlled by a Smart Array P410 controller. It started with the server suddenly dying and then failing to start up again with a "No boot device..." (or something like that) error. Via the iLO card I could see that one of the disks had failed, but that should be no problem on a RAID 1 array.
I powered down the server and booted it from a SmartStart CD, from which I could start the Array Configuration Utility. There I could see that the logical drive in the array had been disabled, but both physical disks were now reporting OK status. I re-enabled the logical drive, restarted the server, and it came up just fine. In the array diagnostics utility everything also looked fine; both the logical drive and the two physical drives were reporting no errors.
The plan now was to replace the physical disk that had originally failed, but as soon as I pulled the disk out, the OS lost the connection to the logical drive again and soon after Windows crashed.
I powered down the server, reinserted the physical disk that had originally reported a failure, re-enabled the logical drive, rebooted the server, and it came up just fine again.
But now what?
Everywhere I look, the logical drive and the two physical drives seem OK, but if one of the physical disks fails or is pulled out, the logical drive fails. That makes it impossible for me to replace the physical disks, and it leaves the logical drive vulnerable if a physical drive fails again.
Any ideas of what to do to resolve this?
The server in question is an HP DL380 G7 with two 146 GB 15K SAS disks installed.
Thank you in advance for any input.
/Jesper
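For readers troubleshooting something similar: while the OS is up, the controller and drive states can also be checked from the command line with HP's hpacucli tool. A minimal sketch, assuming hpacucli is installed and the embedded controller is in slot 0 (the slot number may differ on your system):

    # Show the full controller configuration (arrays, logical and physical drives)
    hpacucli ctrl all show config

    # Status of all logical drives on the embedded controller
    hpacucli ctrl slot=0 ld all show status

    # Status of all physical drives, including failed or rebuilding states
    hpacucli ctrl slot=0 pd all show status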
06-19-2016 10:59 AM - edited 06-19-2016 11:00 AM
Solution: I found the problem. For some weird reason the server was set up with RAID 0 (striping) instead of RAID 1 (mirroring). No wonder the system crashed when one of the disks failed ;-)
Now I face a completely different problem: how do I migrate a RAID 0 drive to a RAID 1 drive?
Would it be possible to purchase two 300 GB disks, configure them in a RAID 1 array, and somehow transfer the boot/OS drive to the new logical drive using an offline tool of some kind?
BTW, the controller is a P410i, not a P410 as I wrote in my first post.
Best regards,
Jesper
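A misconfiguration like this is also easy to confirm from the CLI; a small sketch with hpacucli (logical drive number 1 and slot 0 are assumptions):

    # Show details for logical drive 1; the "Fault Tolerance" line reveals the RAID level
    hpacucli ctrl slot=0 ld 1 show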
06-20-2016 04:59 AM
Re: Logical drive fails in RAID 1 array when physical drive fails or is removed
This thread explains how to migrate from RAID 0 to RAID 1. Once you have migrated to RAID 1, you can then work on swapping in the 300 GB drives in place of the 146 GB drives. There is a separate procedure for that.
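For reference, RAID level migration on Smart Array controllers can also be driven from hpacucli. This is only a sketch, not the exact procedure from the linked thread: online migration on a P410i generally requires a cache module, and the array needs enough free capacity for the target level, so a full two-disk RAID 0 usually means adding drives first.

    # Migrate logical drive 1 to RAID 1 (slot and logical drive numbers are assumptions)
    hpacucli ctrl slot=0 ld 1 modify raid=1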
06-20-2016 05:43 AM
Re: Logical drive fails in RAID 1 array when physical drive fails or is removed
While waiting for input I built a setup similar to the one in question, using an old HP server, a Smart Array controller, and two 36 GB SAS disks in a RAID 0 configuration. I installed the OS on the 72 GB logical drive, configured it, and powered down the server. Then I connected a USB disk and booted from a Clonezilla CD. I created an image of the existing logical drive on the USB disk and powered down the server again. Then I removed the two 36 GB disks, inserted two 72 GB SAS disks, powered up the server, and created a RAID 1 array using the two 72 GB disks. Finally I booted from the Clonezilla CD again, restored the image to my new 72 GB logical drive, removed the CD and USB disk, and restarted the server.
It booted without any problems :-)
To make sure I could always go back to my original setup, I powered down the server, removed the two 72 GB drives, and inserted the two 36 GB drives before powering up the server again. As expected, it booted with no problems.
But I will look into the thread you provided and maybe do it that way instead.
Thank you for your input.
Best regards,
Jesper
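For anyone who wants to script this, Clonezilla's ocs-sr wrapper can run the save and restore non-interactively. A rough sketch of roughly equivalent commands, with some assumptions: the image name is made up, and the Smart Array logical drive is assumed to appear as sda (it may instead be cciss/c0d0 with the older cciss driver):

    # Save the current logical drive to an image in the mounted image repository
    ocs-sr -q2 -j2 -z1p -p true savedisk raid0-backup sda

    # After rebuilding the array as RAID 1, restore the image to the new logical drive
    ocs-sr -g auto -e1 auto -e2 -r -j2 -p true restoredisk raid0-backup sda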
06-20-2016 06:57 AM
Re: Logical drive fails in RAID 1 array when physical drive fails or is removed
If your Clonezilla method is working, I'd probably go that route, as it will be a lot quicker and you won't have to mess with the partition tables in Windows to reclaim the new space.
07-01-2016 04:54 AM
Re: Logical drive fails in RAID 1 array when physical drive fails or is removed
The Clonezilla method worked just fine. That way I also had the existing disks out of the server, so I always had a working configuration to revert to if something went wrong.
I made the image, removed the existing drives, then inserted the new drives and configured them as one logical drive in a RAID 1 array. I restored the Clonezilla image to the logical drive and restarted the server. The server started up just fine. I then tested the RAID by pulling one of the disks. Everything continued to work, but with a degraded logical disk, as expected. After reinserting the disk, the RAID rebuilt as it should.
I'm happy now ;-)
Best regards,
Jesper
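If you want to watch a rebuild like that from inside the OS rather than from the iLO, the drive states can be polled with hpacucli (again assuming the embedded controller in slot 0):

    # During a rebuild, physical drives report "Rebuilding" and the logical drive "Recovering"
    hpacucli ctrl slot=0 pd all show status
    hpacucli ctrl slot=0 ld all show status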