02-20-2018 12:05 AM
HP DL180 G7 raid 5 failed
Hello,
I have 11 x 3TB disks in RAID 5 plus 1 spare. Yesterday one disk failed and the RAID started rebuilding. A few hours later our DC support detected the failed disk and replaced it, but during the work they pulled the wrong disk and the RAID changed state to Failed. Of course, the DC team put the healthy disk back, but the RAID is still failed.
So now I have the following situation:
logicaldrive 2 (27.3 TB, RAID 5, Failed)
physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 3 TB, OK)
physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 3 TB, OK)
physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SATA, 3 TB, OK)
physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SATA, 3 TB, OK)
physicaldrive 1I:1:7 (port 1I:box 1:bay 7, SATA, 3 TB, OK)
physicaldrive 1I:1:8 (port 1I:box 1:bay 8, SATA, 3 TB, OK)
physicaldrive 1I:1:10 (port 1I:box 1:bay 10, SATA, 3 TB, OK)
physicaldrive 1I:1:11 (port 1I:box 1:bay 11, SATA, 3 TB, OK)
physicaldrive 1I:1:12 (port 1I:box 1:bay 12, SATA, 3 TB, OK)
physicaldrive 1I:1:13 (port 1I:box 1:bay 13, SATA, 0 MB, Failed)
physicaldrive 1I:1:14 (port 1I:box 1:bay 14, SATA, 3 TB, OK)
physicaldrive 1I:1:9 (port 1I:box 1:bay 9, SATA, 3 TB, OK, spare)
How can I return the RAID to an OK state?
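For reference, the listing above matches the output of the Smart Array CLI; a minimal sketch of the commands that produce it, assuming the same hpssacli tool and slot 1 controller used later in this thread:

hpssacli ctrl all show status             # controller and cache status
hpssacli ctrl slot=1 show config          # array, logical drive and physical drive summary (as above)
hpssacli ctrl slot=1 pd all show status   # per-drive state; bay 13 reports Failed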
02-20-2018 01:08 AM - edited 02-20-2018 01:10 AM
Re: HP DL180 G7 raid 5 failed
Hello,
If the wrong drive was removed before the rebuild to the spare had finished, the data is most probably lost. If the rebuild to the spare drive finished successfully and the wrong drive was removed only afterwards, the RAID should still be healthy, because the failure tolerance at the time of removal was "1".
Try booting the server with HPE ACU or HPE SSA and check whether the "re-enable logical drive" option is available. If it is not, you will most probably need to reconfigure the whole array.
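The same re-enable can also be attempted from the command line; a minimal sketch, assuming the hpssacli tool and the slot 1 controller with logical drive 2 shown in your output:

hpssacli ctrl slot=1 show config detail    # confirm the array and logical drive status first
hpssacli ctrl slot=1 ld 2 modify reenable  # attempt to re-enable the failed logical drive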
I am an HPE Employee
02-20-2018 07:27 AM
Re: HP DL180 G7 raid 5 failed
I tried to re-enable it, but it didn't help:
hpssacli ctrl slot=1 ld 2 modify reenable
Warning: Any previously existing data on the logical drive may not be valid or
recoverable. Continue? (y/n) y
Error: This operation is not supported with the current configuration. Use the
"show" command on devices to show additional details about the
configuration.
Reason: Array status not ok
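For reference, the state behind that "Array status not ok" message can be inspected like this (a minimal sketch, assuming the same slot 1 controller and the failed drive in bay 13):

hpssacli ctrl slot=1 show config detail     # array, logical drive and physical drive details
hpssacli ctrl slot=1 pd 1I:1:13 show detail # details on the drive reporting Failed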