02-10-2016 03:46 PM
HP Array P410, HDD changed, but still predict to fail soon
On an HP ProLiant DL G6, one disk of a RAID 1 array on a P410 controller failed. The failed drive was an HP EG0300FBDBR; I replaced it with a compatible HP model, an EG0300FAWHV (both 300 GB, 10K SAS).
However, the HP Array Configuration Utility still reports the new EG0300FAWHV (300 GB 2-Port SAS Drive at Port 1I : Box 1 : Bay 0) as predicted to fail soon.
The new disk's LED is blinking green, which the documentation describes as "The drive is rebuilding, erasing, or it is part of an array that is undergoing capacity expansion or stripe migration."
But two days have passed and the status has not changed.
In the RIS Event Log, the most recent entries are:

- Event 123 2016-02-08 11:56:54 Hot Plug Physical Drive Change: Removed. Physical drive number: 0x09. Configured drive flag: 1. Spare drive flag: 0. Big drive: 0x00000009. Enclosure Box: 00. Bay: 00
- Event 124 2016-02-08 12:16:57 Hot Plug Physical Drive Change: Inserted. Physical drive number: 0x09. Configured drive flag: 1. Spare drive flag: 0. Big drive: 0x00000009. Enclosure Box: 00. Bay: 00
- Event 125 2016-02-08 12:16:57 Logical Drive Status: State change, logical drive 0x0000. Previous state (0x03): Logical drive is degraded. New state (0x04): Logical drive is ready for recovery operation. Spare status (0x00): No spare configured
- Event 126 2016-02-08 12:16:57 Logical Drive Status: State change, logical drive 0x0000. Previous state (0x04): Logical drive is ready for recovery operation. New state (0x05): Logical drive is currently recovering. Spare status (0x00): No spare configured
- Event 127 2016-02-08 12:51:51 Logical Drive Status: State change, logical drive 0x0000. Previous state (0x05): Logical drive is currently recovering. New state (0x00): Logical drive OK. Spare status (0x00): No spare configured
- Event 128 2016-02-09 03:23:22 Logical Drive Surface Analysis: Surface Analysis pass information. Block count: 00000000. Drive No: 00. Starting Address: 00000848:00000000
I have attached the ADU report.
Does this mean the new disk is still recovering? If so, why is its status "predicted to fail", and why is the rebuild taking so long? And why didn't the HP utility mark it as rebuilding?
ADU report: https://www.dropbox.com/s/70ucdsiafzdwvfr/ADUReport.zip?dl=0
HPSSACLI report: dropbox.com/s/jml43anepzhqyq5/HPSSACLI.txt?dl=0
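For anyone following along, the logical-drive and per-disk states described above can also be checked from the command line with `hpssacli` (the same tool that produced the attached report). This is a sketch: the controller slot number is an assumption, so list your controllers first and adjust it.

```shell
# List all Smart Array controllers and their slot numbers first.
hpssacli ctrl all show status

# Logical drive state (OK / Recovering / Interim Recovery Mode),
# assuming the P410 sits in slot 0 - adjust to your output above.
hpssacli ctrl slot=0 ld all show status

# Per-disk state; a drive flagged "Predictive Failure" shows up here
# even when the logical drive itself reports OK.
hpssacli ctrl slot=0 pd all show status

# Full detail for the replaced bay, addressed as Port:Box:Bay (1I:1:0).
hpssacli ctrl slot=0 pd 1I:1:0 show detail
```

If the physical drive still shows "Predictive Failure" after the rebuild completes (Event 127 says the logical drive is already OK), the warning refers to the drive's own SMART-style health flag rather than the rebuild, and the replacement drive itself may be reporting a problem.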