ProLiant Servers - Netservers
07-06-2006 08:35 AM
Degraded SATA Raid1 problem
Hi there,
I encountered the following issue that I want to share with you.
ML150 with a 4-channel SATA controller.
2 x 160 GB SATA HDDs configured in RAID 1.
W2K3 SBS.
Suddenly the system won't boot, reporting system hive corruption.
The W2K3 recovery console does not work.
The W2K Professional recovery console starts. I replaced the system hive with the one in the repair folder; the system boots but never reaches logon (it hangs at "Applying network settings...").
I restarted the recovery console and ran chkdsk. It runs very slowly but completes with no errors.
At this point I opened the Adaptec BIOS (RAID status is "optimal") and ran surface analysis on the disks. Both HDDs fail surface analysis!
I broke the RAID and tried to boot from each disk alone, but neither works.
I replaced the controller and the disks, restored from backup, and went to sleep thinking about two failed disks and a RAID 1 status of "optimal"...
I then installed a brand new ML110 with 2 new SATA HDDs in RAID 1 and another W2K3 SBS.
Then I removed a good HDD, bringing the RAID to a degraded state, and plugged in one of the two failed HDDs from the ML150 — and the RAID rebuilt with no problem! I then ran surface analysis on the previously failed HDD, and it reports no errors!
I broke the RAID on the ML110 again, inserted the other failed HDD, and ran surface analysis. It fails. I started a rebuild onto the failed HDD, and it rebuilds! Then I checked this HDD again, and surface analysis completes with no errors!
So what am I missing?
Why did the controller in the ML150 not report any problem on the two disks? And why are the two disks now working in the ML110?
Any comments or suggestions would be very much appreciated.