- failed SAS drive in an rx6600 11.23 system
02-13-2013 09:59 AM
Please take a look at the attached file before plunging in...
There are four physical drives in this situation. We think that two drives (bays/slots 5 and 6) are hardware-RAIDed together and presented to the system as c3t0d0. Similarly, 7 and 8 are presented as c3t1d0.
At the software level c3t0d0 and c3t1d0 are mirrored to each other. (Talk about belts and suspenders!)
We know that drive 8 is having problems (its yellow light is on).
Questions:
- Do we do anything other than simply replace this drive and let the system handle the rebuild? Will LVM remain happy?
- Is there a way to figure out what's really RAIDed together?
02-13-2013 11:33 AM - edited 02-13-2013 11:36 AM
vg00 is NOT hardware mirrored!
The drives in slots 7 and 8 are a hardware mirror; the drives in slots 5 and 6 are not.
The drive in slot 8 has failed - simply pull it, replace it, and check the status.
Here you can see that drives 7 and 8 are mirrored:

---------- LOGICAL DRIVE 5 ----------
Raid Level              : RAID 1
Volume sas address      : 0x68564241594c314
Device Special File     : /dev/rdsk/c3t2d0
Raid State              : DEGRADED
Raid Status Flag        : ENABLED
Raid Size               : 69878
Rebuild Rate            : 0.00 %
Rebuild Progress        : 100.00 %
Participating Physical Drive(s) :
SAS Address          Enc  Bay  Size(MB)  Type       State
0x5000c50005966ae9   1    7    70007     PRIMARY    ONLINE
0x5000c5000596b269   1    8    70007     SECONDARY  FAILED
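A quick way to answer "what's really RAIDed together" is to read the bay/state columns out of a saved controller report. A minimal sketch with awk follows; the `sasmgr get_info -D /dev/sasd0 -q raid` command form and device file in the comment are assumptions from memory (check sasmgr(1M) on the box; `saconfig` output is formatted slightly differently):

```shell
# Recreate a saved RAID report. Normally you would redirect the live
# controller report here instead (e.g. from sasmgr -- command form assumed):
cat > /tmp/raid.out <<'EOF'
---------- LOGICAL DRIVE 5 ----------
Raid Level       : RAID 1
Device Special File : /dev/rdsk/c3t2d0
Participating Physical Drive(s) :
SAS Address        Enc Bay Size(MB) Type      State
0x5000c50005966ae9 1   7   70007    PRIMARY   ONLINE
0x5000c5000596b269 1   8   70007    SECONDARY FAILED
EOF

# Print which bay belongs to which logical drive, with its role and state:
awk '
  /LOGICAL DRIVE/ { ld = $4 }                       # remember the LD number
  /^0x/           { printf "LD %s: bay %s %s %s\n", ld, $3, $5, $6 }
' /tmp/raid.out
```

On the posted report this prints one line per physical drive, showing that bays 7 and 8 both belong to logical drive 5 (the c3t2d0 IR volume), with bay 8 FAILED.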
Hope this helps!
Regards
Torsten.
__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.
__________________________________________________
No support by private messages. Please ask the forum!
If you feel this was helpful please click the KUDOS! thumb below!

02-13-2013 11:37 AM
Re: failed SAS drive in an rx6600 11.23 system
Thanks!
I so hate to ask about RTFM... but is there something that discusses this?
02-13-2013 11:49 AM
Re: failed SAS drive in an rx6600 11.23 system
Hope this helps!
Regards
Torsten.

02-13-2013 11:51 AM
Re: failed SAS drive in an rx6600 11.23 system
How the SAS drives are configured - why 7 and 8 are RAIDed as opposed to 5 and 6...
02-13-2013 11:56 AM
Re: failed SAS drive in an rx6600 11.23 system
Hmmm... maybe I'm not looking at this correctly?
7 and 8 are mirrored and presented as the IR volume (c3t2d0)? And maybe 5 is c3t0d0 and 6 is c3t1d0?
(What a strange configuration...)
02-13-2013 10:44 PM
Re: failed SAS drive in an rx6600 11.23 system
It's not that strange.
When you ordered the system there were several options for the boot disks:
1) connected to the internal SAS chip as "raw" disks
2) connected to the internal SAS chip as an IR volume (Integrated RAID)
3) connected to the optional Smart Array P400 as hardware RAID
So you likely have option 2), but the system was probably re-installed later.
Somebody at HP decided to populate the disks from slot 8 down to slot 1 - nobody knows why.
However, IMHO the hardware RAID is the better choice; it's a real pain to replace such a SAS disk in an LVM mirror configuration.
Finally, swap your disk in slot 8 and your problem is solved.
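The "swap and done" step can be verified afterwards. A sketch, assuming the IR firmware starts rebuilding automatically after the hot-swap and the same report format as the posted dump; the "after" report below, including the replacement drive's SAS address and the 42.00 % progress figure, is made up for illustration:

```shell
# A saved "after the swap" report; in practice re-run the controller
# report (e.g. via sasmgr -- command form assumed) and check it the same way:
cat > /tmp/raid_after.out <<'EOF'
Raid State       : REBUILDING
Rebuild Progress : 42.00 %
0x5000c50005966ae9 1   7   70007    PRIMARY   ONLINE
0x5000c500059999aa 1   8   70007    SECONDARY REBUILDING
EOF

# Fail loudly if any participating drive still reports FAILED:
if grep -q 'FAILED' /tmp/raid_after.out; then
  echo "drive still FAILED -- check seating or try another spare"
else
  grep -E 'Raid State|Rebuild Progress' /tmp/raid_after.out
fi
```

Once Rebuild Progress reaches 100 % and the secondary goes ONLINE, the degraded state should clear with no LVM involvement at all, since LVM only sees the one IR volume device.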
Hope this helps!
Regards
Torsten.

Hewlett Packard Enterprise International
© Copyright 2021 Hewlett Packard Enterprise Development LP