Community Home > Storage > Entry Storage Systems > Disk Enclosures > MSA1000 disk fail caused Device/Pool deactivation
09-04-2005 11:46 AM
MSA1000 disk fail caused Device/Pool deactivation
I recently installed and built an entry-level SAN using several DL360 G4 servers (with QLogic HBAs) connected to a new MSA1000 (SAN Switch 2/8 plus two MSA30 enclosures).
The servers run NetWare OES (v6.5 SP3), and the firmware on the MSA1000 has been updated to the latest versions (FabricOS v3.2.0a, MSA v4.48).
One of the U320 146 GB drives failed last night. Although the MSA selected a hot spare and started an array rebuild as expected, every Pool and Volume on the NetWare server deactivated with "device failure" messages.
I was under the impression that the point of a RAID array is that a drive failure is repaired seamlessly and functionality is not impaired (only slowed slightly, depending on the Rebuild priority setting).
I do not understand why Pools residing on entirely separate arrays (I have defined four separate RAID5 arrays across the MSA cabinets) also failed, nor why the server had to be power cycled before any volumes could be seen and mounted by clients.
There is no redundancy built into the SAN infrastructure (no secondary SAN switch, duplexed fibres, or dual HBAs), but even if there were, I assume this failure would still have occurred: it appears the MSA controller failed a low-level NetWare disk-access request when the drive died, rather than serving the requested data while repairing the fault in the background.
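That expectation matches how RAID5 works in principle: each stripe carries an XOR parity block, so any one lost disk can be rebuilt from the survivors without interrupting reads. A minimal sketch (illustrative only, not HPE or MSA code; the byte strings stand in for stripe blocks):

```python
from functools import reduce

def parity(blocks):
    """XOR the blocks byte-by-byte to produce the RAID5 parity block."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# One stripe across three data "disks"; the fourth member holds parity.
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Simulate losing disk 1: XOR of the surviving members recovers it,
# which is exactly what a hot-spare rebuild does stripe by stripe.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

So a single failed member should have remained fully readable during the rebuild, which is why the pool deactivations are surprising.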
Any ideas on what is happening here?
Is this a configuration error or a fault in the MSA?
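For anyone hitting the same deactivation, a power cycle may not be strictly necessary: NSS pools can often be reactivated from the NetWare server console. A sketch, assuming the standard NSS commands on OES/NetWare 6.5 (POOL1 and VOL1 are placeholder names):

```
scan for new devices          # rescan the HBAs so the MSA LUNs reappear
list devices                  # confirm the LUNs are visible again
nss /poolactivate=POOL1       # reactivate the deactivated NSS pool
mount VOL1                    # remount a volume on the reactivated pool
```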
1 REPLY
09-06-2005 02:19 PM
Re: MSA1000 disk fail caused Device/Pool deactivation
No replies.
Moving to Storage Area Network (SAN) list as a better match for the problem.