How to survive EVA4000 enclosure failures?
10-01-2008 08:57 AM
I'm a bit concerned about potential enclosure failures in our EVA4000 system.
Have any of you experienced problems with this (is it common)? I'm not talking about the hot-swappable components such as PSUs and fans, but the passive box itself: the backplane, etc.
That leads to the obvious question: how can I minimize downtime in such an event without replication? Should I keep a spare enclosure, replace the broken unit, and move all the physical disks over to it?
Any ideas?
We're running a 2C2D configuration today.
10-01-2008 09:16 AM
Re: How to survive EVA4000 enclosure failures?
For the EVA3000/4000/4100 this is a problem, because the maximum number of enclosures is 4 while an RSS (Redundant Storage Set) is at least 6 physical disks.
You need a 2C8D configuration for a fully vertical solution; then you can even lose a whole enclosure without production downtime.
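To see why 2C8D matters, here is a back-of-the-envelope sketch assuming simple round-robin "vertical" placement of RSS members across shelves (an illustrative model, not HPE's actual leveling algorithm):

```python
# Worst-case number of disks from one RSS that end up in the same enclosure,
# assuming round-robin vertical striping across shelves (illustrative only).
from math import ceil

def max_rss_members_per_enclosure(rss_size: int, enclosures: int) -> int:
    """Worst-case count of one RSS's members sharing a single enclosure."""
    return ceil(rss_size / enclosures)

# 2C2D/2C4D: a 6-disk RSS over at most 4 enclosures -> 2 members share a
# shelf, so a whole-shelf failure hits two disks of the same RSS at once.
print(max_rss_members_per_enclosure(6, 4))  # -> 2

# 2C8D: a 6-disk RSS over 8 enclosures -> at most 1 member per shelf, so a
# whole-shelf failure looks like a single-disk failure to each RSS.
print(max_rss_members_per_enclosure(6, 8))  # -> 1
```

With at most one RSS member per shelf, the RSS's internal redundancy can absorb an entire enclosure loss, which is the point being made above.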
10-01-2008 09:25 AM
Re: How to survive EVA4000 enclosure failures?
Your idea of a spare enclosure does not work, because in an EVA all data is distributed across all physical disks. The disks are virtualized, so a whole-enclosure failure (one enclosure holds up to 14 disks) means multiple simultaneous disk failures in a disk group.
The only good/safe/quick solution is therefore remote replication (two EVA4000/4100s) with either CLX (automatic failover) or CA (manual failover).
10-01-2008 09:52 AM
Re: How to survive EVA4000 enclosure failures?
I've managed a few EVAs and don't recall ever seeing a complete enclosure failure.
Yes, I've had failures of PSUs, I/O modules and the like, but never the actual enclosure.
The closest I've seen was a self-induced failure of a shelf in an HSG80 system; those are essentially the same units, just SCSI rather than Fibre Channel. In that case I removed a failed PSU and didn't replace it straight away, and after a few minutes the whole shelf shut down without the second PSU in place. Unsurprisingly, I never tested whether an EVA shelf would do the same, but I suspect it would...
Hope this helps,
Regards,
Rob
10-01-2008 10:18 AM
Re: How to survive EVA4000 enclosure failures?
Yes, the EVA has that PSU timer: you have x minutes before the shelf shuts down to prevent overheating.
Keeping a spare "offline" enclosure doesn't prevent downtime, of course, but what I meant was: would it work to simply swap the enclosure, moving the physical disks, PSUs and everything over, to get it up and running again (a cold swap)? Or would the HSV controllers detect that the enclosure isn't the same one and refuse to bring the disk groups online again? There is no intelligence in the box itself, right?
10-01-2008 11:22 AM
Re: How to survive EVA4000 enclosure failures?
Points on the way.
/R
10-01-2008 11:33 AM
Re: How to survive EVA4000 enclosure failures?
> and refuse to bring the groups online again?
Not at all - that would be a serious design error. How could you ever deal with a defective disk enclosure otherwise?
On an older model I once removed both controllers and replaced them with other modules, and everything went fine.
On a different system I removed all the disk drives and stored them away. Later I put them into yet another controller/disk-enclosure setup, and all the data was still present.
10-04-2008 02:57 AM
Re: How to survive EVA4000 enclosure failures?
With a shelf there are three things that can potentially bring it down after 7 minutes:
1: physical removal of a PSU - NOT the mere fact that it has failed (unless it is causing other problems as well)
2: BOTH fans failing
3: two out of three temperature-sensor groups going over temperature
Mark...
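Those three triggers can be sketched as a simple boolean check (a hypothetical model for illustration; the names and structure are not taken from HPE firmware):

```python
# Hypothetical model of the three shelf-shutdown triggers described above.
# A failed-but-still-seated PSU alone does NOT start the shutdown timer.

def shelf_will_shut_down(psu_removed: bool,
                         fans_failed: int,
                         sensor_groups_over_temp: int) -> bool:
    """True if any of the three 7-minute shutdown triggers is active."""
    return (
        psu_removed                      # 1: a PSU physically pulled from the shelf
        or fans_failed >= 2              # 2: both fans have failed
        or sensor_groups_over_temp >= 2  # 3: 2 of 3 temp sensor groups over temp
    )

# One failed fan and one hot sensor group alone don't trip the timer:
print(shelf_will_shut_down(psu_removed=False, fans_failed=1,
                           sensor_groups_over_temp=1))  # -> False
```

This also matches Rob's HSG80 anecdote earlier in the thread: it was pulling the failed PSU, not the failure itself, that started the shutdown countdown.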