01-15-2024 08:31 PM - last edited on 01-16-2024 07:54 AM by support_s
3PAR Node double fault Action Plan.
I work under a 3PAR account support plan and have a question.
How can we recover if two nodes fail in a 3PAR 7400/8400?
My understanding is that if 2 of the 4 nodes are faulty, the node cluster goes down.
I don't think node rescue will work in that state. Is that correct?
If so, what would the solution be?
Is it OOTB? If we perform an OOTB (out-of-the-box) reinstall, customer data will be deleted, and I want to avoid that.
A double node failure is extremely rare, but I think we should understand the recovery procedure.
Please give me some advice.
01-16-2024 05:23 AM - edited 01-16-2024 05:28 AM
Re: 3PAR Node double fault Action Plan.
Hi 555-denoh,
In an HPE 3PAR system, a double node failure is indeed rare and critical.
You are correct that node rescue may not work with two faulty nodes in the cluster: node rescue usually succeeds when a single node is unresponsive, but it may fail when multiple nodes do not respond.
An OOTB (Out of the Box) procedure is not recommended in your case, since it would wipe the system.
I recommend you contact HPE Support immediately if two nodes have failed. They can provide specific guidance based on the details of the system and the nature of the failure.
The HPE Support team will guide you on how to back up your critical data and restore it after the failed nodes are replaced.
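For reference, before deciding whether node rescue is even viable, the node and cluster state can be inspected from the HPE 3PAR CLI on a surviving node. This is a hedged sketch only: command names are from the 3PAR OS CLI, but exact flags and output columns vary by release, so check the CLI reference for your 3PAR OS version.

```shell
# List all controller nodes and their current state;
# failed or missing nodes will not show an OK state
shownode

# Run the built-in health check for node components
# (reports degraded/failed items the support team will ask about)
checkhealth node

# Show overall system details, including system state
showsys -d
```

Capturing this output (or an InSplore/SmartStart log, if your tooling supports it) before calling HPE Support can speed up their diagnosis considerably.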
Hope this helps.
Regards,
Satish
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

01-16-2024 05:50 PM
Re: 3PAR Node double fault Action Plan.
Hello Satish-san,
Thank you for your advice.
I was an HPE employee until half a year ago.
While I was there, I was concerned about how to resolve a 3PAR double node failure, but I left HPE before I could find an answer.
After that, I joined a third-party maintenance company and was put in charge of 3PAR.
I would like to hear from anyone who has experience resolving a 3PAR double node failure.
Thanks, best regards.