8 Node cluster with 5 managers
10-29-2019 04:27 AM
Hi all,
I'm trying to understand how many nodes I can lose in my cluster and still have the data available. I currently have the following:
8 nodes in a single cluster
5 managers are in use
Volumes are configured in Network RAID10.
Any ideas how many nodes I could lose in the cluster before the data/volumes go offline? I always thought that if 2 or more nodes fail, the volumes go offline.
Thanks
Solved! Go to Solution.
10-29-2019 07:26 AM
Re: 8 Node cluster with 5 managers
Don't worry, I've managed to work it out.
10-29-2019 11:04 PM
Re: 8 Node cluster with 5 managers
Hi @aur ,
Can you please let me know how you fixed this problem? It would be helpful. Thanks.
10-30-2019 01:11 AM
Solution
No problem,
With 8 nodes, you can have all of the even-numbered nodes or all of the odd-numbered nodes fail without issue. The problem comes when two adjacent nodes fail, because in Network RAID-10 the two copies of any given data live on adjacent nodes in the cluster.
1,3,5,7 can fail and the volumes will still be online
2,4,6,8 can fail and the volumes will still be online
If an adjacent pair such as 1,2 or 3,4 or 4,5 fails, the volumes will go offline.
So when the documentation states that half of the cluster can go offline, it is true, but it really depends on which nodes are offline, and you also still need manager quorum (with 5 managers, at least 3 must remain running). Obviously this is Network RAID-10 and not RAID-10+1 / RAID-10+2.
Hope this helps someone out there.
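To make the rule concrete, here is a minimal Python sketch of it. This is my own model for illustration, not something from the StoreVirtual documentation: it assumes Network RAID-10 mirrors each block across adjacent nodes in ring order (1-2, 2-3, ..., 8-1), assumes the managers sit on nodes 1-5, and requires a majority of the 5 managers to survive for quorum. Note that under this assumed manager placement the 1,3,5,7 case only stays online if enough managers are on surviving nodes.

```python
# Minimal sketch of the failure rule above. Assumptions (not from the post or
# StoreVirtual docs): adjacent-pair mirroring in ring order, managers on nodes 1-5.

def volumes_online(failed, nodes=8, managers=frozenset({1, 2, 3, 4, 5})):
    """Return True if the volumes should stay online after `failed` nodes go down."""
    failed = set(failed)

    # Manager quorum: more than half of the managers must still be running.
    if len(managers - failed) <= len(managers) // 2:
        return False

    # Network RAID-10 (assumed adjacent-pair mirroring): if both nodes of any
    # adjacent pair are down, some data has lost both of its copies.
    for n in range(1, nodes + 1):
        neighbour = n % nodes + 1          # node 8 wraps around to node 1
        if n in failed and neighbour in failed:
            return False
    return True


if __name__ == "__main__":
    mgrs = frozenset({1, 2, 3, 4, 5})      # assumed manager placement
    print(volumes_online({2, 4, 6, 8}, managers=mgrs))  # True: no adjacent pair down, 3 managers survive
    print(volumes_online({1, 3, 5, 7}, managers=mgrs))  # False: no adjacent pair down, but only 2 managers survive
    print(volumes_online({3, 4}, managers=mgrs))        # False: adjacent pair 3,4 is down
```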