HPE SimpliVity

How many nodes can fail in the federation?

 
ChuaBaoGio
Visitor

How many nodes can fail in the federation?

I have two clusters (4 nodes per cluster) in the same federation.

I want to know how many nodes can fail per cluster and per federation.

 

Thanks

 

6 REPLIES
OzgurT
HPE Pro

Re: How many nodes can fail in the federation?

It depends on the HIVE placement. SimpliVity can handle a single node failure, plus disk failures on the remaining nodes. If the primary and secondary copies of a VM (or VMs) are not both on the two failed nodes, the system can also handle two nodes failing at the same time. For example, suppose you have 8 VMs in a 4-node SimpliVity cluster, where node 1 and node 2 hold the primary and secondary copies for VM1, VM2, VM3 and VM4, and node 3 and node 4 hold VM5, VM6, VM7 and VM8. If node 1 and node 3 fail at the same time, or node 1 and node 4, or node 2 and node 3, or node 2 and node 4, then SimpliVity can handle the two simultaneous node failures.
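
To make the placement rule concrete, here is a minimal Python sketch of that example (the VM and node names are made up, and this only illustrates the copy-placement idea, not how SimpliVity actually manages replicas). It checks which simultaneous two-node failures leave every VM with at least one surviving copy:

```python
from itertools import combinations

# Illustrative replica map for the example above (hypothetical names):
# each VM has a primary and a secondary copy on two different nodes.
placement = {
    "vm1": {"node1", "node2"}, "vm2": {"node1", "node2"},
    "vm3": {"node1", "node2"}, "vm4": {"node1", "node2"},
    "vm5": {"node3", "node4"}, "vm6": {"node3", "node4"},
    "vm7": {"node3", "node4"}, "vm8": {"node3", "node4"},
}
nodes = {"node1", "node2", "node3", "node4"}

def unavailable_vms(failed):
    """VMs whose primary AND secondary copies both sit on failed nodes."""
    return [vm for vm, copies in placement.items() if copies <= set(failed)]

# Check every possible simultaneous two-node failure in the 4-node cluster.
for pair in combinations(sorted(nodes), 2):
    lost = unavailable_vms(pair)
    status = "survives" if not lost else f"loses {lost}"
    print(f"failure of {pair}: cluster {status}")
```

With this placement, four of the six possible two-node failures are survivable; only the node1+node2 and node3+node4 pairs take both copies of some VMs offline.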


I work for HPE

Accept or Kudo

OzgurT
HPE Pro

Re: How many nodes can fail in the federation?

If you have two clusters with 4 nodes each, then you can lose one or two nodes at the cluster level, and two or four nodes at the federation level, depending on the HIVE placement.
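
As a rough illustration (assuming the two clusters fail independently and using the per-cluster numbers above; the cluster names are invented), the federation-level tolerance is simply the sum of what each cluster can tolerate:

```python
# Hypothetical federation of two independent 4-node SimpliVity clusters.
# Each cluster tolerates 1 node failure in every case, and 2 failures
# only when no VM has both of its copies on the two failed nodes.
per_cluster = {"cluster1": (1, 2), "cluster2": (1, 2)}  # (guaranteed, best case)

guaranteed = sum(g for g, _ in per_cluster.values())
best_case = sum(b for _, b in per_cluster.values())
print(f"Federation tolerates {guaranteed} node failures guaranteed, "
      f"up to {best_case} if HIVE placement cooperates.")
```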


I work for HPE

Accept or Kudo

DeclanOR
Respected Contributor

Re: How many nodes can fail in the federation?

Hi @ChuaBaoGio 

Federation-wide, the number of failures tolerated depends on the size of your federation.

When we speak about node failures, what we are really interested in is the cluster level. We can withstand a single node failure without any issues in clusters with more than one node. Single-node clusters are a different story, obviously. More than one node failing in a cluster of two or more nodes can potentially cause data unavailability, depending on the placement of the primary and secondary data copies, as Ozgur mentions. If two nodes in a four-node cluster fail, for example, then VMs that have their primary AND secondary data copies on the two failed nodes, A and B, will encounter data unavailability (primary on node A, secondary on node B, or vice versa).

NOTE: A functioning Arbiter, running outside the cluster it arbitrates for, also has an important role to play in this.

There is also a higher level of protection achieved through the use of stretch cluster functionality. With this functionality, a cluster is split into two zones, and it must contain an even number of nodes on each side. In a four-node stretch cluster, with 2 nodes on site A and 2 nodes on site B, we can withstand the loss of an entire site. If site B failed, for example, its VMs would reboot on site A. With the release of 3.7.10, we can now withstand additional node failures on the surviving site.
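
Here is a toy sketch of the stretch-cluster idea, assuming zone-aware placement keeps one copy of every VM on each site (the node and VM names are invented, and the Arbiter is not modelled). It shows that losing an entire site still leaves one copy of every VM on the surviving site:

```python
# Hypothetical 2+2 stretch cluster: two nodes per site, and every VM
# keeps one data copy in each availability zone.
zones = {"siteA": {"a1", "a2"}, "siteB": {"b1", "b2"}}
placement = {
    "vm1": {"a1", "b1"}, "vm2": {"a2", "b2"},
    "vm3": {"a1", "b2"}, "vm4": {"a2", "b1"},
}

def survives(failed_nodes):
    """True if every VM still has at least one copy on a healthy node."""
    return all(copies - failed_nodes for copies in placement.values())

# Losing either entire site leaves every VM with one surviving copy.
for site, members in zones.items():
    print(f"loss of {site}: {'survives' if survives(members) else 'data unavailable'}")
```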

I recommend looking at this video in our video library for more information about stretch clusters.

Video Link: https://support.hpe.com/hpesc/public/videoDisplay?videoId=vtc00000092en_us

To be honest, your question is a bit vague, and many factors contribute to a "correct" answer. Each failure scenario is different, depending on how your environment is configured, whether your Arbiter is installed as per best practice, etc.

Hope this helps.

DeclanOR #I am an HPE Employee

 

P.S: Moderator Edit: Post edited and replaced with the correct link. 

Accept or Kudo



ChuaBaoGio
Visitor

Re: How many nodes can fail in the federation?

Hi DeclanOR,

At the federation level, with 8 nodes, how many nodes can fail?

ChuaBaoGio
Visitor

Re: How many nodes can fail in the federation?

Hi, can you give me any documentation about your post?

Thanks

DeclanOR
Respected Contributor

Re: How many nodes can fail in the federation?

Hi @ChuaBaoGio 

Please read the following blog posts. This is part 1 of a 5-part series, which explains how data is managed in a SimpliVity system. I believe it will help you understand more clearly. Parts 2-5 are linked from part 1.

https://community.hpe.com/t5/Shifting-to-Software-Defined/How-VM-data-is-managed-within-an-HPE-SimpliVity-cluster-Part-1/ba-p/7019102#.XfpOAWT7SUk

Thanks,

DeclanOR #I am an HPE Employee

Accept or Kudo