
MetroCluster

 
SOLVED
Duarte Guerra_1
New Member

MetroCluster

Hi All,

I'm in the process of designing a MetroCluster solution whereby the primary data centre has 5 nodes, with multiple packages that fail over onto 3 nodes in the secondary data centre. My query is: although the node counts are unequal, is this design supported? Please view the attachment for a breakdown of package/node allocation.

The solution versions are: HP-UX 11i v3, Serviceguard A.11.19 and Metrocluster A.09.00.

Your assistance would be much appreciated.
9 REPLIES
melvyn burnard
Honored Contributor

Re: MetroCluster

>the primary data centre has 5 nodes, with multiple packages that fail-over onto 3 nodes in the secondary data centre. My query is, although the nodes are unequal, is this design supported?
No, this is an unsupported configuration.
You need an equal number of hosts in each data centre.
My house is the bank's, my money the wife's, But my opinions belong to me, not HP!
Stephen Doud
Honored Contributor

Re: MetroCluster

See "Designing Disaster Recovery HA Clusters using Metrocluster and Continentalclusters" at http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02464583/c02464583.pdf.

Page 29 states:
"In the Metrocluster environment, the same number of systems must be present in each of the two data centers (Data Center A and Data Center B) whose systems are connected to the disk arrays. However, when using two arbitrator nodes in third location, you can have one node in one datacenter and two nodes in another datacenter."
Duarte Guerra_1
New Member

Re: MetroCluster

Hi All,

Thanks for your prompt response. I understand it is not supported (the number of nodes should be the same), but I would like to know what the constraints are behind this unsupported configuration, meaning, is it due to server performance issues, network bandwidth, package compilation, etc.?

As I see it, it may not be best practice, but why is it unsupported?

Furthermore, does the same rule (an equal number of nodes) apply to a Serviceguard configuration?

Thanks once again for your assistance.
Stephen Doud
Honored Contributor

Re: MetroCluster

I don't know the reason for the same-node requirement in Metrocluster. I suspect it has to do with the array-based "Continuous Access" software, since standard Serviceguard is not subject to such restrictions. Note that standard Serviceguard cluster node separation can range only as far as connectivity limitations to a local array permit.
Rita C Workman
Honored Contributor

Re: MetroCluster

Well, unfortunately I don't have the resources to prove out my thoughts on this, and I appreciate that is what the book says, as Melvyn and Stephen have pointed out. BUT...

The synchronization is done on the arrays, NOT the nodes. What fails over are packages, NOT the nodes. So, frankly I can't see why you would have to have equal nodes on both sides.

I would then question...would you need equal nodes in a Continental cluster? I'm thinking..you don't.

So is it because a Campus or Metro cluster is really 'one cluster'? Well, then I still don't see the need for equal nodes. I can see the need for an arbitrator, just as in a single data center with a single multi-node cluster you would have a quorum server. But I still don't see the need for equal nodes in both sites.

Sorry that this does not answer your question, Duarte, but maybe it will raise a couple of thoughts to inspire answers that will.

Regards,
Rita

..Hi Stephen..!



So, although the book says you must have equal nodes in both data centers, I still don't get the absolute reason as to 'why'...

Rita C Workman
Honored Contributor

Re: MetroCluster

Hey Stephen,
...just caught that you alluded to the point that it might be a CA (HP disk synchronization product) issue.

But, CA or SRDF just mirror disk between arrays. Period.
When you split the connection you have two free standing arrays. Simple.

Disks, regardless of which data center they sit in, are then zoned, mapped and mirrored to the respective servers. And clearly you can zone the same disk to multiple nodes that are controlled by the SAN's director/switch.

I still can't see the 'reason' for equal nodes. The need for an arbitrator...yes. But equal nodes still has me scratching my head.

Rgrds,
Rita

melvyn burnard
Honored Contributor
Solution

Re: MetroCluster

From above:
>>The synchronization is done on the arrays, NOT the nodes. What fails over are packages, NOT the nodes. So, frankly I can't see why you would have to have equal nodes on both sides.
The reason is that we do NOT support a cluster lock disk or LUN in a Metrocluster; you MUST have either a Quorum Server or Arbitrator nodes at a third location. These are used to do the normal type of arbitration. However, Metrocluster is a Disaster Tolerant design, and if you do NOT have equal nodes, then in the event of losing a site (which is why you buy and use Metrocluster) the surviving nodes will no longer make up a 50% tie-breaker situation. Hence the cluster will NOT go to arbitration, and the secondary-site nodes will all TOC, which is not what you want!
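
As a rough sketch of that quorum arithmetic (just an illustration in Python, not Serviceguard code; the site sizes are hypothetical examples):

def site_failure_outcome(site_a_nodes, site_b_nodes, failed_site):
    # Quorum rule as described above: more than 50% of configured nodes
    # reform on their own, exactly 50% go to arbitration (quorum server /
    # arbitrator nodes), less than 50% cannot reform and the nodes TOC.
    total = site_a_nodes + site_b_nodes
    survivors = site_b_nodes if failed_site == "A" else site_a_nodes
    if survivors * 2 > total:
        return "survivors hold more than 50%: cluster reforms without arbitration"
    if survivors * 2 == total:
        return "exactly 50%: the arbitrator/quorum server breaks the tie"
    return "survivors hold less than 50%: no quorum possible, surviving nodes TOC"

# Equal sites (4 + 4): losing either site leaves exactly 50%, so arbitration works.
print(site_failure_outcome(4, 4, "A"))
# Unequal sites (5 + 3): losing the 5-node primary leaves only 3 of 8 nodes.
print(site_failure_outcome(5, 3, "A"))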

>I would then question...would you need equal nodes in a Continental cluster? I'm thinking..you don't.
No, because that is a configuration with two or more clusters, so the rule does NOT apply. However, normal Serviceguard rules apply within each individual cluster.



My house is the bank's, my money the wife's, But my opinions belong to me, not HP!
Rita C Workman
Honored Contributor

Re: MetroCluster

Hi Melvyn,

So, since it's really one big cluster at two locations, all that is needed is to guarantee that quorum can be established. And like Stephen said:

".... However, when using two arbitrator nodes in third location, you can have one node in one datacenter and two nodes in another datacenter."

Thanks, that's why you guys make the big bucks!!

/rcw
Duarte Guerra_1
New Member

Re: MetroCluster

Thanks for your feedback thus far - really appreciated.

In essence, Melvyn, this is what you are alluding to, and please correct me if I'm wrong. In my primary DC I have 5 nodes and in my secondary DC I have 3 nodes, totalling an 8-node cluster. If the primary DC fails, that leaves me with 3 surviving nodes out of the 8, which is 38% of the total, so the survivors fall below the 50% needed to go to arbitration. Is my calculation correct?
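
A quick check of that arithmetic (a hypothetical Python sketch, just restating the numbers in the post above):

total_nodes = 5 + 3     # primary DC + secondary DC
survivors = 3           # secondary DC nodes left after losing the primary site
print(survivors / total_nodes)   # 0.375, i.e. 37.5%, below the 50% needed for arbitration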