
Mad_1
Regular Advisor

Multi nodes cluster

I have configured two-node clusters before, but there will be an evaluation of combining existing clusters to form a multi-node cluster, most likely a four-node cluster. I would therefore like to ask the following questions:

1. How do I configure a four-node cluster? What is a common configuration for a four-node cluster?

2. My initial idea is three packages running on three nodes respectively, with the fourth node configured as the standby node for all three packages. Is this OK?

3. How do I configure and connect the cluster lock disk?

4. Are there any other considerations I should take into account?

Thanks in advance.
John Poff
Honored Contributor
Solution

Re: Multi nodes cluster

Hi,

1. A four node cluster is just like your two node cluster with two extra nodes. You'll follow the same steps for adding nodes as you did when you built your original cluster (see the example commands after this list).

2. Sounds good. It all depends on your applications and your nodes, so that is really up to you.

3. No cluster lock disk in a four node cluster. The cluster lock disk is required for a two node cluster, is optional for a three node cluster, and doesn't exist with four nodes or more. Also, the latest version of MC/SG, 11.14, brings in the concept of a quorum server, which is a separate box that handles the lock disk issue.

4. Just make sure your failover node can handle at least two of the other nodes being down and has enough power to run your applications.
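To expand on point 1, building the four-node configuration is roughly the same sequence you already know. This is only a sketch; the node names and file paths are placeholders, so adjust them for your environment:

  # Generate an ASCII cluster configuration covering all four nodes
  cmquerycl -v -C /etc/cmcluster/cluster.ascii \
      -n node1 -n node2 -n node3 -n node4

  # Edit cluster.ascii as needed, then verify and apply it
  cmcheckconf -C /etc/cmcluster/cluster.ascii
  cmapplyconf -C /etc/cmcluster/cluster.ascii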

Sounds like it should be fun to do!

JP
Christopher McCray_1
Honored Contributor

Re: Multi nodes cluster

Hello,

All of what John said is good, except that a lock disk is not required, but still recommended for a 4-node cluster; it isn't supported for 5 or more nodes. However, the quorum server sounds like a possibility.

Make sure all shared package disks and/or the lock disk(s) are visible to all nodes.

For your package .conf files, I would look at the FAILOVER_POLICY; I would change it from CONFIGURED_NODE to MIN_PACKAGE_NODE, then verify and apply the change with cmcheckconf and cmapplyconf using the -P option.
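As a rough illustration only (the package name and file path here are invented for the example):

  # In /etc/cmcluster/pkg1/pkg1.conf, change:
  #   FAILOVER_POLICY    CONFIGURED_NODE
  # to:
  FAILOVER_POLICY    MIN_PACKAGE_NODE

  # Then verify and apply just that package configuration:
  cmcheckconf -P /etc/cmcluster/pkg1/pkg1.conf
  cmapplyconf -P /etc/cmcluster/pkg1/pkg1.conf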



Other than that, good luck and have fun!!!

Chris
Byron Myers
Trusted Contributor

Re: Multi nodes cluster

Mad, your idea sounds like it will work fine, but is there some reason why the three packages cannot run on one node? If they can, then all you need is two nodes: run two packages on one node and the other package on the second node. Each node will be a standby for its counterpart node.
Mad_1
Regular Advisor

Re: Multi nodes cluster

Thanks for all your replies.

Byron, the three packages I mentioned were just an example. The real situation is that each of the first three nodes runs its own packages (more than one), and the fourth node can be treated as the standby node for the first three nodes.

By the way, I would like to follow up with a few more questions:

1. Since only the 4th node needs to share disks with each of the first three nodes, is there no requirement to share any disks among the first three nodes themselves?

2. If a lock disk is configured, it should be visible to all 4 nodes. How can it be connected to and configured on all 4 nodes?

3. In the two-node case, when the nodes lose contact with each other, whichever node gets the lock takes control of the cluster. But in a 4-node (or even 3-node) cluster, in what situation does the lock disk take effect?

4. Should the quorum server be one of the cluster nodes? And can anyone tell me more about the quorum server?

Thanks
John Poff
Honored Contributor

Re: Multi nodes cluster

Hi again,

1. Right. As long as the failover node can see the same disks as the primary node, you are fine. In your four node cluster, this means that your 4th node will need to see everything that the first three nodes can see.

2. It gets tough to do a lock disk with 4 nodes. Either you'll need to be on a SAN [I guess, I haven't worked on a SAN before] or you'll need to do the quorum server.

3. The lock disk takes effect when a node fails and there is an even number of nodes left. In your 4 node cluster, a failure of one node will leave three nodes, and they will hold an election. One node will win by a score of 2 to 1, so you will be ok. If you lost two nodes, however, the cluster would rely on the lock disk. The remaining nodes try to lock the lock disk. The winner reforms the cluster and the loser TOCs. I've run a three node cluster before and I've had that happen to me when I didn't want it, so it can be a real pain without a lock disk.

4. I've only read a little about the quorum server, but I think it has to be a completely separate machine from any of the nodes in the cluster.

Here is a snippet from the Quorum Server Release Notes:

http://docs.hp.com/hpux/onlinedocs/B8467-90001/B8467-90001.html


Use of the Quorum Server as the Cluster Lock

A quorum server can be used in clusters of any size. The quorum server process runs on a machine outside of the cluster for which it is providing quorum services. The quorum server listens to connection requests from the ServiceGuard nodes on a known port. The server maintains a special area in memory for each cluster, and when a node obtains the cluster lock, this area is marked so that other nodes will recognize the lock as "taken." If communications are lost between two equal-sized groups of nodes, the group that obtains the lock from the Quorum Server will take over the cluster and the other nodes will perform a TOC. Without a cluster lock, a failure of either group of nodes will cause the other group, and therefore the cluster, to halt. Note also that if the quorum server is not available during an attempt to access it, the cluster will halt.


It sounds like the quorum server is the way to go, as it eliminates the need for the lock disks, which can be a pain to configure when you have three or more nodes.
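For reference, the quorum server shows up in the cluster configuration roughly like this. This is only a sketch; the hostname and timing values are placeholders, so check the release notes above for the exact syntax:

  # Generate a cluster configuration that uses a quorum server
  # (qs-host is a separate machine outside the cluster)
  cmquerycl -v -C /etc/cmcluster/cluster.ascii \
      -q qs-host -n node1 -n node2 -n node3 -n node4

  # The resulting cluster.ascii then contains entries such as:
  # QS_HOST                 qs-host
  # QS_POLLING_INTERVAL     300000000
  # QS_TIMEOUT_EXTENSION    2000000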

JP
Christopher McCray_1
Honored Contributor

Re: Multi nodes cluster

Hello,


All of what John has said in answering your second set of questions has been excellent.

I just want to emphasize that for your environment ALL disks that you intend to share as packages, and even the lock disk, must be visible to all nodes in the cluster. You make the lock disk visible to all nodes the exact same way you do the package disks: by having it on an array that is capable of hosting more than two servers. High-availability disk arrays capable of handling an SG environment range from the XP-1024 (EMC Symmetrix, or equivalent) all the way down to the 12H AutoRAID. However, with the AutoRAID, you can have only a two-node cluster, which is my situation with one of my clusters. I have to migrate to my XP-512 if I want to change that.

A way of making the lock disk visible to all nodes is by making one of your package volume groups and one of its disks the cluster lock volume group and cluster lock disk.
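As a very rough sketch (the volume group name, addresses, and device files here are invented), that looks something like this in the cluster ASCII file:

  FIRST_CLUSTER_LOCK_VG   /dev/vg_pkg1

  NODE_NAME               node1
    NETWORK_INTERFACE     lan0
      HEARTBEAT_IP        10.0.0.1
    FIRST_CLUSTER_LOCK_PV /dev/dsk/c4t0d0

  NODE_NAME               node2
    NETWORK_INTERFACE     lan0
      HEARTBEAT_IP        10.0.0.2
    FIRST_CLUSTER_LOCK_PV /dev/dsk/c4t0d0

  # ...and likewise for the remaining nodes. The same physical disk
  # must be reachable from every node, even if its device file name
  # differs from node to node.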

Hope this helps

Chris
Mad_1
Regular Advisor

Re: Multi nodes cluster

I still have some concerns about the cluster lock disk. Is it possible to connect all 4 cluster nodes to the lock disk if the lock disk is only on an F/W SCSI bus?

How would that be configured?
Christopher McCray_1
Honored Contributor

Re: Multi nodes cluster

Hello,

I'm going to play devil's advocate here and say no.

I'm beginning to see possible problems in respect to what you have hardware-wise vs. what your plans are.

What kind of disk array do you have?? If you are using a 12h or some kind of JBOD, then you aren't going to be able to expand your cluster. You will need something larger, with more connections.

Let me know what you have.

Chris
John Poff
Honored Contributor

Re: Multi nodes cluster

Hi,

Chris is right. Doing a lock disk in a four node cluster will be tough.

I've worked with a three node cluster before and we didn't have a lock disk because it was pretty tough to configure. A four node cluster would be even harder. Some of the other local wizards may have some ideas about how to do it and how to cable it up, but it seems pretty tricky to do with SCSI connections. That's why, if you are doing an evaluation and you can use the latest version of MC/SG [11.14], I would suggest trying out the quorum server. It sounds like a much better and easier way to handle it.

JP