Mad_1
Regular Advisor

Multi nodes cluster

I have configured two-node clusters before, but we will be evaluating combining existing clusters to form a multi-node cluster, most likely a four-node cluster. I would like to ask the following questions:

1. How do I configure a four-node cluster? What is a common configuration for one?

2. My initial idea is three packages running on three nodes respectively, with the fourth node configured as the standby node for all three packages. Is this OK?

3. How do I configure and connect the cluster lock disk?

4. Are there any other considerations I should take into account?

Thanks in advance.
John Poff
Honored Contributor
Solution

Re: Multi nodes cluster

Hi,

1. A four node cluster is just like your two node cluster with two extra nodes. You'll follow the same steps for adding nodes as you did when you built your original cluster.

2. Sounds good. It all depends on your applications and your nodes, so that is really up to you.

3. No cluster lock disk in a four node cluster. The cluster lock disk is required for a two node cluster, is optional for a three node cluster, and doesn't exist with four nodes or more. Also, the latest version of MC/SG, 11.14, brings in the concept of a quorum server, which is a separate box that handles the lock disk issue.

4. Just make sure your failover node can handle at least two of the other nodes being down and has enough power to run your applications.

Sounds like it should be fun to do!

JP
Christopher McCray_1
Honored Contributor

Re: Multi nodes cluster

Hello,

All of what John said is good, except that a lock disk is not required, but still recommended for a 4-node cluster; it isn't supported for 5 or more nodes. However, the quorum server sounds like a possibility.

Make sure all shared package and/or lock disk(s) are visible to all nodes.

For your package.conf files, I would change the FAILOVER_POLICY from CONFIGURED_NODE to MIN_PACKAGE_NODE, then verify and apply the change with cmcheckconf and cmapplyconf using the -P option.
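The change Chris describes would look roughly like this as an excerpt from a package ASCII file (the package name, file name, and surrounding parameters here are illustrative, not taken from your environment):

```
# pkgA.conf -- excerpt, names illustrative
PACKAGE_NAME        pkgA
FAILOVER_POLICY     MIN_PACKAGE_NODE    # was CONFIGURED_NODE
```

followed by `cmcheckconf -P pkgA.conf` to verify the file and `cmapplyconf -P pkgA.conf` to apply it to the running cluster.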



Other than that, good luck and have fun!!!

Chris
It wasn't me!!!!
Byron Myers
Trusted Contributor

Re: Multi nodes cluster

Mad, your idea sounds like it will work fine, but is there some reason why the three packages cannot run on one node? If they can, then all you need is two nodes: run two packages on one node and the third package on the second node. Each node will be a standby for its counterpart node.
If you can focus your eyes far and straight enough ahead of yourself, you can see the back of your head.
Mad_1
Regular Advisor

Re: Multi nodes cluster

Thanks for all your reply.

Byron, the three packages I mentioned were just an example. In the real situation, each of the first three nodes runs its own packages (more than one), and the fourth node acts as the standby node for the first three.

By the way, I would like to further my questions:

1. Since only the fourth node needs to share disks with each of the first three nodes, is there any requirement to share disks among the first three nodes themselves?

2. If a lock disk is configured, it must be visible to all four nodes. How can it be connected and configured across all four nodes?

3. In a two-node cluster, when heartbeat polling between the nodes fails, whichever node gets the lock takes control of the cluster. But in a four-node (or even three-node) cluster, in what situation does the lock disk take effect?

4. Should the quorum server be one of the cluster nodes? Can anyone tell me more about the quorum server?

Thanks
John Poff
Honored Contributor

Re: Multi nodes cluster

Hi again,

1. Right. As long as the failover node can see the same disks as the primary node, you are fine. In your four node cluster, this means that your 4th node will need to see everything that the first three nodes can see.

2. It gets tough to do a lock disk with 4 nodes. Either you'll need to be on a SAN [I guess, I haven't worked on a SAN before] or you'll need to do the quorum server.

3. The lock disk takes effect when a node fails and there is an even number of nodes left. In your 4 node cluster, a failure of one node will leave three nodes, and they will hold an election. One node will win by a score of 2 to 1, so you will be ok. If you lost two nodes, however, the cluster would rely on the lock disk. The remaining nodes try to lock the lock disk. The winner reforms the cluster and the loser TOCs. I've run a three node cluster before and I've had that happen to me when I didn't want it, so it can be a real pain without a lock disk.
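The election arithmetic above can be sketched as a small rule of thumb (my own simplification for illustration; this function is hypothetical, not ServiceGuard code): a strict majority of the previous membership wins the election outright, an exact 50/50 split needs a tiebreaker (lock disk or quorum server), and less than half can never reform the cluster.

```python
def can_reform(surviving, total, has_tiebreaker):
    """Return True if the surviving group can reform the cluster.

    surviving:      number of nodes still communicating
    total:          cluster membership before the failure
    has_tiebreaker: True if a lock disk or quorum server is configured
    """
    if 2 * surviving > total:
        return True            # strict majority: wins the election outright
    if 2 * surviving == total:
        return has_tiebreaker  # even split: the tiebreaker decides
    return False               # minority: cannot reform

# 4-node cluster, one node fails: 3 of 4 survive, majority wins 2-to-1
assert can_reform(3, 4, has_tiebreaker=False)
# 4-node cluster, two nodes fail: exact split, a tiebreaker is required
assert not can_reform(2, 4, has_tiebreaker=False)
assert can_reform(2, 4, has_tiebreaker=True)
```

This matches John's description: losing one node of four is safe, but losing two leaves an even split that only a lock disk or quorum server can resolve.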

4. I've only read a little about the quorum server, but I think it has to be a completely separate machine from any of the nodes in the cluster.

Here is a snippet from the Quorum Server Release Notes:

http://docs.hp.com/hpux/onlinedocs/B8467-90001/B8467-90001.html


Use of the Quorum Server as the Cluster Lock

A quorum server can be used in clusters of any size. The quorum server process runs on a machine outside of the cluster for which it is providing quorum services. The quorum server listens to connection requests from the ServiceGuard nodes on a known port. The server maintains a special area in memory for each cluster, and when a node obtains the cluster lock, this area is marked so that other nodes will recognize the lock as "taken." If communications are lost between two equal-sized groups of nodes, the group that obtains the lock from the Quorum Server will take over the cluster and the other nodes will perform a TOC. Without a cluster lock, a failure of either group of nodes will cause the other group, and therefore the cluster, to halt. Note also that if the quorum server is not available during an attempt to access it, the cluster will halt.


It sounds like the quorum server is the way to go, as it eliminates the need for the lock disks, which can be a pain to configure when you have three or more nodes.

JP
Christopher McCray_1
Honored Contributor

Re: Multi nodes cluster

Hello,


All of what John has said in answering your second set of questions has been excellent.

I just want to emphasize that for your environment ALL disks that you intend to share in packages, and even the lock disk, must be visible to all nodes in the cluster. You make the lock disk visible to all nodes the exact same way you do the package disks: by putting it on an array capable of hosting more than two servers. High-availability disk arrays capable of handling an SG environment range from the XP-1024 (EMC Symmetrix, or equivalent) all the way down to the 12H AutoRAID. However, with the AutoRAID you can have only a two-node cluster, which is my situation with one of my clusters. I would have to migrate to my XP-512 if I wanted to change that.

A way of making the lock disk visible to all nodes is by making one of your package volume groups and one of its disks the cluster lock volume group and cluster lock disk.
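The approach Chris describes would show up in the cluster ASCII file roughly like this (VG, node, interface, and device names are illustrative): a package volume group is declared as the cluster lock VG, and each node's section names one of its physical volumes as that node's path to the lock disk.

```
# cluster.ascii -- excerpt, names illustrative
FIRST_CLUSTER_LOCK_VG   /dev/vg_pkgA

NODE_NAME               node1
  NETWORK_INTERFACE     lan0
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c4t0d0
```

Every node section carries its own FIRST_CLUSTER_LOCK_PV entry pointing at the same physical disk, which is exactly why that disk must be physically reachable from all nodes.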

Hope this helps

Chris
It wasn't me!!!!
Mad_1
Regular Advisor

Re: Multi nodes cluster

I still have some concerns about the cluster lock disk. Is it possible to connect all four cluster nodes to the lock disk if it is only F/W SCSI?

How would that be configured?
Christopher McCray_1
Honored Contributor

Re: Multi nodes cluster

Hello,

I'm going to play devil's advocate here and say no.

I'm beginning to see possible problems in respect to what you have hardware-wise vs. what your plans are.

What kind of disk array do you have?? If you are using a 12h or some kind of JBOD, then you aren't going to be able to expand your cluster. You will need something larger, with more connections.

Let me know what you have.

Chris
It wasn't me!!!!
John Poff
Honored Contributor

Re: Multi nodes cluster

Hi,

Chris is right. Doing a lock disk in a four node cluster will be tough.

I've worked with a three node cluster before and we didn't have a lock disk because it was pretty tough to configure. A four node cluster would be even harder. Some of the other local wizards may have some ideas about how to do it and how to cable it up, but it seems pretty tricky to do with SCSI connections. That's why, if you are doing an evaluation and you can use the latest version of MC/SG [11.14], I would suggest trying out the quorum server. It sounds like a much better and easier way to handle it.

JP
Mad_1
Regular Advisor

Re: Multi nodes cluster

Currently, several two-node clusters are configured in my environment. A sample configuration is listed below:

Cluster I
=========
Node 1 with package A (running critical and heavy application)
Node 2 with package B (running a less-critical and light application)
Shared HASS as cluster lock disk.
Shared Model 10 disk array as data disk.
Lan and serial heartbeat configured.

Cluster II
==========
Node 3 with package C (running critical and heavy application)
Node 4 with package D (running a less-critical and light application)
lock disk & heartbeat are the same as Cluster I.

Cluster III & IV are similar to the above.

As you can see, in every cluster there is a less-critical, light package running. Therefore, I would like to combine several critical packages to form a three- or four-node cluster; fewer servers would then be required after the cluster restructuring.

As I mentioned previously, my concern is on the cluster lock disk and heartbeat configuration.

Thanks
Christopher McCray_1
Honored Contributor

Re: Multi nodes cluster

Hello,

John and anyone else may correct me on this, but I am certain that your current config isn't expandable in the area of your array hardware.

The Model 10 is only capable of connecting to up to two hosts, so there goes your lock disk. However, the lock disk is NOT REQUIRED, but RECOMMENDED, so you can look into the quorum server, or omit the lock disk altogether.

If your data array is the same, you will not be able to expand that either, because the shared disks MUST be visible to all nodes in your cluster. (If you can't hook the arrays up directly to all nodes, then you're out of luck.)

The serial heartbeat is only supported in two-node configurations, so you would have to move to a network heartbeat configuration; a dedicated serial heartbeat link is likewise a two-node-only feature.

I apologize for not asking sooner what your config was, as this is probably the most important aspect of a cluster and will limit you accordingly. Your issue is upgrading your disk components and the $$$$ involved (HA is expensive, isn't it??).

Good luck and have a merry Christmas.

Chris


It wasn't me!!!!
Mad_1
Regular Advisor

Re: Multi nodes cluster

Merry Christmas to all.

Chris

As I mentioned, in the new cluster configuration the first three nodes run three primary packages respectively, and the fourth node is the standby node for the first three. Is it possible that only the fourth node sees all the shared disks, but not the first three nodes? As below:

|------|     |------|     |------|
| Node |     | Node |     | Node |
|  A   |     |  B   |     |  C   |
|------|     |------|     |------|
    |            |            |
 ======       ======       ======
 |disk|       |disk|       |disk|
 | A  |       | B  |       | C  |
 ======       ======       ======
    |            |            |
    |        |------|         |
    |________| Node |_________|
             |  D   |
             |------|

That means nodes A, B, and C are not required to see all the shared disks. A failure of node A, B, or C will trigger a package failover to node D. The Model 10 disk arrays could then be reused. Does MC/SG support this type of configuration?

Furthermore, is there any type of SCSI disk that supports attachment to multiple nodes?

I have studied the "Managing MC/SG" guide and found that configuring a cluster is not an easy task. Studying the guide alone is not enough, and many things in it are not clearly defined.

Finally, I would like to say thanks to all of you for your help.
Mad_1
Regular Advisor

Re: Multi nodes cluster

Sorry the figure in my previous reply is distorted.

The figure is attached here.
Christopher McCray_1
Honored Contributor

Re: Multi nodes cluster

Hello,

I see now (I think) what you are talking about after referencing the guide, and it seems possible.

The problem is that I have never done this before. When I went to the training, all that was ever discussed in respect to the shared disks was that they need to be visible to all nodes, although certain pages in chapter 2 suggest otherwise.

It seems possible; however, I don't have the resources (and I suspect you don't either) to test this. I apologize for not being more help, but I also don't want to give you fatal advice.

I'll look around, but for now, good luck.

Chris
It wasn't me!!!!
avsrini
Trusted Contributor

Re: Multi nodes cluster

Hi Mad,
Your figure is very clear, and this setup is quite possible.

1. First configure the cluster with all four nodes and no lock VG. You can use a quorum server if you have one; I installed the quorum server and it is really easy to set up. You just edit the file /etc/cmcluster/qs_authfile and add the names or IP addresses of the nodes that will use the quorum server.

2. Configure the VGs on nodes A, B, and C, and vgimport them on node D.

3. Configure the packages to fail over to node D.

4. If possible use multiple heartbeat links to the cluster.
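Srini's four steps might be sketched like this (the host, VG, and device names are all illustrative, and the commands are HP-UX/ServiceGuard specific, so treat this as an outline rather than a recipe):

```
# 1. On the quorum server machine (not a cluster member), authorize the
#    cluster nodes in /etc/cmcluster/qs_authfile, one name or IP per line.

# 2. On node A, export a map of a package VG and import it on node D:
vgexport -p -s -m /tmp/vg_pkgA.map vg_pkgA   # -p previews, writing only the map
# ...copy /tmp/vg_pkgA.map to node D, then on node D:
vgimport -s -m /tmp/vg_pkgA.map vg_pkgA

# 3. Generate, check, and apply the cluster configuration, naming the
#    quorum server host with -q instead of a lock VG:
cmquerycl -n nodeA -n nodeB -n nodeC -n nodeD -q qshost -C cluster.ascii
cmcheckconf -C cluster.ascii
cmapplyconf -C cluster.ascii
```

Repeat the vgexport/vgimport step for each package VG that node D must be able to activate, and then set node D as an adoptive node in each package's configuration.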

If you need further help keep posting.

Happy clustering.

Srini
Be on top.
Mad_1
Regular Advisor

Re: Multi nodes cluster

Thank you all! I've just finished my Christmas vacation. How did you spend your Christmas? I wish you all a wonderful and fruitful new year.

After several rounds of questions and answers, I have a clearer view of multi-node clusters.

I would like to ask 2 more questions:

1. Should the quorum server be one of the cluster nodes, or must it not be any of them?

2. As I understand it, a cluster can only be started when 50% or more of the cluster nodes are alive and able to communicate. Take a three-node cluster as an example: if one node fails, the other two take control and form a two-node cluster. My question is: after this two-node cluster has formed successfully, if one more node fails (assuming a lock disk is configured), will a single-node cluster take control (50% of the two-node cluster)? Or will the whole cluster fail and be unable to restart (one surviving node being only 33.33% of the original three-node cluster)?

Thanks


John Poff
Honored Contributor

Re: Multi nodes cluster

Hi Mad,

We had a great Christmas!

1. From what I remember reading about the quorum server, it has to be on a system that is not one of the nodes in the cluster, which kind of makes sense. The quorum server needs to be able to break the tie during an election if the cluster is reforming, and it wouldn't work if it was running on a node that failed.

2. When the cluster starts it expects to have communications with all the nodes, and the cluster won't start until the communication is successful to all nodes. You can force the cluster up on just one node. I have done that before when we were working on systems and I wanted to get things started. I think the 'cmruncl -f nodename' will start the cluster on a node without having all the other nodes up.

JP
Mad_1
Regular Advisor

Re: Multi nodes cluster

Most of my concern is about the cluster lock disk.

Does anyone have experience configuring a lock disk in a multi-node cluster?

1. Which types of disk can be configured as a cluster lock disk in a multi-node cluster?

2. How can the lock disk be configured and connected within the cluster so that all nodes can see it?

Thanks
avsrini
Trusted Contributor

Re: Multi nodes cluster

Hi Mad,

1. The quorum server has to be a separate server, not one of the cluster nodes. If you have even a workstation running, you can use it as the quorum server.

2. Regarding the lock disk: if you have an HA disk array connected to all the nodes (we have an XP256), you can configure a single LUN visible to all the nodes and use it as the lock disk. If you don't have an HA box, then try a JBOD, a cheaper option, but I believe you can connect only two nodes to a JBOD. If anybody knows how to connect three nodes to a JBOD, please help us.

When you run the cmquerycl command while initially configuring the cluster, it will tell you which disks you can use as the lock disk.

3. In a three-node cluster, if two nodes are down, I think the cluster will halt. You can't run with only one node.

Happy clustering.

Srini.
Be on top.
Christopher McCray_1
Honored Contributor

Re: Multi nodes cluster

Hello,

1. Forget the lock disk; it is apparent you don't have the right kind of array(s). Only arrays with multiple port connections (fibre channel) can support a lock disk in three- or four-node clusters (with five or more nodes the lock disk is unsupported). Go with the quorum server, which is free:

http://www.software.hp.com/cgi-bin/swdepot_parser.cgi/cgi/displayProductInfo.pl?productNumber=B8467BA

2. Good luck and, by all means, have fun!!!

Chris
It wasn't me!!!!