Operating System - HP-UX > Re: Multi nodes cluster
12-19-2002 08:58 PM
Re: Multi nodes cluster
Cluster I
=========
Node 1 with package A (running critical and heavy application)
Node 2 with package B (running less-critical and light application)
Shared HASS as cluster lock disk.
Shared Model 10 disk array as data disk.
Lan and serial heartbeat configured.
Cluster II
==========
Node 3 with package C (running critical and heavy application)
Node 4 with package D (running less-critical and light application)
lock disk & heartbeat are the same as Cluster I.
Cluster III & IV are similar to the above.
As you can see, in every cluster there is a less-critical, light package running. Therefore, I would like to combine several critical packages to form a 3- or 4-node cluster; fewer servers would then be required after the cluster restructure.
As I mentioned previously, my concern is on the cluster lock disk and heartbeat configuration.
Thanks
12-20-2002 07:44 AM
Re: Multi nodes cluster
John and anyone else may correct me on this, but I am certain that your current config isn't expandable in the area of your array hardware.
The Model 10 is only capable of connecting to up to two hosts, so there goes your lock disk. However, the lock disk is NOT REQUIRED, but RECOMMENDED, so you can look into the quorum server, or omit the lock disk altogether.
If your data array is the same, then you will not be able to expand this either, because the shared disks MUST be visible to all nodes in your cluster. (If you can't hook up the arrays directly to all nodes, then you're out.)
The serial heartbeat is only supported in 2-node configurations, so you would have to move to a network heartbeat configuration; a dedicated heartbeat is likewise only supported with 2 nodes.
I apologize for not asking sooner what your config was, as this is probably the most important aspect of a cluster and will limit you accordingly. Your issue is upgrading your disk components and the $$$$ involved (HA is expensive, isn't it??).
Good luck and have a merry Christmas.
Chris
12-21-2002 05:52 AM
Re: Multi nodes cluster
Chris
As I mentioned in the new cluster configuration, the first 3 nodes run 3 primary packages respectively, and the 4th node is the standby node for the first 3. Is it possible that only the 4th node views all shared disks, but not the first 3 nodes? As below:
|------|     |------|     |------|
| Node |     | Node |     | Node |
|  A   |     |  B   |     |  C   |
|------|     |------|     |------|
   |            |            |
 ======       ======       ======
 |disk|       |disk|       |disk|
 | A  |       | B  |       | C  |
 ======       ======       ======
   |            |            |
   |         |------|        |
   |_________| Node |________|
             |  D   |
             |------|
That means nodes A, B and C are not required to view all shared disks. If any of node A, B or C fails, it will trigger a package failover to node D. Then the Model 10 disk arrays could be re-used. Does MC/SG support this type of configuration?
Furthermore, is there any type of SCSI disk that supports attachment to multiple nodes?
I studied the "Managing MC/ServiceGuard" guide and found that configuring a cluster is not an easy task. Studying the guide alone is not enough, and many things in it are not clearly defined.
Finally, I would like to say thanks to all of you for your help.
12-21-2002 05:58 AM
Re: Multi nodes cluster
The figure is attached here.
12-23-2002 06:25 AM
Re: Multi nodes cluster
I see now (I think) what you are talking about after referencing the guide, and it does seem possible.
The problem is that I have never done this before. When I went to the training, all that was ever discussed with respect to the shared disks was that they need to be visible to all nodes, although certain pages in chapter 2 suggest otherwise.
It seems possible; however, I don't have the resources (and I suspect you don't either) to test this. I apologize for not being more help, but I also don't want to give you fatal advice.
I'll look around, but for now, good luck.
Chris
12-23-2002 04:18 PM
Re: Multi nodes cluster
Your figure is very clear, and this setup is very much possible.
1. First configure the cluster with all 4 nodes without a lock VG. You can use a quorum server if you have one. I installed the quorum server and it is really very easy to set up. You just have to edit the file /etc/cmcluster/qs_authfile and add the names or IP addresses of the nodes allowed to use this server as their quorum server.
2. Configure VGs on nodes A, B and C, and vgimport them on node D.
3. Configure the packages to fail over to node D.
4. If possible, use multiple heartbeat links in the cluster.
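A minimal command sketch of the four steps above, assuming hypothetical node names nodeA through nodeD and a volume group vgA (none of these names are from the thread); the LVM and ServiceGuard commands are HP-UX-only, so they appear as comments for reference:

```shell
# Step 1: on the quorum server host, authorize the cluster nodes.
# Per the post the real path is /etc/cmcluster/qs_authfile; /tmp is
# used here only so the sketch can run anywhere.
cat > /tmp/qs_authfile <<'EOF'
nodeA
nodeB
nodeC
nodeD
EOF

# Step 2 (HP-UX only, shown as comments):
#   nodeA# vgexport -p -s -m /tmp/vgA.map vgA   # write map file, keep VG
#   nodeD# vgimport -s -m /tmp/vgA.map vgA      # import shared VG on standby
# Step 3: in each package ASCII file, list node D last so it is the
#   adoptive (failover) node, e.g.:
#     NODE_NAME nodeA
#     NODE_NAME nodeD
# Step 4: declare more than one HEARTBEAT_IP subnet in the cluster
#   ASCII file so the heartbeat has redundant LAN paths.
```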
If you need further help keep posting.
Happy clustering.
Srini
12-27-2002 02:02 AM
Re: Multi nodes cluster
After several rounds of questions and answers, I have a clearer view of multi-node clusters.
I would like to ask 2 more questions:
1. Can the quorum server be one of the cluster nodes? Or must it be outside the cluster?
2. As I understand it, a cluster can only be started when 50% or more of the cluster nodes are alive and can communicate. Take a 3-node cluster as an example: if one of the cluster nodes fails, the other 2 nodes will take control and form a 2-node cluster. The question is: after this 2-node cluster forms successfully, if one more node fails (assume a lock disk is configured), will a single-node cluster take control (50% of the 2-node cluster)? Or will the whole cluster fail and be unable to restart (only one node survives, which is only 33.33% of the original 3-node cluster)?
Thanks
12-27-2002 05:26 AM
Re: Multi nodes cluster
We had a great Christmas!
1. From what I remember reading about the quorum server, it has to be on a system that is not one of the nodes in the cluster, which kind of makes sense. The quorum server needs to be able to break the tie during an election if the cluster is reforming, and it wouldn't work if it was running on a node that failed.
2. When the cluster starts, it expects to have communication with all the nodes, and it won't start until communication succeeds to all of them. You can force the cluster up on just one node; I have done that before when we were working on systems and I wanted to get things started. I think 'cmruncl -n nodename' will start the cluster on a node without having all the other nodes up.
JP
12-28-2002 11:57 PM
Re: Multi nodes cluster
Does anyone have experience configuring a lock disk in a multi-node cluster?
1. Which types of disk can be configured as the cluster lock disk in a multi-node cluster?
2. How can the lock disk be configured / connected within the cluster so that all nodes can view it?
Thanks
12-29-2002 03:47 PM
Re: Multi nodes cluster
1. Quorum server has to be a separate server other than cluster nodes. If you have even any workstations running
you can use it as quorum server.
2. Regarding Lock disk.
If you have a HA Disk Array, We have XP256, which is connected to all
the nodes, you can configure one single LUN on all the nodes and use it as lock disk.
If you dont have HA box, then try JBOD, a cheaper option. But i feel, you can
connect only 2 nodes to a JBOD.
If anybody knows how to connect 3 nodes to JBOD, pls help us.
When you run the cmquerycl command, when you initially
configure the cluster, this
will give you which disk you
can use as Lock disk.
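For reference, a sketch of how the lock disk shows up in the cluster ASCII file that cmquerycl generates; the cluster name, lock VG, IP address and device path here are placeholders, not values from the thread:

```
# e.g.  cmquerycl -v -C /etc/cmcluster/cluster.ascii \
#           -n nodeA -n nodeB -n nodeC -n nodeD
CLUSTER_NAME             cluster1
FIRST_CLUSTER_LOCK_VG    /dev/vglock

NODE_NAME                nodeA
  NETWORK_INTERFACE      lan0
    HEARTBEAT_IP         192.168.1.1
  FIRST_CLUSTER_LOCK_PV  /dev/dsk/c1t2d0
# ...one NODE_NAME block per node, each naming its own device path
# (FIRST_CLUSTER_LOCK_PV) to the same physical lock disk.
```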
3. In a 3-node cluster, if 2 nodes are down, I think the cluster will halt. You can't run with only one node.
Happy clustering.
Srini.