qdisk usage in Red Hat 2-, 3- and 4-node clusters
03-28-2010 06:11 AM
Hi,
I am setting up 2-node, 3-node and 4-node clusters on RHEL 5.3 with Cluster Suite. I am using a quorum disk, and I would appreciate a good explanation of the following parameters and how to define them, with examples for a 3-node cluster:
1. <quorumd> - min_score and votes
2. <heuristic> - score
3. cman expected_votes
I have read a lot of links and managed to build the clusters, but I am still not clear on the basics. Thanks in advance for your support.
3 REPLIES
Re: qdisk usage in Red Hat 2-, 3- and 4-node clusters
03-29-2010 06:25 AM
Shalom,
3. cman expected_votes
This lets you weight certain services to run on a particular node. Normally you weight all nodes equally, but there are valid reasons not to do this, such as making services default to the more powerful servers.
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Re: qdisk usage in Red Hat 2-, 3- and 4-node clusters
03-29-2010 06:49 AM
SEP:
"cman expected_votes" is a cluster-wide attribute, not a node or service attribute. You're thinking about the "priority" attribute in the failoverdomainnode tag.
AVV:
Please read the qdisk man page (man 5 qdisk) on your systems. It has a detailed explanation of quorumd parameters and heuristics, with example configurations for 2-node and 3-node clusters.
The quorumd heuristics score is calculated locally by each node. It is the sum of all successful heuristics' scores on that node. If the score is equal to or greater than min_score, the node is considered "alive". If the node's score is less than min_score, the node is considered faulty and will remove itself from the cluster (i.e. qdiskd will reboot the node).
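For a 3-node cluster, the quorumd section could look something like this (the label, ping target and timing values are only examples; adjust them for your environment):

<quorumd interval="1" tko="10" votes="2" min_score="1" label="myqdisk">
  <heuristic program="ping -c1 -w1 192.168.0.254" score="1" interval="2" tko="3"/>
</quorumd>

Here there is a single heuristic worth 1 point (pinging the gateway), so min_score="1" means a node must be able to ping the gateway to be considered "alive".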
The qdiskd processes on all "alive" nodes will then communicate with each other using the quorum disk. The node with the lowest node ID (among the "alive" nodes) will be chosen as the master qdiskd. This master node will then grant extra vote(s) for the CMAN cluster quorum calculation. If the node with the master qdiskd dies, a new master is elected.
If one of the nodes does not update its status block on the quorum disk at the configured rate, the master qdiskd can throw it out of the cluster. When this happens, the master qdiskd will both send the node an eviction message through the quorum disk ("Node X, your qdisk updates are late - reboot yourself") and fence it out of the cluster.
CMAN cluster quorum is determined by votes. Each running node will grant (at least) 1 vote for the cluster quorum, and the master qdiskd of the quorum disk system can grant extra votes. The "cman expected_votes" parameter is the number of votes expected when everything is OK.
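With the example above (3 nodes with 1 vote each, plus votes="2" from the qdisk), you would set:

<cman expected_votes="5"/>

A common convention is to give the qdisk (number of nodes - 1) votes, so that a single surviving node plus the qdisk can still be quorate.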
If the network connections between the cluster nodes fail, the cluster can become partitioned into two or more parts. In these situations, only one of the partitions may continue running: the others must stop, because they won't be aware of what the rest of the cluster is doing and might violate LVM or GFS locks, causing data corruption.
If a cluster becomes partitioned, only the partition that has _more than 50%_ of expected_votes will continue. The nodes in the smaller partition will be "inquorate" (=without quorum) and can only stop all cluster services and wait until the quorate part of the cluster fences them out.
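To make that concrete with the 3-node example above: expected_votes is 5, so a partition needs more than 2.5 votes, i.e. at least 3. One node that still holds the qdisk has 1 + 2 = 3 votes and stays quorate; the other two nodes together have only 2 votes, become inquorate and must stop their services.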
MK
"cman expected_votes" is a cluster-wide attribute, not a node or service attribute. You're thinking about the "priority" attribute in the failoverdomainnode tag.
AVV:
Please read the qdisk man page (man 5 qdisk) on your systems. It has a detailed explanation of quorumd parameters and heuristics, with example configurations for 2-node and 3-node clusters.
The quorumd heuristics score is calculated locally by each node. It's the sum total of all successful heuristics' scores on that node. If the score is equal or greater than min_score, the node is considered to be "alive". If the node's score is less than min_score, that means the node is faulty and will remove itself from the cluster (= qdiskd will reboot the node).
The qdiskd processes on all "alive" nodes will then communicate with each other using the quorum disk. The node with the lowest node ID (among the "alive" nodes) will be chosen as the master qdiskd. This master node will then grant extra vote(s) for the CMAN cluster quorum calculation. If the node with the master qdiskd dies, a new master is elected.
If one of the nodes does not update its status block on the quorum disk at the configured rate, the master qdiskd can throw it out of the cluster. When this happens, the master qdiskd will both send the node an eviction message through the quorum disk ("Node X, your qdisk updates are late - reboot yourself") and fence it out of the cluster.
CMAN cluster quorum is determined by votes. Each running node will grant (at least) 1 vote for the cluster quorum, and the master qdiskd of the quorum disk system can grant extra votes. The "cman expected_votes" parameter is the number of votes expected when everything is OK.
If the network connections between the cluster nodes fail, the cluster can become partitioned in two or more parts. In these situations, only one of the partitions must continue running: the others must stop because they won't be aware of what the rest of the cluster is doing, and might violate LVM or GFS locks, causing data corruption.
If a cluster becomes partitioned, only the partition that has _more than 50%_ of expected_votes will continue. The nodes in the smaller partition will be "inquorate" (=without quorum) and can only stop all cluster services and wait until the quorate part of the cluster fences them out.
MK
MK
Re: qdisk usage in Red Hat 2-, 3- and 4-node clusters
03-31-2010 08:15 AM
Thanks for the info.