Quorum disk removal
03-10-2010 08:36 AM
I want to remove the quorum disk from one of our Linux boxes (Enterprise Linux Enterprise Linux Server release 5.1). I am attaching the current setup; please find the attachment. Could you please let me know what I have to do after stopping the package & cluster, or what the procedure will be?
Solved!
03-10-2010 12:21 PM
Re: Quorum disk removal
Note that RHEL 5.1 was not the most stable of releases, so actual functionality may vary. Upgrading to 5 update 4 might get you better results.
If there is no quorum disk, the cluster can still function with fencing; a quorum disk is not a requirement.
If there is no valid fencing mechanism in place, however, your cluster would go offline, and you might need to establish a new quorum disk before your application can run.
What is your fencing mechanism?
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
03-11-2010 07:37 AM
Re: Quorum disk removal
03-12-2010 06:15 AM
Solution
Of course you should remove/obscure any IP addresses and fencing logins/passwords before publishing it.
Stopping the cluster is not necessary.
On any node, while the cluster is running, copy the current cluster.conf to a temporary location:
cp /etc/cluster/cluster.conf /tmp/cluster.conf
Then edit the /tmp/cluster.conf file:
- increment the cluster config_version
- as you have a two-node cluster, set the cman two_node parameter to 1 (=true)
- set the cman expected_votes parameter to 1
- remove the entire quorumd section (the <quorumd ...> block and everything inside it)
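The three edits above can be sketched programmatically. This is a minimal illustration using Python's stdlib xml.etree, applied to a hypothetical two-node cluster.conf; the cluster name, version number, and quorumd attributes here are made-up examples, and a real cluster.conf also carries clusternodes, fencing, and rm sections that are left untouched by these edits.

```python
# Sketch of the three cluster.conf edits, on a hypothetical minimal config.
import xml.etree.ElementTree as ET

# Hypothetical minimal cluster.conf; attribute values are examples only.
CONF = """<cluster name="examplecluster" config_version="168">
  <cman expected_votes="3"/>
  <quorumd interval="1" tko="10" votes="1" label="qdisk">
    <heuristic program="ping -c1 10.0.0.1" score="1" interval="2"/>
  </quorumd>
</cluster>"""

root = ET.fromstring(CONF)

# 1. Increment config_version so the cluster accepts the update.
root.set("config_version", str(int(root.get("config_version")) + 1))

# 2. Two-node special mode: two_node="1" goes with expected_votes="1".
cman = root.find("cman")
cman.set("two_node", "1")
cman.set("expected_votes", "1")

# 3. Remove the entire quorumd block (its heuristic child goes with it).
for qd in root.findall("quorumd"):
    root.remove(qd)

new_conf = ET.tostring(root, encoding="unicode")
print(new_conf)
```

In practice you would, as described above, make these edits by hand in a copy of the file and feed that copy to "ccs_tool update".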
Then run:
ccs_tool update /tmp/cluster.conf
This will update the cluster configuration on all cluster nodes.
After the update is successful, your cluster will no longer be using the quorum disk.
Then you can run
service qdiskd stop
chkconfig qdiskd off
on each node, to shutdown the now-unused qdiskd service.
MK
03-12-2010 07:57 AM
Re: Quorum disk removal
1. We have an outage window.
2. It's a two-node cluster (tneld53n.adr.deep.com & tneld52n.adr.deep.com).
3. I have attached the output of clustat & the cluster.conf file of one server.
4. Please correct me if I am wrong.
First I will make the changes on 52n. After making the changes to the cluster.conf file I will reboot it, and if it comes up, I will copy the cluster.conf file to the other node (53n).
Currently 2 services are running on 52n:
service:ebrdo41d tneld52n.adr.deep.com started
service:ecrde0q tneld52n.adr.deep.com started
So I will stop the services:
#clusvcadm -s ebrdo41d
#clusvcadm -s ecrde0q
As we have the outage window, I want to shut down the cluster:
# service rgmanager stop
# service clvmd stop
# service cman stop
# service qdiskd stop
Then I will take a backup of /etc/cluster/cluster.conf.
As per your advice I will make the following changes to cluster.conf:
#vi /etc/cluster/cluster.conf [as I have already taken the backup]
- increment the cluster config_version
[root@tneld52n ~]# grep config_version /etc/cluster/cluster.conf
[root@tneld52n ~]#
So I will make it 169.
- as you have a two-node cluster, set the cman two_node parameter to 1 (=true)
- set the cman expected_votes parameter to 1
[root@tneld52n ~]# grep two /etc/cluster/cluster.conf
[root@tneld52n ~]#
So I will set two_node="1" and expected_votes="1" on the cman line.
- remove the entire quorumd section
So I will delete the following lines:
[root@tneld52n ~]# grep -i quo /etc/cluster/cluster.conf
[root@tneld52n ~]#
#ccs_tool update /etc/cluster/cluster.conf
As I will have already stopped the cluster services, I will just reboot the box, and if it comes up & the cluster is running, then I will copy /etc/cluster/cluster.conf to the other node.
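Before handing the edited file to "ccs_tool update" (or rebooting with it in place), it can be worth sanity-checking that the edits landed as intended. A small sketch of such a pre-flight check, using only the Python stdlib; the sample config string below is hypothetical, and a real cluster.conf would contain many more sections that this check deliberately ignores:

```python
# Pre-flight check for an edited cluster.conf, before "ccs_tool update".
import xml.etree.ElementTree as ET

def check_no_qdisk_config(text):
    """Return a list of problems found in an edited cluster.conf string."""
    problems = []
    root = ET.fromstring(text)  # raises ParseError if the XML is malformed
    cman = root.find("cman")
    if cman is None:
        problems.append("no <cman> element")
    else:
        if cman.get("two_node") != "1":
            problems.append('cman two_node should be "1" on a 2-node cluster')
        if cman.get("expected_votes") != "1":
            problems.append('cman expected_votes should be "1"')
    if root.find("quorumd") is not None:
        problems.append("a <quorumd> block is still present")
    return problems

# Hypothetical edited config; an empty result means the edits look sane.
edited = """<cluster name="examplecluster" config_version="169">
  <cman two_node="1" expected_votes="1"/>
</cluster>"""
print(check_no_qdisk_config(edited))
```

This only checks the quorum-disk-related invariants discussed in this thread; it is no substitute for the rollback behavior ccs_tool itself provides.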
03-12-2010 08:00 AM
Re: Quorum disk removal
Please find the attached cluster.conf file in the above post; in this post I am attaching the clustat output.
03-12-2010 10:00 AM
Re: Quorum disk removal
After the edits, the cman line should look like this:
<cman two_node="1" expected_votes="1"/>
When you're removing the quorumd block from the configuration, you should remove a total of 3 lines, i.e. the line with the keyword "heuristic" should be removed too:
<quorumd ...>
<heuristic .../>
</quorumd>
Before rebooting the nodes, remember to run "chkconfig qdiskd off" on each node, so that the now-useless qdiskd will not be started again when the node starts up.
MK
03-12-2010 11:34 AM
Re: Quorum disk removal
#ccs_tool update /home/XXX/qdisk/cluster.conf
Could you please verify the two files? [/home/xxxx/qdisk/cluster.conf will be the file which I will put in place during the outage window]
[root@tneld52n qdisk]# diff /etc/cluster/cluster.conf /home/xxxx/qdisk/cluster.conf
2c2
<
---
>
24c24
<
---
>
146,148d145
<
<
<
[root@tneld52n qdisk]#
03-13-2010 06:36 AM
Re: Quorum disk removal
If you run
#ccs_tool update /home/XXX/qdisk/cluster.conf
it will implement the change immediately - in a single, synchronized, cluster-wide transaction. The new file is copied to all the nodes and the appropriate daemons notified of the change. If one of the nodes is unable to perform the configuration change, the change is automatically rolled back in all nodes.
The "ccs_tool update" command is not a test - it tells the cluster to actually make the configuration change.
If the heartbeat communication between the nodes works OK at the time of the change, I don't see why the removal of the quorum disk would interfere with the cluster services in any way.
In other words:
If you run the "ccs_tool update" command now and it's successful, the actual configuration change will be done without an outage, and you can use the full time of the outage for verifying that the cluster is rebootable in its new configuration, and other planned activities (if any).
If the "ccs_tool update" fails and does what it is documented to do at failure, no configuration change should happen and your cluster should keep running with its old configuration.
If there is a catastrophic failure for some reason, your boss will be unhappy because you did not wait for the scheduled outage...
When I was on a RedHat "Clustering and Storage" course, one of the standard exercises was to add a quorum disk to a running cluster. It worked. As I recall, the removal of the quorum disk was a simple reversal of the addition procedure.
In fact, the RedHat instructor told us that the use of the ccs_tool (or other configuration utilities) was actually preferable over editing the /etc/cluster/cluster.conf files manually. But the "ccs_tool update" only works when the cluster is running. The ccs_tool has other options which can be used to e.g. remove a node from the cluster.
MK