10-29-2010 09:52 AM
cluster configuration with cluster suite
Hi all,
I have a Red Hat Cluster Suite setup on Red Hat Enterprise Linux 5.5, configured with the Red Hat cluster management tools. When node1 restarts, the shared disks are mounted automatically, but when node2 restarts, it cannot mount the shared disks automatically.
I need some assistance, please.
1 REPLY
10-30-2010 02:57 AM
Re: cluster configuration with cluster suite
The cluster configuration you've attached contains no information at all about your shared disks, so my answer is going to be mostly guesswork. If I knew more about your disks, I might be able to give you a better answer.
Do you get any error messages when booting node2? What do they say? Please try to copy the exact message(s).
Are you using GFS or GFS2?
-------------
If you are, check at least these things:
- Compare the /etc/fstab files on both nodes. Is the line for the GFS filesystem configured the same way on both nodes?
- If your shared disks use GFS or GFS2, you should define them as a clusterfs resource in your cluster configuration (see the sketch after this list). This tells the cluster suite that your service requires the cluster filesystem, so the cluster suite won't try to move the service to a node that has problems with that filesystem.
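As a rough sketch only (the resource name, device path, and mount point below are placeholders, not taken from your configuration), a clusterfs resource and a service referencing it in /etc/cluster/cluster.conf might look something like this:

    <rm>
      <resources>
        <!-- placeholder names and paths: adjust to your environment -->
        <clusterfs name="sharedgfs" fstype="gfs2"
                   device="/dev/mapper/sharedvg-sharedlv"
                   mountpoint="/shared" force_unmount="0"/>
      </resources>
      <service autostart="1" name="myservice">
        <clusterfs ref="sharedgfs"/>
      </service>
    </rm>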
-------------
If you aren't using GFS or GFS2 (or some other cluster filesystem) on your shared disks, then you cannot mount them on two nodes simultaneously without causing data corruption.
If you're using LVM and installed the cluster using Conga, the cluster installation has switched your LVM to cluster mode: all volume groups created after installing the cluster will be configured in cluster mode by default. In cluster mode, LVM will not allow you to activate clustered volume groups unless the clvmd daemon is running and connected to the clvmd daemons on the other running nodes. It should also block any attempt to mount a filesystem on a clustered VG that is already mounted on another node, unless a GFS or GFS2 filesystem is used.
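If you want to verify this, these standard RHEL 5 commands should show whether your volume groups are clustered and whether clvmd is running (the output details will of course depend on your setup):

    # a "c" in the sixth attribute position marks a clustered VG
    vgs -o vg_name,vg_attr

    # clustered locking means locking_type = 3 in /etc/lvm/lvm.conf
    grep locking_type /etc/lvm/lvm.conf

    # clvmd must be running for clustered VGs to activate
    service clvmd status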
In this case, you should define the shared disks as filesystem (fs) resource(s) in your cluster configuration and add them to your service definition. You should remove the shared filesystems from /etc/fstab on both nodes... or at the very least, configure them with the "noauto" mount option and fsck pass number "0" so the boot procedure won't try to check or mount them. An example of both follows below.
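For illustration (the device, mount point, and filesystem type are placeholders), the /etc/fstab line and the matching fs resource in cluster.conf could look roughly like this:

    # /etc/fstab on both nodes: "noauto" plus dump/pass fields "0 0"
    # keep the boot procedure from checking or mounting the filesystem
    /dev/mapper/sharedvg-datalv  /data  ext3  noauto  0 0

    <!-- fs resource inside the <resources> section of cluster.conf -->
    <fs name="datafs" fstype="ext3"
        device="/dev/mapper/sharedvg-datalv"
        mountpoint="/data" force_unmount="1"/>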
After this change, checking & mounting the shared disks will be the responsibility of the cluster suite. The shared disks will be mounted only on the node that is actually running the service.
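Once the service owns the filesystems, you can check which node is running it with clustat, and relocate it with clusvcadm, for example (the service name is a placeholder):

    clustat                          # show cluster and service status
    clusvcadm -r myservice -m node2  # relocate the service to node2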
MK