06-06-2007 07:19 PM
I want to cancel cluster mechanism (HPUX 11.11)
I have two HP-UX systems in a cluster, and I want to remove the cluster mechanism.
I need some quick guidance.
How can I do that?
Thanks
06-06-2007 07:45 PM
Re: I want to cancel cluster mechanism (HPUX 11.11)
06-06-2007 08:01 PM
Re: I want to cancel cluster mechanism (HPUX 11.11)
Run cmhaltnode on each cluster node.
Then use swremove (or swremove -i for the interactive interface) to remove the Serviceguard software.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
06-06-2007 08:10 PM
Re: I want to cancel cluster mechanism (HPUX 11.11)
-----------
To halt the cluster but allow it to be restarted in the future, "man cmhaltcl".
If you don't want the cluster restarted when the system boots, edit /etc/rc.config.d/cmcluster to set AUTOSTART_CMCLD=0.
The cluster can still be restarted manually with "cmruncl".
-----------
To permanently remove ServiceGuard from the systems, "man cmdeleteconf".
You might want to run "cmgetconf" to get up-to-date ASCII configuration files: they can be used to re-create the configuration if necessary.
Take backups if required.
Then halt the cluster (see above) and then run cmdeleteconf.
Then export the package volume groups (or use "vgchange -c n" to change them into non-cluster mode if you want to save them).
Do not try to mount the package volume groups on two systems simultaneously: that will certainly cause filesystem corruption and data loss.
After the cluster configuration is removed and the package volume groups are either removed or switched to non-cluster mode, you can use swremove to remove ServiceGuard.
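The sequence above might be sketched like this on one of the nodes. The cluster name, volume group path, and grep patterns are placeholders assumed for illustration, and the Serviceguard commands only exist on the cluster nodes, so treat this as an outline rather than a tested script:

```shell
# Save an up-to-date ASCII copy of the cluster configuration first
# ("clustername" and the output path are placeholders):
cmgetconf -c clustername /tmp/cluster.ascii

# Halt the whole cluster, then delete the binary configuration:
cmhaltcl -f
cmdeleteconf -c clustername

# Switch the package volume groups out of cluster mode
# (or vgexport them instead if they are no longer needed):
vgchange -c n /dev/vgpkg

# Find the installed product name, then remove it with swremove:
swlist -l product | grep -i -e serviceguard -e masterguard
# swremove <product>
```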
MK
06-06-2007 08:14 PM
Re: I want to cancel cluster mechanism (HPUX 11.11)
We decided to use one of the HP-UX systems for SAP NetWeaver XI and the other for SAP NetWeaver Enterprise Portal, so we must remove the cluster mechanism between the two systems.
Our cluster software is MC/MasterGuard.
Thanks
06-06-2007 08:33 PM
Re: I want to cancel cluster mechanism (HPUX 11.11)
Thanks for your help
We have now decided not to uninstall MC/MasterGuard, but only to stop the cluster and prevent it from restarting, since we may need MC/MasterGuard again later.
Let me summarize what to do:
--------------------------
1- cmhaltnode (on both systems)
2- edit /etc/rc.config.d/cmcluster to set AUTOSTART_CMCLD=0 (on both systems)
Is this all that needs to be done?
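The two steps above can be scripted. In this sketch the cmhaltnode call is commented out because it only exists on the cluster nodes, and CONF points at a temporary copy of the file for illustration; on the real systems the file is /etc/rc.config.d/cmcluster:

```shell
# Step 1: halt the cluster daemon on each node (Serviceguard-only
# command, shown commented out because it cannot run outside HP-UX):
# cmhaltnode

# Step 2: disable autostart so the cluster does not come back at boot.
# CONF is a temporary copy for illustration; on the real systems the
# file is /etc/rc.config.d/cmcluster.
CONF="$(mktemp)"
echo 'AUTOSTART_CMCLD=1' > "$CONF"
sed 's/^AUTOSTART_CMCLD=.*/AUTOSTART_CMCLD=0/' "$CONF" > "$CONF.new"
mv "$CONF.new" "$CONF"
grep '^AUTOSTART_CMCLD' "$CONF"   # prints AUTOSTART_CMCLD=0
```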
Thanks
06-06-2007 09:01 PM
Re: I want to cancel cluster mechanism (HPUX 11.11)
Both systems have exported directories, for example /usr/sap/trans.
We cannot delete the trans directory.
I quote the documentation about these directories below.
How can I make the trans directory removable?
Thanks
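For what it's worth, an NFS-exported directory usually has to be unexported before it can be unmounted or removed. A hedged sketch using the classic HP-UX 11.11 NFS tooling (this is not from the thread, so verify it against your setup):

```shell
# Remove the /usr/sap/trans line from /etc/exports, then:
exportfs -u /usr/sap/trans    # stop exporting the directory
fuser -cu /usr/sap/trans      # list processes still using it
# Once no process holds it open, it can be unmounted or removed.
```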
Document
-------------------------------
3.1.3 Configuring Network File System
If required, you configure Network File System (NFS), which is a system-wide Single Point-of-Failure
(SPOF), for a high-availability (HA) installation. For more information consult your HA partner.
We regard NFS as an extension to the operating system. The switchover product protects NFS and
makes it transparently available to the SAP system in switchover situations.
You need to decide:
n How to protect NFS
n Which switchover cluster nodes NFS is to run on
The NFS configuration might depend on your database system. The directories need to be available
for the SAP system before and after a switchover.
Procedure
1. Check the NFS directories, several of which need to be shared between all instances of a system.
These directories are:
n /sapmnt/
Contains the different profiles to simplify maintenance
n /sapmnt/
Contains log files of batch jobs and central SysLog
n /usr/sap/trans
Contains data and log files for objects transported between different SAP Web AS systems (for
example, development -> integration). This transport directory ought to be accessible by at least
one AS instance of each system, but preferably by all.
n /sapmnt/
Contains the kernel executables. These executables ought to be accessible on all AS instances
locally without having to use NFS. The best solution is to store them locally on all AS instance
hosts.
2. Since you can protect NFS by a switchover product, it makes sense to install it on a cluster node.
The requirements of your database system might dictate how NFS has to be set up. If required, you
can configure the NFS server on the cluster node of the CI or the DB.
In both cases the NFS clients use the virtual IP address to mount NFS. If the second node is used as
an additional SAP instance during normal operation (for example, as a dialog instance), it also
needs to mount the directories listed above from the primary node.
When exporting the directories with their original names, you might encounter the problem of
a "busy NFS mount" on the standby node. You can use the following workaround to solve this
problem:
a) On the primary server, mount the disks containing the directories:
/export/usr/sap/trans
/export/sapmnt/
b) The primary server creates soft links to the directories with the original SAP names:
/usr/sap/trans -> /export/usr/sap/trans
/sapmnt/
Alternatively the primary server can also mount the directories:
/export/usr/sap/trans -> /usr/sap/trans
/export/sapmnt/SID -> /sapmnt/
c) The primary server exports:
/export/usr/sap/trans
/export/sapmnt/
d) The standby NFS mounts:
from virt.IP:/export/usr/sap/trans to /usr/sap/trans
from virt.IP:/export/sapmnt/
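The workaround in steps a) to d) can be sketched as shell commands. Here ROOT is a temporary sandbox prefix assumed for illustration; on the cluster these paths live at /, the mounts are real disk and NFS mounts, and virt.IP is the package's virtual address:

```shell
# ROOT is a sandbox prefix for illustration; on a real cluster these
# paths live at / and the mounts are handled by the switchover package.
ROOT="$(mktemp -d)"

# a) The primary mounts the shared disks under /export
#    (simulated here by creating the directory):
mkdir -p "$ROOT/export/usr/sap/trans"

# b) The primary links the original SAP name to the exported path:
mkdir -p "$ROOT/usr/sap"
ln -s "$ROOT/export/usr/sap/trans" "$ROOT/usr/sap/trans"

# c) The primary exports /export/usr/sap/trans (an /etc/exports entry
#    plus exportfs on HP-UX 11.11; not run in this sandbox).

# d) The standby NFS-mounts virt.IP:/export/usr/sap/trans on
#    /usr/sap/trans, so the SAP name never points at a busy local mount.

# Writing through the SAP-visible name lands in the exported tree:
echo test > "$ROOT/usr/sap/trans/demo"
cat "$ROOT/export/usr/sap/trans/demo"   # prints: test
```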
If the primary node goes down and a switchover occurs, the following happens:
n These directories on the standby node become busy:
/usr/sap/trans
/sapmnt/
n The standby node mounts disks to:
/export/usr/sap/trans
/export/sapmnt/
n The standby node configures the virtual IP address virt.IP
n The standby node exports:
/export/usr/sap/trans
/export/sapmnt/
n These directories on the standby node are accessible again:
/usr/sap/trans
/sapmnt/