Operating System - HP-UX

I want to cancel cluster mechanism (HPUX 11.11)

 
cbozlagan
Regular Advisor

I have two clustered HP-UX systems. I want to remove the cluster mechanism.

I need some quick guidance.

How can I do that?

Thanks
6 REPLIES 6
cbozlagan
Regular Advisor

Re: I want to cancel cluster mechanism (HPUX 11.11)

Our cluster software is MC/ServiceGuard.
Steven E. Protter
Exalted Contributor

Re: I want to cancel cluster mechanism (HPUX 11.11)

Shalom,

cmhaltnode

on each cluster node.

Then use swremove (or swremove -i for interactive mode) to remove the Serviceguard software.
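On HP-UX 11.11 that sequence might look like the following sketch (HP-UX-only commands; verify the installed bundle name with swlist before removing anything):

```shell
# Run on each cluster node in turn (HP-UX 11.11, as root).
cmhaltnode -f                  # halt cluster services on this node;
                               # -f also halts packages running here

# Identify the exact Serviceguard bundle that is installed:
swlist | grep -i serviceguard

# Remove it; -i opens the interactive swremove UI so you can
# pick the bundle swlist reported:
swremove -i
```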

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Matti_Kurkela
Honored Contributor

Re: I want to cancel cluster mechanism (HPUX 11.11)

"Cancel the cluster mechanism"? I'm not sure I understand what you mean.

-----------

To halt the cluster but allow it to be restarted in the future, "man cmhaltcl".

If you don't want the cluster restarted when the system boots, edit /etc/rc.config.d/cmcluster to set AUTOSTART_CMCLD=0.

The cluster can still be restarted manually with "cmruncl".

-----------

To permanently remove ServiceGuard from the systems, "man cmdeleteconf".

You might want to run "cmgetconf" to get up-to-date ASCII configuration files: they can be used to re-create the configuration if necessary.

Take backups if required.
Then halt the cluster (see above) and then run cmdeleteconf.
Then export the package volume groups (or use "vgchange -c n" to change them into non-cluster mode if you want to save them).

Do not try to mount the package volume groups on two systems simultaneously: that will certainly cause filesystem corruption and data loss.

After the cluster configuration is removed and the package volume groups are either removed or switched to non-cluster mode, you can use swremove to remove ServiceGuard.
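MK's removal sequence might be sketched like this (HP-UX-only commands; the cluster name `mycluster` and volume group `vgpkg01` are placeholder names, not values from this thread):

```shell
# 1. Save an up-to-date ASCII copy of the configuration first,
#    so the cluster can be re-created later if needed:
cmgetconf -c mycluster /etc/cmcluster/mycluster.ascii

# 2. Halt the whole cluster (all nodes):
cmhaltcl -f

# 3. Delete the cluster configuration:
cmdeleteconf -c mycluster

# 4. For each package volume group, either export it or clear its
#    cluster flag so it can be activated outside the cluster:
vgchange -c n /dev/vgpkg01

# 5. Only then remove the Serviceguard software:
swremove -i
```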

MK
cbozlagan
Regular Advisor

Re: I want to cancel cluster mechanism (HPUX 11.11)

Hi,
We decided to use one of the HP-UX systems for SAP NetWeaver XI and the other for SAP NetWeaver Enterprise Portal, so we must remove the cluster mechanism between the two systems.

Our cluster software is MC/ServiceGuard.

Thanks
cbozlagan
Regular Advisor

Re: I want to cancel cluster mechanism (HPUX 11.11)

Hi,
Thanks for your help

Now we have decided not to uninstall MC/ServiceGuard, but only to stop the cluster mechanism and make it non-restartable. We may need MC/ServiceGuard again later.

Here is a brief summary of what to do
--------------------------
1- cmhaltnode (on both systems)
2- edit /etc/rc.config.d/cmcluster to set AUTOSTART_CMCLD=0 (on both systems)

Is that everything I need to do?
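Step 2 can be done with a portable sed one-liner. A sketch, demonstrated here on a mock copy of the file; on the real systems point cfg at /etc/rc.config.d/cmcluster and run as root:

```shell
# Sketch: disable Serviceguard autostart by flipping AUTOSTART_CMCLD.
# Demonstrated on a mock copy; use /etc/rc.config.d/cmcluster for real.
cfg=$(mktemp)
printf 'AUTOSTART_CMCLD=1\n' > "$cfg"

# HP-UX sed has no -i option, so write to a temp file and move it back:
sed 's/^AUTOSTART_CMCLD=.*/AUTOSTART_CMCLD=0/' "$cfg" > "$cfg.new" &&
    mv "$cfg.new" "$cfg"

grep '^AUTOSTART_CMCLD' "$cfg"    # AUTOSTART_CMCLD=0
rm -f "$cfg"
```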

Thanks
cbozlagan
Regular Advisor

Re: I want to cancel cluster mechanism (HPUX 11.11)

Another question:

Both systems have an exported directory:
For example: /usr/sap/trans

We cannot delete the trans directory.

I have included documentation below about directories like this.

How can I make the trans directory removable?

Thanks


Document
-------------------------------
3.1.3 Configuring Network File System

If required, you configure Network File System (NFS), which is a system-wide Single Point-of-Failure
(SPOF), for a high-availability (HA) installation. For more information consult your HA partner.
We regard NFS as an extension to the operating system. The switchover product protects NFS and
makes it transparently available to the SAP system in switchover situations.
You need to decide:
- How to protect NFS
- Which switchover cluster nodes NFS is to run on
The NFS configuration might depend on your database system. The directories need to be available
for the SAP system before and after a switchover.
Procedure
1. Check the NFS directories, several of which need to be shared between all instances of a system.
These directories are:
- /sapmnt/<SID>/profile
Contains the different profiles to simplify maintenance
- /sapmnt/<SID>/global
Contains log files of batch jobs and central SysLog
- /usr/sap/trans
Contains data and log files for objects transported between different SAP Web AS systems (for example, development → integration). This transport directory ought to be accessible by at least one AS instance of each system, but preferably by all.
- /sapmnt/<SID>/exe
Contains the kernel executables. These executables ought to be accessible on all AS instances locally without having to use NFS. The best solution is to store them locally on all AS instance hosts.

2. Since you can protect NFS by a switchover product, it makes sense to install it on a cluster node.
The requirements of your database system might dictate how NFS has to be set up. If required, you
can configure the NFS server on the cluster node of the CI or the DB.
In both cases the NFS clients use the virtual IP address to mount NFS. If the second node is used as
an additional SAP instance during normal operation (for example, as a dialog instance), it also
needs to mount the directories listed above from the primary node.
When exporting the directories with their original names, you might encounter the problem of a "busy NFS mount" on the standby node. You can use the following workaround to solve this problem:
a) On the primary server, mount the disks containing the directories:
/export/usr/sap/trans
/export/sapmnt/<SID>
b) The primary server creates soft links to the directories with the original SAP names:
/usr/sap/trans → /export/usr/sap/trans
/sapmnt/<SID> → /export/sapmnt/<SID>
Alternatively the primary server can also mount the directories:
/export/usr/sap/trans → /usr/sap/trans
/export/sapmnt/<SID> → /sapmnt/<SID>
c) The primary server exports:
/export/usr/sap/trans
/export/sapmnt/<SID>
d) The standby NFS-mounts:
from virt.IP:/export/usr/sap/trans to /usr/sap/trans
from virt.IP:/export/sapmnt/<SID> to /sapmnt/<SID>
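Steps a) through d) above might be sketched as follows on HP-UX (a sketch only: SID, virt.IP, and the device paths are placeholders, not real values):

```shell
# a) Primary: mount the disks under /export (device paths and SID
#    are placeholders):
mount /dev/vgsap/lvtrans  /export/usr/sap/trans
mount /dev/vgsap/lvsapmnt /export/sapmnt/SID

# b) Primary: soft-link the original SAP names to the /export copies:
ln -s /export/usr/sap/trans /usr/sap/trans
ln -s /export/sapmnt/SID    /sapmnt/SID

# c) Primary: export the /export directories over NFS:
exportfs -i /export/usr/sap/trans
exportfs -i /export/sapmnt/SID

# d) Standby: NFS-mount them through the virtual IP:
mount virt.IP:/export/usr/sap/trans /usr/sap/trans
mount virt.IP:/export/sapmnt/SID    /sapmnt/SID
```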
If the primary node goes down and a switchover occurs, the following happens:
- These directories on the standby node become busy:
/usr/sap/trans
/sapmnt/<SID>
- The standby node mounts disks to:
/export/usr/sap/trans
/export/sapmnt/<SID>
- The standby node configures the virtual IP address virt.IP
- The standby node exports:
/export/usr/sap/trans
/export/sapmnt/<SID>
- These directories on the standby node are accessible again:
/usr/sap/trans
/sapmnt/<SID>