Cluster node names change
03-13-2011 07:13 AM
I was asked to change the hostnames of the cluster nodes and also the cluster name. I used the procedure below.
OS: HP-UX 11i v3 HA-OE
# swlist | grep -i serviceguard
B5140BA A.11.31.06 Serviceguard NFS Toolkit
T8687DB A.03.00 HP Serviceguard Cluster File System for RAC with HAOE
PHSS_41535 1.0 Serviceguard A.11.19.00
SGManagerPI B.02.00 HP Serviceguard Manager
SGWBEMProviders A.03.00.00 HP Serviceguard WBEM Providers SD Product
ServiceGuard A.11.19.00 Serviceguard SD Product
1) Stopped the cluster.
2) Disabled cluster autostart in /etc/rc.config.d/cmcluster.
3) Renamed both hosts with set_parms hostname and rebooted.
4) Edited /etc/hosts, /etc/hosts.equiv, /.rhosts, and /etc/cmcluster/cmclnodelist.
5) Edited the /etc/cmcluster/config.ascii file with the new cluster name and node names.
6) Used cmcheckconf to check the config file. It returned the error below.
"cmcheckconf: Unable to retrieve the existing cluster name"
7) Removed the ASCII file, renamed the cmclconfig file, and then created config.ascii again using cmquerycl.
8) I am not able to see any multi-node package after this. How can I retrieve them?
# cmviewcl
CLUSTER STATUS
hq-nqatch up
NODE STATUS STATE
hq-nqatch1 up running
hq-nqatch2 up running
MULTI_NODE_PACKAGES
PACKAGE STATUS STATE AUTO_RUN SYSTEM
SG-CFS-pkg changing changing enabled yes
SG-CFS-DG-1 down halted enabled no
SG-CFS-MP-1 down halted enabled no
SG-CFS-DG-2 down halted enabled no
SG-CFS-MP-2 down halted enabled no
SG-CFS-DG-3 down halted enabled no
SG-CFS-MP-3 down halted enabled no
SG-CFS-DG-4 down halted enabled no
SG-CFS-MP-4 down halted enabled no
SG-CFS-DG-5 down halted enabled no
SG-CFS-MP-5 down halted enabled no
After re-configuration
# cmviewcl
CLUSTER STATUS
hq-qtch down
NODE STATUS STATE
hq-qtch1 down unknown
hq-qtch2 down unknown
Please help
Solved! Go to Solution.
03-13-2011 08:51 AM
Solution
In other words, you completely removed the old cluster configuration. You could have had the same effect with the cmdeleteconf command.
Now you must use cmapplyconf to re-apply all package ASCII configuration files too. But before applying the package configuration files, read them to see if they include any node names; change all node names in the package ASCII files to match the new configuration.
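For example, a minimal check could look like this (a sketch: the *.conf paths under /etc/cmcluster are an assumption; the old and new node names are from this thread):
# cd /etc/cmcluster
# grep -i node_name *.conf */*.conf     # find package files still naming hq-nqatch1/hq-nqatch2
(edit any such entries to hq-qtch1 / hq-qtch2)
# cmcheckconf -C config.ascii -P SG-CFS-pkg.conf   # verify cluster and package files together
# cmapplyconf -C config.ascii -P SG-CFS-pkg.conf   # re-apply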
MK
03-13-2011 09:39 AM
Re: Cluster node names change
I understand that it is an Oracle active/active cluster with CFS. Does a #cmapplyconf -C
03-13-2011 10:45 AM
Re: Cluster node names change
Your steps seem somewhat complicated. These simpler steps may make it clearer (a command sketch follows the list):
1. Halt the cluster.
2. Change the node name on all nodes.
3. Delete the existing cluster configuration file.
4. Create a new cluster configuration template.
5. Check and apply the configuration file.
6. Start the nodes and the cluster.
7. Modify the application configuration files to reflect the changed node names.
8. Restart the packages for the modification to take effect.
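In Serviceguard commands, the list above maps roughly onto this sequence (a sketch: the config path is the usual default, and the node names are the new ones from this thread):
# cmhaltcl -f                                  # 1. halt the cluster
(change the hostname on each node with set_parms hostname, then reboot)   # 2.
# cmdeleteconf -f                              # 3. delete the existing cluster configuration
# cmquerycl -v -C /etc/cmcluster/config.ascii -n hq-qtch1 -n hq-qtch2   # 4. new template
# cmcheckconf -C /etc/cmcluster/config.ascii   # 5. check the file ...
# cmapplyconf -C /etc/cmcluster/config.ascii   #    ... and apply it
# cmruncl                                      # 6. start the cluster on both nodes
(then edit the package files and re-run each package with cmhaltpkg/cmrunpkg, steps 7-8)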
Rgds...
03-13-2011 11:15 AM
Re: Cluster node names change
I also followed the same steps as you mentioned, except the last steps for the packages. In my setup the packages only mount the CFS filesystems: there are 5 disk groups and 5 mount points, and I just have to mount these filesystems on both nodes. No application packages are configured. Here I can see one SG-CFS-pkg.conf file and one SG-CFS-pkg.sh file, as attached. Will running the command below again be enough to associate and start the multi-node packages when the cluster starts?
#cmapplyconf -C config.ascii -P SG-CFS-pkg.conf
please advise...
Thanks in advance..
Pramod
03-13-2011 10:23 PM
Re: Cluster node names change
After cmapplyconf -P SG-CFS-pkg.conf, the multi-node package was associated with the cluster. But no disk groups or mount points are activated.
# cmviewcl
CLUSTER STATUS
hq-qtch up
NODE STATUS STATE
hq-qtch1 up running
hq-qtch2 up running
MULTI_NODE_PACKAGES
PACKAGE STATUS STATE AUTO_RUN SYSTEM
SG-CFS-pkg up running enabled yes
As shown in my first post, it should show all the DGs & MPs.
What has to be done to enable them?
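(One way to check what the cluster still has registered, assuming the standard CFS admin commands are available:)
# cfsdgadm display    # list the disk groups configured for cluster use
# cfsmntadm display   # list the cluster mount points and their status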
Thanks..
Pramod
03-13-2011 10:36 PM
Re: Cluster node names change
I can see from the vxdisk list output that the disk names & disk group names are missing.
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:LVM - - LVM
c3t0d1 auto:LVM - - LVM
c3t0d2 auto:cdsdisk - - online shared
c3t0d3 auto:cdsdisk - - online shared
c3t0d4 auto:cdsdisk - - online shared
c3t0d5 auto:cdsdisk - - online shared
c3t0d6 auto:cdsdisk - - online shared
c3t0d7 auto:LVM - - LVM
#
Why did this happen?
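(The disks still show "online shared", so the disk groups are most likely deported rather than lost. Assuming standard VxVM commands, deported disk groups should show up in parentheses with:)
# vxdisk -o alldgs list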
03-15-2011 11:45 AM
Re: Cluster node names change
Here is an update on this issue.
The CFS cluster is ready and working after completing the steps below.
# vxdg -C import
# vxdg deport
# vxdg -s import
# cfsdgadm add
# cfsdgadm activate
# cfsmntadm add (dgname) (vol name) /(mount_point) all=rw
# cfsmount /cfs1
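Filled in with hypothetical names (disk group "cfsdg1" and volume "vol1" are placeholders; only /cfs1 appears above), the sequence looks like:
# vxdg -C import cfsdg1                    # import, clearing the old hosts' import locks
# vxdg deport cfsdg1                       # deport so it can be re-imported as shared
# vxdg -s import cfsdg1                    # import as a shared (CVM) disk group, from the CVM master
# cfsdgadm add cfsdg1 all=sw               # register the DG with the cluster (creates the SG-CFS-DG package)
# cfsdgadm activate cfsdg1                 # activate it on all nodes
# cfsmntadm add cfsdg1 vol1 /cfs1 all=rw   # register the cluster mount (creates the SG-CFS-MP package)
# cfsmount /cfs1                           # mount on all nodes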
Closing this thread.
Thanks to all...
Pramod