Hp-UX 11.11 - VCS3.5 three node cluster with oracle 9i RAC ..
10-04-2007 04:31 PM
I am trying to set up an HP-UX 11.11 - VCS 3.5 cluster with Oracle 9i RAC on three nodes. I am having trouble getting the cluster into the UP state.
Steps followed:
I followed the procedures in dbac_icg.pdf to set up my cluster.
1. Private IPs are configured on each node, and all three nodes respond to ping on both the public and private IPs.
2. On each node, rlogin works without prompting for a username/password.
3. All cluster prerequisites are met.
4. The database_ac_for_oracle9i/installDBAC script was executed.
5. All required details (cluster name, unique ID, IPs, etc.) were provided; it installed all the packages, and the three nodes were rebooted.
6. Executed the commands below after the node reboot:
/sbin/init.d/llt start
/sbin/init.d/gab start
/sbin/init.d/vcsmm start
/sbin/init.d/lmx start
/sbin/init.d/odm start
hastart
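As a quick cross-check (sketched from the usual VCS/DBE-AC port conventions; confirm against the 3.5 documentation), each of these layers registers its own port with GAB as it comes up, so gabconfig -a shows which modules actually started. The expected shape, with placeholder generation numbers, is:
# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen <gen> membership 012    <- GAB
Port d gen <gen> membership 012    <- ODM
Port h gen <gen> membership 012    <- VCS engine (had)
Port o gen <gen> membership 012    <- VCSMM
A port missing from this list points to the module that did not start cleanly.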
Then I verified the cluster status:
corniche# hastatus
attempting to connect....connected
group resource system message
--------------- -------------------- --------------- --------------------
corniche RUNNING
coronet RUNNING
zephyr RUNNING
corniche# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen e450e407 membership 012
Port h gen d75ed703 membership 012
Port o gen e450e407 membership 012
corniche# hastatus -summary
-- SYSTEM STATE
-- System State Frozen
A corniche RUNNING 0
A coronet RUNNING 0
A zephyr RUNNING 0
corniche#
The ODM port (port d) is not running and the cluster is not in the UP state. Please give me some hints to troubleshoot this problem and bring the cluster UP.
~Saravanan
10-04-2007 05:43 PM
Solution: http://ftp.support.veritas.com/pub/support/products/DBE_Advanced_Cluster_for_Oracle_RAC/286508.pdf
10-04-2007 06:28 PM
Re: Hp-UX 11.11 - VCS3.5 three node cluster with oracle 9i RAC ..
Have you tuned the Symantec kernel parameter mm_slave_max? It is one of the parameters that must be tuned for VCS. Check it with:
# kmtune -s
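For reference, a common way to query and stage a kernel tunable like this on HP-UX 11.11 is sketched below; the value for mm_slave_max is not given in this thread, so <value> is only a placeholder to be taken from the install guide:
# kmtune -q mm_slave_max             # query the current value
# kmtune -s mm_slave_max=<value>     # stage a new value (<value> is a placeholder)
A kernel rebuild (mk_kernel) and a reboot are then needed for the change to take effect.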
WK
10-04-2007 07:00 PM
Re: Hp-UX 11.11 - VCS3.5 three node cluster with oracle 9i RAC ..
Hi,
Thanks for the useful hint. I followed the steps mentioned on page 20, but I still could not bring the cluster UP.
corniche# ls -al /sbin/rc2.d/*odm
lrwxr-xr-x 1 root sys 16 Jan 3 08:25 /sbin/rc2.d/S980odm -> /sbin/init.d/odm
corniche# rm -f /sbin/rc2.d/S980odm
corniche# cat /dev/odm/cluster ==> This file itself is missing in my setup.
cat: Cannot open /dev/odm/cluster: No such file or directory
corniche#
corniche# hagrp -state
#Group Attribute System Value
ClusterService State corniche |ONLINE|
ClusterService State coronet |OFFLINE|
ClusterService State zephyr |OFFLINE|
corniche#
I uninstalled using the uninstallDBAC script and tried to set up the cluster freshly. On every install, I get the same status.
Please shed some light on how to solve this problem.
With regards,
Saravanan
10-04-2007 08:36 PM
Re: Hp-UX 11.11 - VCS3.5 three node cluster with oracle 9i RAC ..
http://www.symantec.com/enterprise/support/
10-05-2007 03:52 AM
Re: Hp-UX 11.11 - VCS3.5 three node cluster with oracle 9i RAC ..
There has to be a /sbin/rc2.d/S980odm file, which is needed to start the ODM driver during bootup. It is created as a link:
# ln -s /sbin/init.d/odm /sbin/rc2.d/S980odm
The /dev/odm/cluster file will only show up if /dev/odm is mounted.
If ODM did not start during bootup, you can start it manually with:
# /sbin/init.d/odm start
If there is a problem, it will show you why ODM can't come up.
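Putting the pieces from this thread together, a minimal recovery sequence would look like the following (a sketch only; it assumes the standard VCS 3.5 / DBE-AC init scripts and that the ODM package installed cleanly):
# ln -s /sbin/init.d/odm /sbin/rc2.d/S980odm    # recreate the boot-time link removed earlier
# /sbin/init.d/odm start                        # start ODM by hand; errors are reported here
# mount | grep odm                              # /dev/odm should appear once ODM is mounted
# cat /dev/odm/cluster                          # readable only after /dev/odm is mounted
# gabconfig -a                                  # port d (ODM) should now show in the memberships
If port d still does not register, the messages from /sbin/init.d/odm start are the place to look.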