Databases

9i on hpux with service guard

monto
Occasional Advisor

9i on hpux with service guard

Gurus,

We have an Oracle 9.2.0.8 database on HP-UX 11.1 running with Serviceguard.

[prd2 hp206:/pkg1/oracle/product]
$ /usr/sbin/swlist | grep -i serviceguard
B3935DA A.11.16.00 Serviceguard
B8325BA A.04.00 ServiceGuard Manager
T1859BA A.11.16.00 Serviceguard Extension for RAC

When I run srvctl status on the database, it shows:

$ srvctl status database -d prd
Instance prd1 is running on node hp207
Instance prd2 is running on node hp206

$ srvctl config database -d prd
hp207 prd1 /pkg1/oracle/product/9.2.0
hp206 prd2 /pkg1/oracle/product/9.2.0

But I'm confused, as I don't see any cluster manager (oracm) running:

$ ps -ef |grep oracm
oracle 15806 2159 0 11:15:15 pts/1 0:00 grep oracm

Is it running Oracle RAC with Serviceguard on top, or is it just Serviceguard? Where do I need to look, or which process should I check, to make sure it's Oracle 9i RAC with Cluster Manager as the cluster software?
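One quick first check (a sketch, not a definitive test) is to look for each stack's cluster daemon in the process list: cmcld is Serviceguard's cluster daemon, while oracm is Oracle's own 9i Cluster Manager, used on platforms without vendor clusterware. The helper below just wraps the `ps | grep` pattern so the grep never matches itself:

```shell
#!/bin/sh
# Report whether a given cluster daemon appears in the process list.
# cmcld = Serviceguard cluster daemon; oracm = Oracle 9i Cluster Manager.
daemon_status() {
    # Bracket the first character so the grep process itself never matches.
    pattern="[$(echo "$1" | cut -c1)]$(echo "$1" | cut -c2-)"
    if ps -ef | grep "$pattern" > /dev/null; then
        echo "$1: running"
    else
        echo "$1: not running"
    fi
}

daemon_status cmcld   # Serviceguard cluster daemon
daemon_status oracm   # Oracle 9i Cluster Manager
```

On a 9i RAC cluster using SGeRAC you would expect cmcld to be running and oracm to be absent, which matches the `ps -ef | grep oracm` output above.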

5 REPLIES
Steven E. Protter
Exalted Contributor

Re: 9i on hpux with service guard

Shalom,

srvctl is not a Serviceguard command.

Try cmviewcl -v

That will show you what, if anything, is running in the Serviceguard cluster.

9i RAC required Serviceguard on HP-UX. Newer versions of RAC have their own clustering scheme.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
monto
Occasional Advisor

Re: 9i on hpux with service guard

Here it is:

$ /usr/sbin/cmviewcl -v

CLUSTER STATUS
BOLTprod up

NODE STATUS STATE
hp206 up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/6/1/0/6/0 lan4
PRIMARY up 0/6/1/0/6/1 lan5
STANDBY up 0/2/1/0/6/0 lan2
STANDBY up 0/2/1/0/6/1 lan3

PACKAGE STATUS STATE AUTO_RUN NODE
ora_batch up running enabled hp206

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback automatic

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled hp206 (current)

PACKAGE STATUS STATE AUTO_RUN NODE
sqa_pkg1 up running enabled hp206

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback automatic

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled hp206 (current)

PACKAGE STATUS STATE AUTO_RUN NODE
batch_pkg up running enabled hp206

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled hp206 (current)

NODE STATUS STATE
hp207 up running

Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/6/1/0/6/0 lan4
STANDBY up 0/2/1/0/6/0 lan2
PRIMARY up 0/6/1/0/6/1 lan5
STANDBY up 0/2/1/0/6/1 lan3

PACKAGE STATUS STATE AUTO_RUN NODE
ora_online up running enabled hp207

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback automatic

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled hp207 (current)

PACKAGE STATUS STATE AUTO_RUN NODE
sqa_pkg2 up running enabled hp207

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback automatic

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled hp207 (current)

[prd2 hp206:/dnbusr1/oracle]
$
Emil Velez
Honored Contributor

Re: 9i on hpux with service guard

Evidently you are using an Oracle RAC database with Serviceguard, on raw logical volumes.

Oracle RAC 10gR2 and Oracle RAC 11g can also be configured to work with Serviceguard, and can use either raw logical volumes or a cluster file system for storage.

That is a pretty old version of HP-UX and Serviceguard, though.

If it isn't broken, you may not want to make changes unless you carefully plan any upgrade.
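Whether the datafiles are on raw logical volumes (as Emil suggests) or on a file system can be checked directly against the datafile paths reported by the database. A minimal sketch; the raw-device path convention (e.g. /dev/vg_ora/rlv_system) is only an example, so feed it the real paths from `select name from v$datafile;`:

```shell
#!/bin/sh
# Classify a datafile path as a raw device or an ordinary filesystem file.
# With 9i RAC under SGeRAC, shared datafiles typically live on raw logical
# volumes (character devices); the example path above is hypothetical.
classify_storage() {
    if [ -c "$1" ]; then
        echo "$1: raw (character) device"
    elif [ -b "$1" ]; then
        echo "$1: block device"
    elif [ -f "$1" ]; then
        echo "$1: regular file (filesystem datafile)"
    else
        echo "$1: not found"
    fi
}

classify_storage /dev/null   # a character device on any UNIX, for illustration
```

If every datafile path classifies as a raw (character) device, that is consistent with a raw-logical-volume RAC layout under SGeRAC.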
monto
Occasional Advisor

Re: 9i on hpux with service guard

So it's just using SGeRAC, i.e. OS-level (vendor) clustering, and not an Oracle clustering solution like Cluster Manager for 9i RAC, right?
monto
Occasional Advisor

Re: 9i on hpux with service guard

I started GSD on one node successfully, but on node two it's failing with an error:
[prd1 hp207:/dnbusr1/oracle/dbadmin/log]
$ gsdctl start
Failed to start GSD on local node

I started the instance manually. Is it going to be a problem if it was started manually? What could be the problem? Please suggest.
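A first-pass check before digging into the GSD failure might look like the sketch below. The paths are the usual 9i defaults on HP-UX (/var/opt/oracle/srvConfig.loc for the server-config location, and a log under $ORACLE_HOME/srvm/log); treat them as assumptions and adjust for your install:

```shell
#!/bin/sh
# Rough GSD pre-flight check for a 9i node (paths are typical defaults,
# not guaranteed on every install).
check_file() {
    if [ -r "$1" ]; then
        echo "OK: $1"
    else
        echo "MISSING: $1"
    fi
}

# 1. GSD needs the server-config location file (and the shared config
#    device it points at) to be readable by the oracle user.
check_file /var/opt/oracle/srvConfig.loc

# 2. Ask GSD itself whether it thinks it is up:
#      $ORACLE_HOME/bin/gsdctl stat
# 3. The daemon log usually holds the real failure reason, e.g.:
#      tail -50 $ORACLE_HOME/srvm/log/gsdaemon.log
```

Starting the instance manually with SQL*Plus works, but srvctl will not be able to manage or report it correctly until GSD is running on that node, so the log output from step 3 is the place to look.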