Waiting for clusterware split-brain resolution

Regular Advisor

This is Oracle RAC with two instances. Instance 2 logged the following message in its alert log:
Mon Oct 11 10:44:20 2004
Waiting for clusterware split-brain resolution
At the same time, we could not connect to instance 2 via sqlplus '/ as sysdba',
so we had no choice but to reboot the box that instance 2 runs on.
After the box came back up, we tried to start instance 2, but it still failed to open
and stayed in mounted status. Then instance 1 suddenly hung because instance 2
could not be opened successfully. The only way out was to shutdown abort instance 1.
After that, we started instance 1 and instance 2 without failure.

Because of the application, we cannot upgrade the Oracle version.

What can we do?

Sunil Sharma_1
Honored Contributor

Re: Waiting for clusterware split-brain resolution

Hi Eric,

I believe this is a normal message. Please see this thread for more information.

*** Dream as if you'll live forever. Live as if you'll die today ***
Indira Aramandla
Honored Contributor

Re: Waiting for clusterware split-brain resolution

Hi Ericfjchen,

This error occurs when a member is evicted from the group by another member of the cluster database, for one of several reasons.

When the problem is detected, the instances 'race' to get a lock on the control file (the Results Record lock) for updating. The instance that obtains the lock tallies the votes of the instances to decide membership. A member is evicted if:

a communications link is down, or
there is a split-brain (more than one subgroup) and the member is not in the largest subgroup, or
the member is perceived to be inactive.
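The decision rule above can be sketched as follows. This is only an illustrative model of the "largest subgroup survives" idea, not Oracle's actual internal code; the function names and data layout are my own.

```python
# Hypothetical sketch of the split-brain resolution rule: the instance
# holding the control-file lock keeps the largest subgroup and evicts
# the members of every smaller subgroup. Names are illustrative only.

def surviving_members(subgroups):
    """Return the members of the largest subgroup (the survivors)."""
    return max(subgroups, key=len)

def evicted_members(subgroups):
    """Return every member that is not in the largest subgroup."""
    survivors = set(surviving_members(subgroups))
    return [m for group in subgroups for m in group if m not in survivors]

# Example: a 3-node cluster partitions into {inst1} and {inst2, inst3};
# inst1 is in the smaller subgroup and gets evicted.
subgroups = [["inst1"], ["inst2", "inst3"]]
print(surviving_members(subgroups))
print(evicted_members(subgroups))
```

In a 2-node cluster like the one in the question, both subgroups have size one, which is why a tie-breaker (the race for the control-file lock) decides who survives.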

There is also another possibility with Oracle Server Enterprise Edition, versions 9.0 to 9.2, on HP-UX PA-RISC (64-bit).

This is due to an OS vendor problem, reported and identified in Oracle Bug 3007107, "FREQUENT ORA-29740 EVICTIONS OCCURING WITH MINIMAL ACTIVITY".

Contact HP and get the PHKL_28695 HP patch.
A workaround for the bug is to restart the databases on both nodes.
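To see whether you are hitting frequent evictions like those in the bug, you can scan the alert log for ORA-29740 and the split-brain wait message. A minimal sketch (the alert-log path varies by install, so it is left to the caller):

```python
# Scan alert-log text for eviction-related lines. Hypothetical helper;
# pass the contents of your instance's alert_<SID>.log.
import re

EVICTION_PATTERN = re.compile(r"ORA-29740|split-brain resolution")

def eviction_lines(alert_log_text):
    """Return the lines that mention ORA-29740 or the split-brain wait."""
    return [line for line in alert_log_text.splitlines()
            if EVICTION_PATTERN.search(line)]

sample = (
    "Mon Oct 11 10:44:20 2004\n"
    "Waiting for clusterware split-brain resolution\n"
    "Completed checkpoint\n"
    "ORA-29740: evicted by member 1, group incarnation 4\n"
)
for line in eviction_lines(sample):
    print(line)
```

If such lines appear frequently with little database activity, that matches the symptom described for Bug 3007107.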

Indira A
Never give up, Keep Trying