
RAC/SG split brain

 
Marvin Strong
Honored Contributor

RAC/SG split brain

While testing a RAC cluster, if we pull the first node from the network, any Oracle requests to the second node cause the database to come down.

The following is from the Oracle alert log:

Wed May 19 16:25:16 2004
Waiting for clusterware split-brain resolution
Wed May 19 16:25:17 2004
Errors in file /oracle/app/admin/admin/PDCSPRD/bdump/pdcsprd2_lmon_16718.trc:
ORA-29740: evicted by member 1, group incarnation 18
LMON: terminating instance due to error 29740
Wed May 19 16:25:17 2004
Errors in file /oracle/app/admin/admin/PDCSPRD/bdump/pdcsprd2_lmd0_16720.trc:
ORA-29740: evicted by member , group incarnation
Instance terminated by LMON, pid = 16718


It looks like a split-brain condition is occurring, but I'm confused how: this is a 2-node cluster with 1 lock vg and 1 lock pv.
It also has no packages.

I don't see anything strange in the syslog.

I have been searching the web all morning and have found many documents on split brain, but nothing telling me how to correct the problem.

Anyone have any advice here?
5 REPLIES
Navin Bhat_2
Trusted Contributor

Re: RAC/SG split brain

Here is a note from Oracle on how to resolve this issue. Hope it helps.

Note:219361.1
PURPOSE
=======
This note was created to troubleshoot the ORA-29740 error in a Real Application
Clusters environment.
SCOPE & APPLICATION
====================
This note is for DBAs needing to resolve ORA-29740.
Troubleshooting ORA-29740 in a RAC Environment
==============================================
An ORA-29740 error occurs when a member was evicted from the group by another
member of the cluster database for one of several reasons, which may include
a communications error in the cluster, failure to issue a heartbeat to the
control file, and other reasons. This mechanism is in place to prevent
problems from occurring that would affect the entire database. For example,
instead of allowing a cluster-wide hang to occur, Oracle will evict the
problematic instance(s) from the cluster. When an ORA-29740 error occurs, a
surviving instance will remove the problem instance(s) from the cluster.
When the problem is detected the instances 'race' to get a lock on the
control file (Results Record lock) for updating. The instance that obtains
the lock tallies the votes of the instances to decide membership. A member
is evicted if:
a) A communications link is down
b) There is a split-brain (more than 1 subgroup) and the member is
not in the largest subgroup
c) The member is perceived to be inactive
Sample message in Alert log of the evicted instance:
Fri Sep 28 17:11:51 2001
Errors in file /oracle/export/TICK_BIG/lmon_26410_tick2.trc:
ORA-29740: evicted by member %d, group incarnation %d
Fri Sep 28 17:11:53 2001
Trace dumping is performing id=[cdmp_20010928171153]
Fri Sep 28 17:11:57 2001
Instance terminated by LMON, pid = 26410
The key to resolving the ORA-29740 error is to review the LMON trace files
from each of the instances. On the evicted instance we will see something
like:
*** 2002-11-20 18:49:51.369
kjxgrdtrt: Evicted by 0, seq (3, 2)
The number after "Evicted by" (0 here) indicates which instance initiated the eviction.
On the evicting instance we will see something like:
kjxgrrcfgchk: Initiating reconfig, reason 3
*** 2002-11-20 18:49:29.559
kjxgmrcfg: Reconfiguration started, reason 3
...
*** 2002-11-20 18:49:29.727
Obtained RR update lock for sequence 2, RR seq 2
*** 2002-11-20 18:49:31.284
Voting results, upd 0, seq 3, bitmap: 0
Evicting mem 1, stat 0x0047 err 0x0002
You can see above that the instance initiated a reconfiguration for reason 3
(see Note 139435.1 for more information on reconfigurations). The
reconfiguration is then started and this instance obtained the RR lock
(Results Record lock) which means this instance will tally the votes of the
instances to decide membership. The last lines show the voting results then
this instance evicts instance 1.
For troubleshooting ORA-29740 errors, the 'reason' will be very important.
In the above example, the first section indicates the reason for the
initiated reconfiguration. The reasons are as follows:
Reason 0 = No reconfiguration
Reason 1 = The Node Monitor generated the reconfiguration.
Reason 2 = An instance death was detected.
Reason 3 = Communications Failure
Reason 4 = Reconfiguration after suspend
For ORA-29740 errors, you will most likely see reasons 1, 2, or 3.
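Not part of the note, but a quick way to pull these reason codes and the eviction lines out of the LMON traces (a rough sketch; substitute your own background_dump_dest, e.g. the bdump path from the alert log above):
# run on each node, in the instance's bdump directory
cd /oracle/app/admin/admin/PDCSPRD/bdump
grep "Initiating reconfig" *lmon*.trc
grep "Reconfiguration started" *lmon*.trc
grep "Evict" *lmon*.trc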
-----------------------------------------------------------------------------
Reason 1: The Node Monitor generated the reconfiguration. This can happen if:
a) An instance joins the cluster
b) An instance leaves the cluster
c) A node is halted
It should be easy to determine the cause of the error by reviewing the alert
logs and LMON trace files from all instances. If an instance joins or leaves
the cluster or a node is halted then the ORA-29740 error is not a problem.
ORA-29740 evictions with reason 1 are usually expected when the cluster
membership changes. Very rarely are these types of evictions a real problem.
If you feel that this eviction was not correct, do a search in Metalink or
the bug database for:
ORA-29740 'reason 1'
Important files to review are:
a) Each instance's alert log
b) Each instance's LMON trace file
c) Statspack reports from all nodes leading up to the eviction
d) Each node's syslog or messages file
-----------------------------------------------------------------------------
Reason 2: An instance death was detected. This can happen if:
a) An instance fails to issue a heartbeat to the control file.
When the heartbeat is missing, LMON will issue a network ping to the instance
not issuing the heartbeat. As long as the instance responds to the ping,
LMON will consider the instance alive. If, however, the heartbeat is not
issued for the length of time of the control file enqueue timeout, the
instance is considered to be problematic and will be evicted.
Common causes for an ORA-29740 eviction (Reason 2):
a) NTP (Time changes on cluster) - usually on Linux, Tru64, or IBM AIX (see the quick check after this list)
b) Network Problems (SAN).
c) Resource Starvation (CPU, I/O, etc..)
d) An Oracle bug.
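Not from the note, but for cause a) a quick check for abrupt time changes (a sketch for HP-UX; the syslog path and daemon name may differ on other platforms):
# look for xntpd step/reset messages around the time of the eviction
grep -i xntpd /var/adm/syslog/syslog.log | egrep -i "time reset|step"
# check the current peer offsets; large offsets invite step adjustments
ntpq -p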
Common bugs for reason 2 evictions:
BUG 2820871 - Abrupt time adjustments can crash instance with ORA-29740
(Reason 2) (Linux Only)
Fixed-Releases: 9204+ A000
If you feel that this eviction was not correct, do a search in Metalink or the
bug database for:
ORA-29740 'reason 2'
Important files to review are:
a) Each instance's alert log
b) Each instance's LMON trace file
c) Statspack reports from all nodes leading up to the eviction
d) The CKPT process trace file of the evicted instance
e) Other bdump or udump files...
f) Each node's syslog or messages file
-----------------------------------------------------------------------------
Reason 3: Communications Failure. This can happen if:
a) The LMON processes lose communication with one another.
b) One instance loses communications with the LMD process of another
instance.
c) An LMON process is blocked, spinning, or stuck and is not
responding to the other instance(s) LMON process.
d) An LMD process is blocked or spinning.
In this case the ORA-29740 error is recorded when there are communication
issues between the instances. It is an indication that an instance has been
evicted from the configuration as a result of an IPC send timeout. A
communications failure between a foreground or background process (other than
LMON) and a remote LMD will also generate an ORA-29740 with reason 3. When this
occurs, the trace file of the process experiencing the error will print a
message:
Reporting Communication error with instance:
If communication is lost at the cluster layer (for example, network cables
are pulled), the cluster software may also perform node evictions in the
event of a cluster split-brain. Oracle will detect a possible split-brain
and wait for cluster software to resolve the split-brain. If cluster
software does not resolve the split-brain within a specified interval,
Oracle proceeds with evictions.
Oracle Support has seen cases where resource starvation (CPU, I/O, etc...) can
cause an instance to be evicted with this reason code. The LMON or LMD process
could be blocked waiting for resources and not respond to polling by the remote
instance(s). This could cause that instance to be evicted. If you have
a statspack report available from the time just prior to the eviction on the
evicted instance, check for poor I/O times and high CPU utilization. Poor I/O
times would be an average read time of > 20ms.
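If a statspack report is not available, a rough substitute (my own sketch, not from the note) is to eyeball the cumulative averages in v$filestat on the evicted instance; the times are in centiseconds, so multiply by 10 for milliseconds:
# averages are cumulative since instance startup, so treat them as a hint only
sqlplus -s "/ as sysdba" <<EOF
select file#, phyrds,
       round(readtim*10/greatest(phyrds,1),1) avg_read_ms
from   v\$filestat
order  by 3 desc;
EOF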
Common causes for an ORA-29740 eviction (Reason 3):
a) Network Problems.
b) Resource Starvation (CPU, I/O, etc..)
c) Severe Contention in Database.
d) An Oracle bug.
Common bugs for reason 3 evictions:
BUG 2276622 - ORA-29740 (Reason 3) possible in RAC under heavy load
Fixed-Releases: 9014+ 9202+
BUG 2994260 - IPCSOCK_SEND FAILED WITH STATUS: 10054 (Windows only)
Fixed-Releases: 9203 with patch or 9204+
BUG 2210879 - ORACLE PROCESS CRASHES, WITH ASSERTION FAILURE IN LOWFAT
SKGXP CODE (HP-UX only with clic interface)
Fixed-Releases: Fixed by HP in PHNE 26551 or above.
Tips for tuning inter-instance performance can be found in the following note:
Note 181489.1
Tuning Inter-Instance Performance in RAC and OPS
If you feel that this eviction was not correct, do a search in Metalink or the
bug database for:
ORA-29740 'reason 3'
Important files to review are:
a) Each instance's alert log
b) Each instance's LMON trace file
c) Each instance's LMD trace file
d) Statspack reports from all nodes leading up to the eviction
e) Other bdump or udump files...
f) Each node's syslog or messages file
g) Netstat -i and netstat -s output
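Not from the note, but for g) a couple of quick things to look at in that output (exact column and section names vary by platform):
# non-zero or growing Ierrs/Oerrs/Collis on the interconnect LAN is suspect
netstat -i
# heavy retransmission also points at the network
netstat -s | grep -i retrans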
-----------------------------------------------------------------------------
References :
[NOTE:139435.1] Fast Reconfiguration in 9i Real Application Clusters
[BUG:2276622] ORA-29740 UNDER HEAVY LOAD
[BUG:1999778] RAC/OPS DATABASE CRASHES WITH ORA-29740 ON RESTART ON FAILED SYSTEM
[BUG:2529223] INSTANCE EVICTED WITH ORA-29740
[NOTE:175678.1] RAC Instances Crash with ORA-29740 or ORA-600 [ksxpwait5] on IBM AIX
[NOTE:212381.1] RAC: Cluster Node evicted due to Change of System Time
JW_8
Occasional Advisor

Re: RAC/SG split brain

Hi Marvin,
How did you configure the interconnect for RAC DLM traffic? If you specify an interconnect that is different from the LAN where the hostname resides, you may run into issues when Oracle and SG have different views of the connectivity.
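One way to cross-check which network each side is actually using (just a sketch; the IPC dump goes to a trace file in user_dump_dest and its format varies by version):
# ask Oracle which address it uses for cluster IPC
sqlplus "/ as sysdba"
SQL> oradebug setmypid
SQL> oradebug ipc
# compare that address against the ServiceGuard heartbeat/stationary LANs
cmviewcl -v
On 9i you can also pin the instances to a specific network with the CLUSTER_INTERCONNECTS init parameter, but check with Oracle whether that is supported on your platform and release.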
melvyn burnard
Honored Contributor

Re: RAC/SG split brain

This is actually an Oracle error message, and appears to indicate that there has been a communication loss between the two nodes.
This is especially true if you have HyperFabric between the nodes, as RAC likes to use this for its comms.
I suggest you raise a call with Oracle.
My house is the bank's, my money the wife's, But my opinions belong to me, not HP!
Ted Buis
Honored Contributor

Re: RAC/SG split brain

What storage are you using and how is it connected? A two-node cluster needs a cluster lock disk, which is why only certain storage is supported. I'm not sure how you could set it up without one, but it would be good to confirm that you have a supported hardware configuration and that the cluster lock was properly set up.
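To double-check the lock disk side of it, you could do something like this (a sketch; substitute your cluster name):
# dump the running cluster configuration and look at the lock entries
cmgetconf -c <clustername> /tmp/cluster.ascii
grep -i cluster_lock /tmp/cluster.ascii    # FIRST_CLUSTER_LOCK_VG / _PV lines
# and confirm current cluster and node status
cmviewcl -v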
Mom 6
Marvin Strong
Honored Contributor

Re: RAC/SG split brain

We are using EMC storage, and there is one cluster lock disk; in fact, you must have a cluster lock disk for a 2-node cluster.

I have been informed that the split-brain message is normal when losing a connection to one of the nodes, and it is nothing to worry about, since it was resolved very quickly if you look at the timestamps.

So I guess I was barking up the wrong tree.

Still investigating why the first node evicts the second node when I disconnect the first node.
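For the next test I plan to watch both sides at once, to see the order of events and which node keeps the cluster (rough sketch of what I'll run; adjust the alert log name per instance):
# on each node, before pulling the cable
tail -f /var/adm/syslog/syslog.log &
tail -f /oracle/app/admin/admin/PDCSPRD/bdump/alert_*.log &
# afterwards, see which node stayed in the cluster
cmviewcl -v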