Databases

ORA-1555 snapshot too old; for my own reference, other ITRC users may find it useful too

SOLVED
steven chang_1
Regular Advisor

ORA-1555 snapshot too old; for my own reference, other ITRC users may find it useful too

# Fetch across commit
This is the situation where a query opens a cursor, then loops through
fetching, changing, and committing records on the same table. In this
scenario, an ORA-01555 can very often result. Let's take the following
example to explain this: a cursor is opened at SCN=10, so the execution
SCN of the query is marked as SCN=10.
Every fetch by that cursor now needs to get the read-consistent data
as of SCN=10. The user program now fetches a number of records,
changes them, and commits them. Let's say they are committed with
SCN=20. If a later fetch happens to retrieve a record which is in one
of the previously committed blocks, then the fetch will see that the
SCN there is 20. Since the fetch has to get the snapshot as of SCN=10,
it will try to find it in the rollback segments. If it can roll back
sufficiently far, as previously explained, then it can reconstruct
the snapshot as of SCN=10. If not, it results in an ORA-01555 error.
Committing less often, which will result in larger rollback segments,
will REDUCE the probability of getting the 'snapshot too old' error.
Here is an example statement:
DECLARE
   cursor c1 is select empno, sal from emp where empno > 0;
   -- The WHERE clause should force the index to be traversed
   -- and the rows returned ordered by empno
   empno_val number;
   sal_val   number;
BEGIN
   open c1;
   LOOP
      fetch c1 into empno_val, sal_val;
      if c1%FOUND then
         -- Give a salary raise of 4%
         update emp
            set sal = sal_val * 1.04
          where empno = empno_val;
         commit;
      else
         exit;
      end if;
   END LOOP;
   close c1;
END;
/
According to the ANSI standard, a cursor is invalidated when a commit
is performed and should be closed and reopened. Oracle allows users to
fetch across commits, but users should be aware that it might result
in ORA-01555.
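As a rough sketch of the ANSI-friendly alternative (same emp example as above, but with a single commit issued only after the cursor is closed, so no fetch ever crosses a commit):

DECLARE
   cursor c1 is select empno, sal from emp where empno > 0;
BEGIN
   for r in c1 loop
      -- Give a salary raise of 4%
      update emp
         set sal = r.sal * 1.04
       where empno = r.empno;
   end loop;  -- the FOR loop closes the cursor here
   commit;    -- one commit, after the cursor is closed
END;
/

The trade-off: all updates now form a single transaction, so the rollback segments must be large enough to hold the undo for the whole run.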

# Fetch across commits with delayed block cleanout
To complicate things, we now look at how delayed block cleanouts play an
important role in getting this error. When a data or index block is
modified in the database and the transaction committed, Oracle does a
fast commit: it marks the transaction as committed in the rollback
segment header but does not clean the data blocks that were modified.
The next transaction that selects from the modified blocks does the
actual cleanout of the block. This is known as a delayed block cleanout.
Now take the same scenario as described in the previous section, but
instead of a single table, assume there are two tables in question:
the cursor is opened and then, in a loop, it fetches from one table,
changes records in another, and commits. Even though the records are
being committed in another table, this can still cause ORA-01555,
because cleanout has not been done on the table from which the records
are being fetched.
For this case, a full table scan before opening and fetching through the
cursor will help.
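A minimal sketch of that workaround, assuming the fetch table is emp as in the example above. The FULL hint asks the optimizer for a full scan, so every block is touched (and cleaned out) once before the long-running cursor is opened:

-- force the delayed block cleanouts up front; a hint is a
-- suggestion to the optimizer, not a guarantee
select /*+ FULL(emp) */ count(*) from emp;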
steven
3 REPLIES
Hari Kumar
Trusted Contributor
Solution

Re: ORA-1555 snapshot too old; for my own reference, other ITRC users may find it useful too

Thanks a lot!
Good help
Information is Wealth; Knowledge is Power
Alexander M. Ermes
Honored Contributor

Re: ORA-1555 snapshot too old; for my own reference, other ITRC users may find it useful too

Hi there.
Thank you very much.
An excellent explanation.
Rgds
Alexander M. Ermes
.. and all these memories are going to vanish like tears in the rain! final words from Rutger Hauer in "Blade Runner"
Volker Borowski
Honored Contributor

Re: ORA-1555 snapshot too old; for my own reference, other ITRC users may find it useful too

Hi,

yes, the odd 1555 .......

The real problem is that the transaction getting the error is usually not the one responsible for it.

Two additional things must happen to kill a big cursor with a 1555.

a) a different (!) transaction modifies data that belongs to the result set of the cursor in question -> this forces Oracle to access a rollback block to build a consistent view for the cursor!

b) this rollback block is overwritten by any (!) other modifying activity.

Ways out:
=====================
1) Stop other transactions from getting access to this data block: do an explicit LOCK TABLE before the select. This will suspend any update request on this table until the lock is released. Keep in mind that this will have a major impact on other transactions, so it may only be a batch-time solution.
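A sketch of option 1), using the same table t and columns as the report query further down (substitute your own names). The lock is taken in the same transaction as the big select and released by the commit:

lock table t in share mode;  -- readers still allowed, writers must wait
select month, sum(some_data) from t group by month;
commit;                      -- releases the table lock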

2) Disable all rollback segments but one. Provide enough space for rollback, and do not use the OPTIMAL storage clause. This is costly in terms of space and requires a bit of rollback management.
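A sketch of option 2); the segment names rbs01/rbs02/rbs03 are placeholders, your names will differ:

-- take all but one (large) rollback segment offline
alter rollback segment rbs02 offline;
alter rollback segment rbs03 offline;
-- a session can also pin its next transaction to the remaining segment
set transaction use rollback segment rbs01;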

3) Check whether you really need a consistent big cursor over all the data. Typically these problems are related to reporting functions that refer to closed periods. Partitioning might be a way out, because each partition has its own freelist: if a table is partitioned on a date field, a delete of a January record will not lead to that block showing up on the freelist for the current month. Be sure not to select ALL the data, but just the "closed" data; because of the partitioning, it is then less likely that updates of the current period will affect the result of your cursor query.
Watch out, partitioning might affect performance!

4) Do a commit even for selects, in case you cannot use option 3). So instead of

select month, sum(some_data) from t group by month;

Do

select month, sum(some_data) from t where month='01';
commit;
select month, sum(some_data) from t where month='02';
commit;
select month, sum(some_data) from t where month='03';
commit;
....

This cuts down the size of each cursor and gives each select a new SCN.
Be aware that this might not give the same overall consistent result as the single query before.
=============================

If you search the internet, be assured that all descriptions which suggest enlarging the rollback size, increasing the OPTIMAL size, or using more rollback segments will NEVER make sure that you do not get the 1555!
Those three approaches are for OLTP applications and completely opposed to the approach given under 1) and 2)! They are valid for a situation where you cannot isolate your big query as described above.
Therefore, by using these three tuning screws you will only get a better chance that it does not occur, but no safety!
It might work one time out of two, or one out of twenty, but it may also abort the very next query, no matter what happened the last time it was called.

One of the best choices is to migrate to Oracle 9 and activate automatic undo management, which gives you better control options to avoid the 1555.
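For reference, a sketch of the relevant init.ora parameters for automatic undo management on 9i (the tablespace name and retention value here are just examples):

# init.ora -- 9i automatic undo management
undo_management = AUTO
undo_tablespace = UNDOTBS1
undo_retention  = 900   # seconds of committed undo to retain for consistent reads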

Happy one-five-five-five next time
Volker