Delete millions of records in a table - easier methode needed

Nirmalkumar
Frequent Advisor

Hello all,

I am in the process of deleting the records before Jan 1, 2007 from the sys.aud$ table. Auditing was enabled on 02-Dec-2000 07:15:11, and I need to delete the records from 02-Dec-2000 to 31-Dec-2006. To date the record count is more than 100 million rows.

Can you suggest an approach that takes less time to delete these records and also generates fewer archive logs?

Database version: Oracle 8.1.7.2.0

Thanks,
Nirmal.
5 REPLIES
Grayh
Trusted Contributor

Re: Delete millions of records in a table - easier methode needed

(Quoted from the db2-l mailing list:)
Need to delete 15 million records from production table
Reply from Bill Brooks on 4/9/2008 3:34:00 PM

Do this:
# Estimate the number of bytes the delete will touch: average row size is
# NPAGES * PAGESIZE / CARD, multiplied by the number of rows you plan to
# delete (1000000 in this example):
db2 "select decimal(t.NPAGES)/decimal(t.CARD) * t2.PAGESIZE * 1000000
bytestodelete from syscat.tables t, syscat.tablespaces t2 where
t.tabname='YOURTABLENAMEGOESHERE' and
t.tabschema='YOURTABLESCHEMAGOESHERE' and t.TBSPACEID=t2.TBSPACEID"
# Now get the log configuration for your database:
db2 "get db cfg for YOURDBNAMEGOESHERE" | grep "LOGFILSIZ"
db2 "get db cfg for YOURDBNAMEGOESHERE" | grep "LOGPRIMARY"
db2 "get db cfg for YOURDBNAMEGOESHERE" | grep "LOGSECOND"
Now compute your total log space from the numbers you obtained above
(LOGFILSIZ is expressed in 4 KB pages):
(LOGPRIMARY + LOGSECOND) * LOGFILSIZ * 4096
If bytestodelete is greater than that, you'll need a cursor of some sort
(or batched deletes with intermediate commits). If bytestodelete is smaller,
you'll probably be OK. Make sure you leave some headroom for other
transactions.
You should also think about a REORGCHK after you run your delete. I'm sure
it will tell you the table needs a REORG after 1,000,000 rows are deleted. If
you don't have time for a REORG, at a minimum do a RUNSTATS.
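If the delete will not fit in the available log space, one common workaround is to delete in small batches with a commit between each pass. The following is a hedged sketch only (not from the original post; the table, schema, and column names are placeholders) using DB2's ability to delete through a fetch-limited subselect:

```sql
-- Hypothetical example: cap each unit of work at 10,000 rows so it stays
-- well under the available log space.
DELETE FROM (
    SELECT * FROM YOURSCHEMA.YOURTABLE
    WHERE created_date < '2007-01-01'
    FETCH FIRST 10000 ROWS ONLY
);
COMMIT;
-- Re-run the DELETE/COMMIT pair (e.g. from a shell loop or a stored
-- procedure) until it reports that 0 rows were affected.
```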
Grayh
Trusted Contributor

Re: Delete millions of records in a table - easier methode needed

Here is one more approach, a batched-delete loop (written in T-SQL):

-- In this example we are deleting 1,000 rows at a time.

declare @num int
set @num = (select count(*) from transactions
    where datediff(hh, dt_tm_created, getdate()) > 72 and OrderId = 0)
while @num <> 0
begin
    begin transaction
    set rowcount 1000
    delete from transactions
    where datediff(hh, dt_tm_created, getdate()) > 72 and OrderId = 0
    set @num = @@rowcount
    commit transaction
end
set rowcount 0  -- reset so later statements are not capped at 1,000 rows
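Since the original question is about Oracle 8i, the same batching idea can be sketched in PL/SQL. This is an assumption-laden sketch, not a tested procedure: the timestamp# column name is my guess at the 8i sys.aud$ layout, so verify it with DESCRIBE sys.aud$ before running anything.

```sql
-- Batched delete with intermediate commits, capping each pass with ROWNUM
-- so undo usage per transaction stays bounded.
BEGIN
  LOOP
    DELETE FROM sys.aud$
     WHERE timestamp# < TO_DATE('2007-01-01', 'YYYY-MM-DD')
       AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```

Note that this bounds undo per transaction but every deleted row still generates redo, so it does not reduce total archive-log generation.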
Hein van den Heuvel
Honored Contributor

Re: Delete millions of records in a table - easier methode needed

Not sure sys.aud$ would be different from any other table.

Use "CREATE TABLE ... AS SELECT ... WHERE date > 31-12-2006 ..."

On success rename aud$ to old, and new-aud to aud$.

Truncate or delete old when ready.

See Oracle doc, or google for "CREATE TABLE AS SELECT" for details and examples.

btw... Maybe you want date based partitioning (a year at a time) for the new table?

hth,
Hein.
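Hein's approach might look like the sketch below. Caveats: the timestamp# column name and the NOLOGGING option are assumptions to verify against your 8i data dictionary, all steps must run as SYS, and you should consider disabling auditing for the duration of the swap.

```sql
-- Copy only the rows to keep; NOLOGGING minimizes redo/archive generation
-- for the copy itself.
CREATE TABLE sys.aud$_new NOLOGGING AS
  SELECT * FROM sys.aud$
   WHERE timestamp# >= TO_DATE('2007-01-01', 'YYYY-MM-DD');

-- Swap the tables (RENAME operates on objects in the current schema,
-- so run this connected as SYS).
RENAME aud$ TO aud$_old;
RENAME aud$_new TO aud$;

-- Truncate (or drop) the old data once you are satisfied with the swap.
TRUNCATE TABLE sys.aud$_old;
```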


Re: Delete millions of records in a table - easier methode needed

Hi,

Agree with Hein van den Heuvel; the best way is to create a tmp table using CREATE TABLE tmp AS SELECT ... WHERE ... (make sure that your tablespace has enough free space to create the tmp table).

rename aud$ to xxx
rename tmp to aud$

And you also need to recreate the indexes on the new aud$.

This method normally does not generate a lot of redo.

If you have access to Metalink have a look at Note:1019377.6
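Vadim's index step might be sketched as follows. The index name and column list below assume the stock i_aud1 definition that Oracle's sql.bsq creates on aud$; confirm what actually exists in your dictionary before recreating anything.

```sql
-- First check which indexes the old table had:
--   SELECT index_name, column_name
--     FROM dba_ind_columns
--    WHERE table_name = 'AUD$';
-- Then recreate them on the new table, e.g. (assumed definition):
CREATE INDEX sys.i_aud1 ON sys.aud$ (sessionid, ses$tid);
```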

Regards
Vadim
Yogeeraj_1
Honored Contributor

Re: Delete millions of records in a table - easier methode needed

hi Nirmal,

As mentioned by Hein and Vadim, unless you need to make sense of the data in the sys.aud$ table, you may simply truncate the table.

As a precautionary measure, you can backup the data to a temporary table using:

create table tmp_sys_aud as select * from sys.aud$;
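With the backup in place, the purge itself is a single statement. TRUNCATE is DDL: it deallocates the segment, generates almost no redo (which addresses the archive-log concern), and cannot be rolled back, so only run it once you are sure the backup copy is good.

```sql
-- Irreversible: verify tmp_sys_aud first, then purge the audit trail.
TRUNCATE TABLE sys.aud$;
```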

good luck!
kind regards
yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)