
Query Optimization.

 
SOLVED

Hi All,

I hope someone can help me sort out my query problem.

Let me explain.
1. Every month I receive a dump file containing more than 12 million records; every day I receive the same kind of dump file with about 2 million records.
2. I have created two tables, Account_Detail and Account_Detail_Delta. Account_Detail is the master table and Account_Detail_Delta is a temporary table populated by SQL*Loader.
3. I update the master table from the delta table.
4. But I keep getting the "rollback segment too small" problem, and the update takes a very long time. Sometimes it completes without any problem.
5. As a result, the account_detail table is not updated on a regular basis and my clients see old billing information most of the time.

6. Please have a look at my procedure and SQL*Loader control file. Please give me a useful solution and optimize the query if you can.

Attached file:
1. ABD.zip, which contains three files (ABD_Delta.prc, ABDBatchLoader, abd_delta.ctl).

Thanks and regards

GSM
Ghulam
Steven E. Protter
Exalted Contributor

Re: Query Optimization.

Recommendations:

Put rollback on your fastest disks. Oracle recommends RAID 1 or RAID 10, not RAID 5.

Create a large rollback segment for this batch job and use that.
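A minimal 8i-style sketch of that idea (the tablespace and segment names here are made up, not from the attached files):

```sql
-- Create one large rollback segment on the fast disks
-- (tablespace rbs_big and segment big_rbs are hypothetical names):
CREATE ROLLBACK SEGMENT big_rbs
  TABLESPACE rbs_big
  STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20 OPTIMAL 500M);

ALTER ROLLBACK SEGMENT big_rbs ONLINE;

-- In the batch session, bind each load transaction to it;
-- this must be the first statement of the transaction:
SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;
```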

I'm not sure I can help with the query itself; I'm not so gifted in that arena.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com

Re: Query Optimization.

Thanks a lot SEP,

I think the rollback segment is on RAID 5, and I can't move it now. I asked an HP expert and he said my server (HP-UX 11.00) is using AutoRAID. If I go for RAID 0+1, I would need to redo everything from scratch, which I can't afford at this time.

Once again, thanks for the prompt reply.

GSM
Ghulam
Emilio Brusa
Frequent Advisor

Re: Query Optimization.

Hi:
I had a transactional database on RAID 5 on a VA array and it was very slow; after I changed to RAID 0+1 it was much faster.

If you can't move all the data to RAID 0+1,
at least move the rollback segments to RAID 0+1 or RAID 1.

*Check the rollback tablespace; maybe it doesn't have enough free space.

*Does the "rollback too small" error come together with a "snapshot too old" error?

E.
Volker Borowski
Honored Contributor
Solution

Re: Query Optimization.

Hi GSM,

Optimize approach:
==================

delete from account_detail
where acctnumber in ( select acctnumber from account_detail_delta);

Check how many duplicates you have on acctnumber.
If there are many, you are doing a lot of useless deletes!
Use (select distinct acctnumber ...) in that case.
This makes the subselect a little more expensive,
but the delete should benefit from it.
If there are no duplicates, leave it as it is.

------------------------------------------

insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a;


This loads everything from the delta table into the master table in one transaction.
Depending on the indexes built on account_detail, this can be expensive, especially in terms of rollback.
Since you deleted data right before, an index rebuild might be better in any case. So:
1) drop the secondary, non-unique indexes on account_detail
2) do the insert
3) rebuild the indexes
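A sketch of those three steps (the index name and indexed column are illustrative, not taken from the attached files):

```sql
-- 1) Drop the secondary, non-unique indexes before the mass insert
DROP INDEX acct_detail_billdate_ix;          -- hypothetical index name

-- 2) Do the insert exactly as shown above

-- 3) Rebuild the indexes afterwards; NOLOGGING keeps the redo down
CREATE INDEX acct_detail_billdate_ix
  ON account_detail (next_bill_date) NOLOGGING;
```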

------------------------------------------

delete from account_detail
where acctnumber in ( select acctnumber from account_detail_delta where substr(acctnumber, 10, 1)='1');

insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a
where substr(a.acctnumber, 10, 1)='1';

Why do you do this?
You already deleted all of these rows before you copied them.
Now you delete some of the rows you just copied, only to re-insert them a second time?

----------------------------------------

sqlloader procedure:

It might be worth checking whether you can split the data files
into portions of 200,000 records and run the loads in parallel.

Rebuild the delta table with FREELISTS increased to 5-10
and FREELIST GROUPS set to 2.
Wait for all loads to complete before you call the procedure.
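The rebuilt delta table might look like this (column list abbreviated; the datatypes are assumptions):

```sql
-- More freelists so that parallel loader sessions do not
-- serialize on the segment's free-block lists:
CREATE TABLE account_detail_delta (
  acctnumber      VARCHAR2(20),
  next_bill_date  DATE
  -- ... remaining columns as before
)
STORAGE (FREELISTS 10 FREELIST GROUPS 2);
```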


Hope this helps
Volker
Yogeeraj_1
Honored Contributor

Re: Query Optimization.

hi GSM,

I would also add:

You can also try using external tables which gives you these advantages:

o merge a flat file with an existing table in one statement.
o sort a flat file on the way into a table you want compressed nicely.
o do a parallel direct path load -- without splitting up the input file and so on
o run sqlldr in effect from a stored procedure or trigger (insert is not sqlldr)
o do multi-table inserts
o flow the data through a pipelined plsql function for cleansing/transformation

and so on.

They allow you to get data into the database without having to use sqlldr in the first place.

You would not normally query them day to day in an operational system, you use them to load data.
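For illustration, a minimal external table over a dump file could look like this in 9i (directory path, file name, and datatypes are assumptions):

```sql
CREATE DIRECTORY dump_dir AS '/data/dumps';   -- hypothetical path

CREATE TABLE account_detail_ext (
  acctnumber      VARCHAR2(20),
  next_bill_date  DATE
  -- ... remaining columns
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY dump_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('abd_delta.dat')                  -- hypothetical file name
);
```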

hope this helps too!

regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)

Re: Query Optimization.

Dear Yogeeraj,

I am using Oracle 8.1.7.0. Will the MERGE command work on Oracle 8i? Could you please send me a sample?

2. Sort a flat file on the way into a table you want compressed nicely.

How? I am not an expert in PL/SQL. Could you please send a sample?

3. Run sqlldr in effect from a stored procedure or trigger (insert is not sqlldr).

How?


I hope you don't mind my stupid questions.

regards

GSM
Ghulam
Yogeeraj_1
Honored Contributor

Re: Query Optimization.

hi,
These are not stupid questions at all.

MERGE and external tables are features of the 9i database.

I am sorry that this does not help you much.
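For what it's worth, on 8i the usual substitute for MERGE is an UPDATE of the matching rows followed by an INSERT of the new ones. A sketch with the column list abbreviated to two columns:

```sql
-- Update accounts that already exist in the master table:
UPDATE account_detail m
   SET (next_bill_date, last_bill_date) =      -- remaining columns analogous
       (SELECT d.next_bill_date, d.last_bill_date
          FROM account_detail_delta d
         WHERE d.acctnumber = m.acctnumber)
 WHERE EXISTS (SELECT 1
                 FROM account_detail_delta d
                WHERE d.acctnumber = m.acctnumber);

-- Insert accounts that are new:
INSERT INTO account_detail (acctnumber, next_bill_date, last_bill_date)
SELECT d.acctnumber, d.next_bill_date, d.last_bill_date
  FROM account_detail_delta d
 WHERE NOT EXISTS (SELECT 1
                     FROM account_detail m
                    WHERE m.acctnumber = d.acctnumber);
```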

best wishes and
regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Printaporn_1
Esteemed Contributor

Re: Query Optimization.

Hi GSM,

Regarding "rollback too small": try increasing the OPTIMAL size; putting the transaction in a large rollback segment may also help.
enjoy any little thing in my life
zhuchao
Advisor

Re: Query Optimization.

Hi, to use a specific rollback segment, call dbms_transaction.use_rollback_segment. Note that you must call it again after every commit.
To speed up your SQL, you should check its explain plan,
like:
sqlplus username/password
explain plan for
YOUR_SQL_HERE;

@?/rdbms/admin/utlxpls

You could consider creating a function-based index on your delta table, which may speed up your queries.
Also consider creating a global temporary table for your delta, which will reduce your redo.
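A sketch of that global temporary delta table (column list abbreviated; the datatypes are assumptions):

```sql
-- Rows are visible only to the session that inserted them and
-- generate much less redo than a permanent table:
CREATE GLOBAL TEMPORARY TABLE account_detail_delta_gtt (
  acctnumber      VARCHAR2(20),
  next_bill_date  DATE
  -- ... remaining columns as in account_detail_delta
)
ON COMMIT PRESERVE ROWS;
```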
www.happyit.net