
Query Optimization.

 
SOLVED

Hi All,

I hope someone can help me sort out my query problem.

Let me explain.
1. Every month I get a dump file containing more than 12 million records, and every day a dump file of the same format with about 2 million records.
2. I have created two tables, Account_Detail and Account_Detail_Delta. Account_Detail is the master table and Account_Detail_Delta is a temporary table populated by SQL*Loader.
3. I update the master table from the delta table.
4. But I keep getting a "rollback too small" problem, and the update takes a very, very long time. Sometimes it completes without any problem.
5. So the Account_Detail table is not updated on a regular basis, and my clients get old billing information most of the time.

6. Please have a look at my procedure and SQL*Loader control file. Give me a useful solution and optimize the query if you can.

Attached file:
1. ABD.zip (ABD_Delta.prc, ABDBatchLoader, abd_delta.ctl).
The zip file contains these three files.

Thanks and regards

GSM
Ghulam
19 REPLIES
Steven E. Protter
Exalted Contributor

Re: Query Optimization.

Recommendations:

Put rollback on your fastest disk. Oracle recommends RAID 1 or RAID 10, not RAID 5.

Make a large temporary rollback area and use that.

I'm not sure I can help with the query, not so gifted in that arena.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com

Re: Query Optimization.

Thanks a lot SEP,

I think the rollback segment is on RAID 5, and I can't move it now. I asked an HP expert and he said HP-UX 11.00 is using AutoRAID on my server. If I went to RAID 0+1 I would have to redo everything from scratch, which I can't afford at this time.

Once again, thanks for the prompt reply.

GSM
Ghulam
Emilio Brusa
Frequent Advisor

Re: Query Optimization.

Hi:
I had a transactional database on RAID 5 in a VA and it was very slow; after I changed to RAID 0+1 it was much faster.

If you can't move all the data to RAID 0+1,
move the rollback segments to RAID 0+1 or RAID 1.

* Check the rollback tablespace; maybe it doesn't have enough space.

* Does the "rollback too small" come with a snapshot-too-old error?

E.
Volker Borowski
Honored Contributor
Solution

Re: Query Optimization.

Hi GSM,

Optimize approach:
==================

delete from account_detail
where acctnumber in ( select acctnumber from account_detail_delta);

Check how many duplicates you have on acctnumber.
If there are many, you are doing a lot of useless deletes!
Use (select distinct acctnumber ...) in that case.
That makes the subselect a little more expensive,
but the delete should benefit from it.
If there are no duplicates, leave it this way.

------------------------------------------

insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a;


This loads everything from the delta table into the master table within one transaction.
Depending on the indexes built on account_detail, this can be expensive, especially in terms of rollback.
Since you deleted data right before, an index rebuild might be better in any case. So:
1) drop secondary non-unique indexes on account_detail
2) do the insert
3) rebuild the indexes
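
As a sketch of those three steps (the index name and column are illustrative, not taken from your schema):

-- 1) Drop a secondary, non-unique index before the mass insert
--    (name and column are hypothetical).
drop index account_detail_bill_ix;

-- 2) Run the DELETE and INSERT above.

-- 3) Rebuild afterwards; NOLOGGING skips most redo generation
--    for the index build.
create index account_detail_bill_ix
  on account_detail (next_bill_date)
  nologging;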

------------------------------------------

delete from account_detail
where acctnumber in ( select acctnumber from account_detail_delta where substr(acctnumber, 10, 1)='1');

insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a
where substr(a.acctnumber, 10, 1)='1';

Why do you do this?
You already deleted all of these rows before you copied them.
Now you delete some of the rows you just copied, only to re-insert them a second time?

----------------------------------------

sqlloader procedure:

It might be worth checking whether you can split the data files
into portions of 200,000 records and run the loads in parallel.

Rebuild the delta table with FREELISTS increased to 5-10
and FREELIST GROUPS set to 2.
Wait for all loads to complete before you call the procedure.


Hope this helps
Volker
Yogeeraj_1
Honored Contributor

Re: Query Optimization.

hi GSM,

i would also add..

You can also try using external tables which gives you these advantages:

o merge a flat file with an existing table in one statement.
o sort a flat file on the way into a table you want compressed nicely.
o do a parallel direct path load -- without splitting up the input file and so on
o run sqlldr in effect from a stored procedure or trigger (insert is not sqlldr)
o do multi-table inserts
o flow the data through a pipelined plsql function for cleansing/transformation

and so on.

They allow you to get data into the database without having to use sqlldr in the first place.

You would not normally query them day to day in an operational system, you use them to load data.
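
As a rough sketch of how that looks on 9i (directory path, file name, and columns are illustrative, and most columns are omitted):

create or replace directory load_dir as '/data/loads';

create table account_detail_ext (
  acctnumber       varchar2(10),
  last_bill_amount number
  -- remaining columns would mirror account_detail
)
organization external (
  type oracle_loader
  default directory load_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('abd_dump.dat')
);

-- then merge straight from the file into the master table:
merge into account_detail d
using account_detail_ext e
on (d.acctnumber = e.acctnumber)
when matched then
  update set d.last_bill_amount = e.last_bill_amount
when not matched then
  insert (acctnumber, last_bill_amount)
  values (e.acctnumber, e.last_bill_amount);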

hope this helps too!

regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)

Re: Query Optimization.

Dear Yogeeraj,

I am using Oracle 8.1.7.0. Will the MERGE command work on Oracle 8i? Could you please send me a sample?

2. Sort a flat file on the way into a table you want compressed nicely.

How? I am not an expert in PL/SQL. Could you please send a sample?

3. Run sqlldr in effect from a stored procedure or trigger (insert is not sqlldr).

How?


I hope you will not mind my stupid questions.

regards

GSM
Ghulam
Yogeeraj_1
Honored Contributor

Re: Query Optimization.

hi,
these are not stupid questions at all.

MERGE and external tables are features of the 9i database, so they will not work on your 8.1.7 release.

I am sorry that this does not help you much.

best wishes and
regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)
Printaporn_1
Esteemed Contributor

Re: Query Optimization.

Hi GSM,

Regarding "rollback too small": try increasing the OPTIMAL size; putting the transaction in a large rollback segment may also help.
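
For example (segment name and sizes are illustrative):

-- raise OPTIMAL so the segment does not shrink back between extents
alter rollback segment rbs_large
  storage (optimal 200M);

-- in the load session, immediately after each commit:
set transaction use rollback segment rbs_large;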
enjoy any little thing in my life
zhuchao
Advisor

Re: Query Optimization.

Hi, to use a specific rollback segment, use dbms_transaction.USE_ROLLBACK_SEGMENT. You must call it again after every commit.
To speed up your SQL, check its explain plan,
like:
sqlplus username/password
explain plan for
YOUR_SQL_HERE;

@?/rdbms/admin/utlxpls

You can consider creating a function-based index on your delta table, which may speed up the delete.
Also consider creating a global temporary table for your delta, which will reduce your redo.
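
A sketch of such a delta table (names and columns are illustrative; note that rows in a global temporary table are only visible to the session that inserted them, so the load and the procedure must run in the same session):

create global temporary table account_detail_delta_gtt (
  acctnumber       varchar2(10),
  last_bill_amount number
  -- remaining columns would mirror account_detail_delta
) on commit preserve rows;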
www.happyit.net
Rick Meijer
Occasional Advisor

Re: Query Optimization.

Hello,

Just a couple of suggestions on the rollback issue.

(Do the following in a test environment)

1. Put a commit after the delete. This will reduce the rollback requirements.

2. Create a tablespace with one rollback segment. Autoextend the tablespace. Let the number of extents for the rollback segment be unlimited.

3. Use the Set Transaction statement to force the transaction to use this rollback segment.

4. Don't use the optimal option.

5. Observe the rollback resources that the transaction needs.
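
A sketch of steps 2 and 3 (names, file path, and sizes are illustrative):

create tablespace rbs_batch
  datafile '/u02/oradata/rbs_batch01.dbf' size 500M
  autoextend on next 50M maxsize unlimited;

create rollback segment rbs_batch_rs
  tablespace rbs_batch
  storage (initial 10M next 10M maxextents unlimited);

alter rollback segment rbs_batch_rs online;

-- first statement of the big transaction:
set transaction use rollback segment rbs_batch_rs;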

Best wishes.

Rick.

Re: Query Optimization.

Hi, I am repeating my question again.

There are two tables.
1. Account_Detail
2. Account_Detail_Delta

There are 9 million records in Account_Detail and 400,000 records in Account_Detail_Delta.

Please have a look at the query I am running and tell me how long it should take. I have indexes on acctnumber on both tables.

delete from account_detail where acctnumber in ( select acctnumber from account_detail_delta);

commit;

/* Insert new values */

insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a;


I have a 210MB rollback segment for this job.

I want to finish this task in the minimum time. Please help; this is a challenge for me.

I have applied many tips but have not succeeded yet. I am using Oracle 8i (8.1.7.4) on HP-UX 11.00,
4 CPUs and 4GB RAM.

What do you think about the following?
I have used IN in the query. What will happen if I use EXISTS instead?

Do you think function-based indexes would be useful?
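
For example, the EXISTS version of the delete would be something like:

delete from account_detail d
where exists (select 1 from account_detail_delta dd
              where dd.acctnumber = d.acctnumber);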

Please give advice.


Thanks

GSM



Ghulam
Brian Crabtree
Honored Contributor

Re: Query Optimization.

If you are going to do this, verify that you have an index on acctnumber on both the ACCOUNT_DETAIL and ACCOUNT_DETAIL_DELTA tables. This should improve the performance of the delete. The index on ACCOUNT_DETAIL_DELTA will need to be rebuilt before running it, but you will most likely see an overall time improvement.

Your insert looks good. Assuming that the information gets updated frequently, these commands might also work:

update account_detail acd set xxx = xxx where exists (select 1 from account_detail_delta acdd where acd.acctnumber = acdd.acctnumber);

insert into account_detail
select * from account_detail_delta acdd
where not exists
(select 1 from account_detail acd
where acd.acctnumber = acdd.acctnumber);

I doubt these would show better improvement than what you have, but I am throwing them out there in case you want to try them.

Also, I would make sure that the indexes on those tables are unique. That should help with the index as well (no searching for duplicate keys).

Thanks,

Brian

Re: Query Optimization.

Dear Brian,


Please have a look at my table and index creation scripts and advise me.
One more thing: the current process is taking more than 4 hours. As I said, there are 9 million records in Account_Detail and 400,000 records in Account_Detail_Delta.

Please find attachment.

Thanks

GSM


Ghulam
Brian Crabtree
Honored Contributor

Re: Query Optimization.

Ok.

Try the following. (I combined the two statements, so the delete catches both the duplicates from account_detail_delta and all of the accounts ending in '1'. Then you only need one insert statement, which should help some.)

delete from account_detail where acctnumber in (select acctnumber from account_detail_delta) or acctnumber like '%1';

insert into account_detail ( ACCTNUMBER, NEXT_BILL_DATE, LAST_BILL_DATE, LAST_BILL_AMOUNT, PREV_PAYMENT, ACCT_BALANCE, PAYMENT_DUE_DATE, PREV_BILL_AMOUNT, NET_PREV_AMT, CURRENT_SE_CHARGE,
TOTAL_CALL, TOTAL_CURRENT_BILL, NET_ADJUSTMENT)
select ACCTNUMBER, NEXT_BILL_DATE, LAST_BILL_DATE, LAST_BILL_AMOUNT, PREV_PAYMENT, ACCT_BALANCE,
PAYMENT_DUE_DATE, PREV_BILL_AMOUNT, NET_PREV_AMT, CURRENT_SE_CHARGE, TOTAL_CALL, TOTAL_CURRENT_BILL, NET_ADJUSTMENT
from account_detail_delta a;

Let me know how well this works. If you are seeing performance problems on the delete, do the following:
-----------
@?/rdbms/admin/utlxplan
truncate table plan_table;
explain plan for
delete from account_detail where acctnumber in (select acctnumber from account_detail_delta) or acctnumber like '%1';

set linesize 150
set pagesize 1000
column operation format a50
column optimizer format a15
SELECT LPAD(' ',2*(LEVEL-1))||operation operation, optimizer,options, object_name,
position, cost from plan_table START WITH id = 0 CONNECT BY PRIOR id = parent_id;
-----------

If you don't think this will work, let me know.

Thanks,

Brian

Re: Query Optimization.

Dear Brian,

Why this condition "like '%1'";?

Thanks

GSM

Ghulam
Brian Crabtree
Honored Contributor

Re: Query Optimization.

I'm not sure that the substr would allow the index to be used in the query. The LIKE should allow the index to be used and give the same functionality.

The like '%1' means it will match any account number ending in 1.

You should verify that all of your account numbers are 10 characters long (no shorter lengths that could match by accident). If there are shorter ones, you will need to stay with the substr form. One possibility is to create a function-based index on that expression, and then hope that the delete statement does a merge join on it (we can build a hint to force it).
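
A sketch of that function-based index (the name is illustrative; on 8i the optimizer will only use it with the cost-based optimizer, QUERY_REWRITE_ENABLED set, and the QUERY REWRITE privilege granted):

create index acct_delta_sub_ix
  on account_detail_delta (substr(acctnumber, 10, 1));

alter session set query_rewrite_enabled = true;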

Brian

Re: Query Optimization.

Dear Brian,

I believe you are looking at my old queries. Please have a look at them after modification.

delete from account_detail where acctnumber in ( select acctnumber from account_detail_delta);

commit;

/* Insert new values */

insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL, NET_ADJUSTMENT
from account_detail_delta a;


You can see my third-to-last reply, where I posted the same queries. Now, should we keep working on this query, or think about alternative approaches?


Thanks a lot.

GSM
Ghulam
Brian Crabtree
Honored Contributor

Re: Query Optimization.

Can you run and post the explain plan on the query that takes the longest?

Thanks,

Brian

Re: Query Optimization.

Dear Brian,

I am on a three-day holiday. I'll post it on Monday.

Thanks a lot for helping.

warm regds

GSM
Ghulam