Query Optimization.
12-26-2003 07:38 AM
I hope someone can help me sort out my query problem.
Let me explain.
1. Every month I get a dump file containing more than 12 million records, and every day a dump file of the same format with about 2 million records.
2. I have created two tables, Account_Detail and Account_Detail_Delta. Account_Detail is the master table, and Account_Detail_Delta is a temporary table populated by SQL*Loader.
3. I update the master table from the delta table.
4. But I always run into "rollback segment too small" errors, and the update takes a very long time. Sometimes it completes without any problem.
5. So the Account_Detail table is not updated on a regular basis, and my clients see stale billing information most of the time.
6. Please have a look at my procedure and SQL*Loader control file. Give me a useful solution and optimize the query if you can.
Attached file:
1. ABD.zip (ABD_Delta.prc, ABDBatchLoader, abd_delta.ctl).
The zip file contains three files.
Thanks and regards
GSM
12-26-2003 08:08 AM
Re: Query Optimization.
Put rollback on your fastest disk. Oracle recommends RAID 1 or RAID 10, not RAID 5.
Make a large temporary rollback area and use that.
I'm not sure I can help with the query itself; I'm not so gifted in that arena.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
12-26-2003 08:16 AM
Re: Query Optimization.
I think the rollback segment is on RAID 5, and I can't move it now. I asked an HP expert, and he said HP-UX 11.00 is using AutoRAID on my server. If I went to RAID 0+1 I would have to redo everything from scratch, which I can't afford at this time.
Once again, thanks for the prompt reply.
GSM
12-26-2003 10:49 AM
Re: Query Optimization.
I had a transactional database on RAID 5 on a VA and it was very slow; after I changed it to RAID 0+1 it was much faster.
If you can't move all the data to RAID 0+1, move at least the rollback segments to RAID 0+1 or RAID 1.
* Check the rollback tablespace; it may not have enough free space.
* Does the "rollback too small" error come together with a snapshot-too-old error?
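A quick way to check the free space (a sketch; assumes you can query the DBA views):
select tablespace_name, sum(bytes)/1024/1024 as mb_free
from dba_free_space
group by tablespace_name;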
E.
12-26-2003 10:24 PM
Solution
Optimized approach:
==================
delete from account_detail
where acctnumber in ( select acctnumber from account_detail_delta);
Check how many duplicates you have on acctnumber.
If there are many, you are doing a lot of useless deletes!
Use (select distinct acctnumber ...) in that case.
This makes the subselect a little more expensive,
but the delete should benefit from it.
If there are no duplicates, leave it as it is.
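A quick check for duplicates (a sketch):
select count(*) as total_rows, count(distinct acctnumber) as distinct_accts
from account_detail_delta;
If the two counts differ, the delta table contains duplicate account numbers.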
------------------------------------------
insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a;
This loads everything from the delta table into the master table within one transaction.
Depending on the indexes built on account_detail, this can be an expensive operation, especially in terms of rollback.
Since you deleted data right before, an index rebuild might be better in any case (see the sketch below). So:
1) drop secondary non-unique indexes on account_detail
2) do the insert
3) rebuild the indexes
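A sketch of the drop/insert/rebuild cycle; the index name and column here are hypothetical, so substitute the real secondary indexes from your creation script:
drop index ad_last_bill_ix;
-- ... run the insert above ...
create index ad_last_bill_ix on account_detail (last_bill_date) nologging;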
------------------------------------------
delete from account_detail
where acctnumber in ( select acctnumber from account_detail_delta where substr(acctnumber, 10, 1)='1');
insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a
where substr(a.acctnumber, 10, 1)='1';
Why do you do this?
You already deleted all rows before you copied them.
Now you delete some of the rows you just copied, only to reinsert them a second time?
----------------------------------------
SQL*Loader procedure:
It might be worth checking whether you can split the data files
into portions of 200,000 records and run the loads in parallel (see the sketch below).
Rebuild the delta table with FREELISTS increased to 5-10
and FREELIST GROUPS set to 2.
Wait for all loads to complete before you call the procedure.
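A sketch of a split-and-load wrapper (the file names and the scott/tiger connect string are placeholders; direct=true and parallel=true are standard SQL*Loader options):
split -l 200000 abd.dat abd_part_
for f in abd_part_*; do
  sqlldr userid=scott/tiger control=abd_delta.ctl data=$f direct=true parallel=true &
done
wait
Note that parallel direct path loads do not maintain indexes on the delta table, so rebuild them after the loads complete.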
Hope this helps
Volker
12-28-2003 03:46 PM
Re: Query Optimization.
I would also add:
You can try using external tables, which give you these advantages:
o merge a flat file with an existing table in one statement.
o sort a flat file on the way into a table you want compressed nicely.
o do a parallel direct path load -- without splitting up the input file and so on
o run sqlldr in effect from a stored procedure or trigger (insert is not sqlldr)
o do multi-table inserts
o flow the data through a pipelined plsql function for cleansing/transformation
and so on.
They let you get data into the database without having to use sqlldr in the first place.
You would not normally query them day to day in an operational system; you use them to load data.
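A minimal sketch of the idea (these are 9i features, as noted later in the thread; the directory name, file name, and the trimmed three-column layout are assumptions for illustration):
create or replace directory data_dir as '/u01/load';
create table abd_ext (
  acctnumber       varchar2(10),
  acct_balance     number,
  last_bill_amount number
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('abd.dat')
)
reject limit unlimited;
merge into account_detail d
using abd_ext s
on (d.acctnumber = s.acctnumber)
when matched then update set
  d.acct_balance = s.acct_balance,
  d.last_bill_amount = s.last_bill_amount
when not matched then insert (acctnumber, acct_balance, last_bill_amount)
values (s.acctnumber, s.acct_balance, s.last_bill_amount);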
hope this helps too!
regards
Yogeeraj
12-28-2003 06:50 PM
Re: Query Optimization.
1. I am using Oracle 8.1.7.0. Will the MERGE command work on Oracle 8i? Could you please send me a sample?
2. Sort a flat file on the way into a table you want compressed nicely: how? I am not an expert in PL/SQL. Could you please send a sample?
3. Run sqlldr in effect from a stored procedure or trigger (insert is not sqlldr): how?
I hope you will not mind my stupid questions.
regards
GSM
12-28-2003 09:02 PM
Re: Query Optimization.
These are not stupid questions at all.
MERGE and external tables are features of the 9i database.
I am sorry that this does not help you much.
best wishes and
regards
Yogeeraj
12-29-2003 06:41 PM
Re: Query Optimization.
Regarding "rollback too small": try increasing the OPTIMAL size; putting the transaction in a large rollback segment may also help.
01-04-2004 08:32 PM
Re: Query Optimization.
To speed up your SQL, you should check its explain plan, like this:
sqlplus username/password
explain plan for
YOUR_SQL_HERE;
@?/rdbms/admin/utlxpls
You could consider creating a function-based index on your delta table, which may speed up your queries.
Also consider creating a global temporary table for your delta, which will reduce your redo.
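Sketches of both ideas on 8i (the object names are hypothetical; function-based indexes on 8i also require the QUERY REWRITE privilege):
-- global temporary table replacing the permanent delta table
create global temporary table account_detail_delta_gtt
on commit preserve rows
as select * from account_detail_delta where 1 = 0;
-- function-based index supporting the substr() filter
alter session set query_rewrite_enabled = true;
create index ad_delta_sub_ix
on account_detail_delta (substr(acctnumber, 10, 1));
Note that rows in a global temporary table are only visible to the session that inserted them, so the load and the procedure would have to run in the same session.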
01-05-2004 07:34 AM
Re: Query Optimization.
Just a couple of suggestions on the rollback issue.
(Do the following in a test environment)
1. Put a commit after the delete. This will reduce the rollback requirements.
2. Create a tablespace with one rollback segment. Autoextend the tablespace, and let the number of extents for the rollback segment be unlimited (see the sketch below).
3. Use the SET TRANSACTION statement to force the transaction to use this rollback segment.
4. Don't use the OPTIMAL option.
5. Observe the rollback resources that the transaction needs.
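A sketch of steps 2, 3, and 5; the tablespace and segment names, file path, and sizes are assumptions:
create tablespace rbs_big_ts
datafile '/u02/oradata/rbs_big01.dbf' size 256M
autoextend on next 64M maxsize unlimited;
create rollback segment rbs_big
tablespace rbs_big_ts
storage (initial 16M next 16M minextents 2 maxextents unlimited);
alter rollback segment rbs_big online;
-- first statement of the batch transaction:
commit;
set transaction use rollback segment rbs_big;
-- afterwards, observe usage:
select n.name, s.writes, s.extends, s.wraps
from v$rollname n, v$rollstat s
where n.usn = s.usn and n.name = 'RBS_BIG';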
Best wishes.
Rick.
01-06-2004 03:31 AM
Re: Query Optimization.
There are two tables:
1. Account_Detail
2. Account_Detail_Delta
There are 9 million records in Account_Detail and 400,000 in Account_Detail_Delta.
Please have a look at the query I am running and tell me how long it should take. I have indexes on acctnumber on both tables.
delete from account_detail where acctnumber in ( select acctnumber from account_detail_delta);
commit;
/* Insert new values */
insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a;
I have a 210 MB rollback segment for this job.
I want to finish this task in the minimum possible time. Please help; this is a challenge for me.
I have applied many tips but have not succeeded yet. I am using Oracle 8i 8.1.7.4 on HP-UX 11.00,
with 4 CPUs and 4 GB RAM.
What do you think about the following?
I have used the IN operator in the query. What would happen if I used EXISTS instead (sketched below)?
And do you think function-based indexes would be useful?
Please give advice.
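For reference, the EXISTS form of the delete would look like this (a sketch, untested against this schema):
delete from account_detail acd
where exists (select 1
              from account_detail_delta acdd
              where acdd.acctnumber = acd.acctnumber);
On 8i the cost-based optimizer often transforms IN and EXISTS into the same plan, so compare the explain plans of both before deciding.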
Thanks
GSM
01-06-2004 08:11 AM
Re: Query Optimization.
Your insert looks good. Assuming that the information gets updated frequently, an update-plus-insert might also work (this assumes both tables have identical column layouts):
update account_detail acd
set (next_bill_date, last_bill_date, last_bill_amount, prev_payment, acct_balance, payment_due_date, prev_bill_amount, net_prev_amt, current_se_charge, total_call, total_current_bill, net_adjustment) =
(select next_bill_date, last_bill_date, last_bill_amount, prev_payment, acct_balance, payment_due_date, prev_bill_amount, net_prev_amt, current_se_charge, total_call, total_current_bill, net_adjustment
from account_detail_delta acdd
where acdd.acctnumber = acd.acctnumber)
where exists (select 1 from account_detail_delta acdd where acd.acctnumber = acdd.acctnumber);
insert into account_detail
select * from account_detail_delta acdd1
where not exists (select 1 from account_detail acd where acd.acctnumber = acdd1.acctnumber);
I doubt these would show a better improvement than what you have, but I am throwing them out there in case you want to try them.
Also, I would make sure that the indexes on those tables are unique. This should help the index lookups as well (no searching for duplicate keys); see the sketch below.
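A sketch of the unique indexes (the index names are hypothetical, and this only works if acctnumber really is unique in each table):
create unique index ad_acct_ux on account_detail (acctnumber);
create unique index add_acct_ux on account_detail_delta (acctnumber);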
Thanks,
Brian
01-06-2004 07:36 PM
Re: Query Optimization.
Please have a look at my table and index creation scripts and advise me.
One more thing: the current process takes more than 4 hours. As I said, there are 9 million records in Account_Detail and 400,000 in Account_Detail_Delta.
Please find the attachment.
Thanks
GSM
01-07-2004 09:45 PM
Re: Query Optimization.
Try the following. (I combined the two statements, so the delete catches both the duplicates from account_detail_delta and all of the accounts ending in '1'. Then you only need one insert statement, which should help some.)
delete from account_detail where acctnumber in (select acctnumber from account_detail_delta) or acctnumber like '%1';
insert into account_detail ( ACCTNUMBER, NEXT_BILL_DATE, LAST_BILL_DATE, LAST_BILL_AMOUNT, PREV_PAYMENT, ACCT_BALANCE, PAYMENT_DUE_DATE, PREV_BILL_AMOUNT, NET_PREV_AMT, CURRENT_SE_CHARGE,
TOTAL_CALL, TOTAL_CURRENT_BILL, NET_ADJUSTMENT)
select ACCTNUMBER, NEXT_BILL_DATE, LAST_BILL_DATE, LAST_BILL_AMOUNT, PREV_PAYMENT, ACCT_BALANCE,
PAYMENT_DUE_DATE, PREV_BILL_AMOUNT, NET_PREV_AMT, CURRENT_SE_CHARGE, TOTAL_CALL, TOTAL_CURRENT_BILL, NET_ADJUSTMENT
from account_detail_delta a;
Let me know how well this works. If you are seeing performance problems on the delete, do the following:
-----------
@?/rdbms/admin/utlxplan
truncate table plan_table;
explain plan for
delete from account_detail where acctnumber in (select acctnumber from account_detail_delta) or acctnumber like '%1';
set linesize 150
set pagesize 1000
column operation format a50
column optimizer format a15
SELECT LPAD(' ',2*(LEVEL-1))||operation operation, optimizer,options, object_name,
position, cost from plan_table START WITH id = 0 CONNECT BY PRIOR id = parent_id;
-----------
If you don't think this will work, let me know.
Thanks,
Brian
01-08-2004 05:00 AM
Re: Query Optimization.
Why this condition: like '%1'?
Thanks
GSM
01-08-2004 07:53 AM
Re: Query Optimization.
The like '%1' means that it would find any accounts ending in 1.
I should verify that all of your accounts are 10 characters long (no shorter values that could match by accident). If they are not, you will need to stay with the substr expression. One possibility is to create a function-based index on it, and then hope that the delete statement does a merge join (we can build a hint to force it).
Brian
01-08-2004 04:44 PM
Re: Query Optimization.
I believe you are looking at my old queries. Please have a look at them after modification.
delete from account_detail where acctnumber in ( select acctnumber from account_detail_delta);
commit;
/* Insert new values */
insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL, NET_ADJUSTMENT
from account_detail_delta a;
You can see my earlier reply where I posted these same queries. Now, should we keep working on this query, or should we think about an alternative approach?
Thanks a lot.
GSM
01-08-2004 04:47 PM
Re: Query Optimization.
Thanks,
Brian
01-08-2004 06:22 PM
Re: Query Optimization.
I am on a three-day holiday; I'll post on Monday.
Thanks a lot for helping.
Warm regards
GSM