Query Optimization.
12-26-2003 07:38 AM
I hope someone can help me sort out my query problem.
Let me explain.
1. Every month I receive a dump file containing more than 12 million records, and every day a dump file of the same format with about 2 million records.
2. I have created two tables, Account_Detail and Account_Detail_Delta. Account_Detail is the master table; Account_Detail_Delta is a temporary table populated by SQL*Loader.
3. I then update the master table from the delta table.
4. But I keep getting "rollback segment too small" errors, and the update takes a very long time; only sometimes does it complete without any problem.
5. As a result the account_detail table is not updated on a regular basis, and my clients see old billing information most of the time.
6. Please have a look at my procedure and SQL*Loader control file. Give me a useful solution and optimize the query if you can.
Attached file:
1. ABD.zip, containing three files: ABD_Delta.prc, ABDBatchLoader, abd_delta.ctl.
Thanks and regards
GSM
12-26-2003 08:08 AM
Re: Query Optimization.
Put rollback on your fastest disks. Oracle recommends RAID 1 or RAID 10, not RAID 5.
Create a large temporary rollback segment and use that for the batch job.
I'm not sure I can help with the query itself; I'm not so gifted in that arena.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
12-26-2003 08:16 AM
Re: Query Optimization.
I think the rollback segment is on RAID 5, and I can't move it now. I asked an HP expert, who said HP-UX 11.00 is using AutoRAID on my server. If I go for RAID 0+1 I would need to rebuild everything from scratch, which I can't afford at this time.
Once again, thanks for the prompt reply.
GSM
12-26-2003 10:49 AM
Re: Query Optimization.
I had a transactional database on RAID 5 in a VA array and it was very slow; after I changed to RAID 0+1 it was much faster.
If you can't move all the data to RAID 0+1, at least move the rollback segments to RAID 0+1 or RAID 1.
* Check the rollback tablespace; it may be out of space.
* Does the "rollback too small" error come together with a snapshot-too-old error?
E.
12-26-2003 10:24 PM
Solution
Optimized approach:
==================
delete from account_detail
where acctnumber in ( select acctnumber from account_detail_delta);
Check how many duplicates you have on acctnumber.
If there are many, you are doing a lot of useless deletes!
Use (select distinct acctnumber ...) in that case.
This makes the subselect a little more expensive,
but the delete should benefit from it.
If there are no duplicates, leave it as it is.
------------------------------------------
insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a;
This loads everything from the delta table into the master table within one transaction.
Depending on the indexes built on account_detail, this can be an expensive thing, especially in terms of rollback.
Since you deleted data right before, an index rebuild might be better in any case. So:
1) drop the secondary non-unique indexes on account_detail
2) do the insert
3) rebuild the indexes
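The three steps above might look like this (a sketch only: the index name and its column are hypothetical, and the insert assumes both tables share the same column layout, as the statement above suggests):

```sql
-- 1) drop a hypothetical non-unique secondary index before the bulk insert
DROP INDEX acct_detail_bill_date_ix;

-- 2) do the insert; the APPEND hint requests a direct-path insert,
--    which generates far less undo/redo than a conventional insert
INSERT /*+ APPEND */ INTO account_detail
SELECT * FROM account_detail_delta;
COMMIT;

-- 3) rebuild the index after the load
CREATE INDEX acct_detail_bill_date_ix
    ON account_detail (next_bill_date);
```

Note that after a direct-path insert the table cannot be queried in the same session until you commit, so the commit belongs between steps 2 and 3.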
------------------------------------------
delete from account_detail
where acctnumber in ( select acctnumber from account_detail_delta where substr(acctnumber, 10, 1)='1');
insert into account_detail ( ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,
PREV_PAYMENT,ACCT_BALANCE,PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,
TOTAL_CALL,TOTAL_CURRENT_BILL,NET_ADJUSTMENT)
select ACCTNUMBER,NEXT_BILL_DATE,LAST_BILL_DATE,LAST_BILL_AMOUNT,PREV_PAYMENT,ACCT_BALANCE,
PAYMENT_DUE_DATE,PREV_BILL_AMOUNT,NET_PREV_AMT,CURRENT_SE_CHARGE,TOTAL_CALL,TOTAL_CURRENT_BILL,
NET_ADJUSTMENT
from account_detail_delta a
where substr(a.acctnumber, 10, 1)='1';
Why do you do this?
You already deleted all these rows before you copied them.
Now you delete some of the rows you just copied, only to reinsert them a second time?
----------------------------------------
SQL*Loader procedure:
It might be worth checking whether you can split the data files
into portions of 200,000 records and run the loads in parallel.
Rebuild the delta table with FREELISTS increased to 5-10
and FREELIST GROUPS set to 2.
Wait for all loads to complete before you call the procedure.
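The delta-table rebuild could be sketched as below (column list abbreviated to a few of the columns named in the thread; the storage values are illustrative, not recommendations). For the parallel load itself, each sqlldr session would then point at its own split file with a direct-path load.

```sql
-- sketch: recreate the delta table with more freelists so that
-- several concurrent loader sessions do not contend on block allocation
CREATE TABLE account_detail_delta (
    acctnumber      VARCHAR2(20),
    next_bill_date  DATE,
    last_bill_date  DATE,
    acct_balance    NUMBER
)
STORAGE (FREELISTS 10 FREELIST GROUPS 2);
```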
Hope this helps
Volker
12-28-2003 03:46 PM
Re: Query Optimization.
I would also add: you can try using external tables, which give you these advantages:
o merge a flat file with an existing table in one statement.
o sort a flat file on the way into a table you want compressed nicely.
o do a parallel direct path load -- without splitting up the input file and so on
o run sqlldr in effect from a stored procedure or trigger (insert is not sqlldr)
o do multi-table inserts
o flow the data through a pipelined plsql function for cleansing/transformation
and so on.
They let you get data into the database without having to use sqlldr in the first place.
You would not normally query them day to day in an operational system; you use them to load data.
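As a sketch of what an external table definition looks like (the directory path, file name, and column list here are hypothetical, and this is 9i syntax):

```sql
-- expose the flat file as a read-only table; loading then becomes
-- a plain INSERT ... SELECT instead of a sqlldr run
CREATE DIRECTORY abd_dir AS '/data/abd';

CREATE TABLE account_detail_ext (
    acctnumber      VARCHAR2(20),
    next_bill_date  DATE,
    acct_balance    NUMBER
)
ORGANIZATION EXTERNAL (
    TYPE oracle_loader
    DEFAULT DIRECTORY abd_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        (acctnumber, next_bill_date DATE 'YYYYMMDD', acct_balance)
    )
    LOCATION ('abd_dump.dat')
)
REJECT LIMIT UNLIMITED;
```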
hope this helps too!
regards
Yogeeraj
12-28-2003 06:50 PM
Re: Query Optimization.
I am using Oracle 8.1.7.0. Will the MERGE command work on Oracle 8i? Could you please send me a sample?
2. Sort a flat file on the way into a table you want compressed nicely.
How? I am not an expert in PL/SQL. Could you please send a sample?
3. Run sqlldr in effect from a stored procedure or trigger (insert is not sqlldr).
How?
I hope you will not mind my stupid questions.
regards
GSM
12-28-2003 09:02 PM
Re: Query Optimization.
These are not stupid questions at all.
MERGE and external tables are features of the 9i database, so they will not work on 8.1.7.
I am sorry that this does not help you much.
best wishes and
regards
Yogeeraj
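For reference, on 9i the delete-then-insert pair could be collapsed into a single MERGE, roughly like this (a sketch: the column list is abbreviated to a few of the columns named in the thread, and 9i requires both the MATCHED and NOT MATCHED branches):

```sql
-- one statement replaces "delete matching rows, then reinsert from delta"
MERGE INTO account_detail d
USING account_detail_delta s
   ON (d.acctnumber = s.acctnumber)
 WHEN MATCHED THEN UPDATE SET
        d.next_bill_date = s.next_bill_date,
        d.acct_balance   = s.acct_balance
 WHEN NOT MATCHED THEN INSERT (acctnumber, next_bill_date, acct_balance)
      VALUES (s.acctnumber, s.next_bill_date, s.acct_balance);
```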
12-29-2003 06:41 PM
Re: Query Optimization.
Regarding "rollback too small": try increasing the OPTIMAL size of the rollback segments; also, putting the transaction in a large rollback segment may help.
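That advice could be sketched like this (segment and tablespace names and the sizes are illustrative only):

```sql
-- create a large rollback segment dedicated to the batch job
CREATE ROLLBACK SEGMENT rbs_batch
    TABLESPACE rbs_big
    STORAGE (INITIAL 100M NEXT 100M OPTIMAL 500M MINEXTENTS 4);

ALTER ROLLBACK SEGMENT rbs_batch ONLINE;

-- in the batch session, pin the transaction to it;
-- this must be the first statement of the transaction
SET TRANSACTION USE ROLLBACK SEGMENT rbs_batch;
```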
01-04-2004 08:32 PM
Re: Query Optimization.
To speed up your SQL, you should first check its explain plan, like:
sqlplus username/password
explain plan for
YOUR_SQL_HERE;
@?/rdbms/admin/utlxpls
You can consider creating a function-based index on your delta table, which will speed up your queries.
Also consider creating a global temporary table for your delta, which will reduce your redo.
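These two suggestions could look roughly like the following (a sketch: the column list is abbreviated, the index name is made up, and on 8i a function-based index additionally requires the cost-based optimizer and the QUERY REWRITE privilege):

```sql
-- global temporary delta table: rows are session-private and generate
-- no redo for the data itself; PRESERVE ROWS keeps them until logoff
CREATE GLOBAL TEMPORARY TABLE account_detail_delta_gtt (
    acctnumber      VARCHAR2(20),
    next_bill_date  DATE,
    acct_balance    NUMBER
) ON COMMIT PRESERVE ROWS;

-- function-based index matching the substr() filter used in the procedure
CREATE INDEX abd_delta_sub_ix
    ON account_detail_delta (SUBSTR(acctnumber, 10, 1));
```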