Operating System - HP-UX

Re: Large amount of data loading into ORACLE database

SOLVED
Indira Aramandla
Honored Contributor

Re: Large amount of data loading into ORACLE database

Hi Alex,

Yes, as Jean mentioned, the sqlldr utility with the direct=true option is one way to load data into a table. A direct path load is much faster than a conventional path load, but it entails several restrictions.

A direct path load parses the input records according to the field specifications, converts the input field data to the column datatypes, and builds a column array. The column array is passed to a block formatter, which creates data blocks in Oracle database block format. The newly formatted database blocks are written directly to the database, bypassing most RDBMS processing. During a direct path load, some integrity constraints are automatically disabled. When the load completes, the integrity constraints are reenabled automatically if the REENABLE clause is specified.
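A direct path load along these lines might look like the following sketch. The file names, table name, columns, and the scott/tiger credentials are all placeholders for illustration, not details from this thread:

```shell
# Control file describing the input records and target table
# (hypothetical names: mytable.dat, mytable, and its columns)
cat > load_mytable.ctl <<'EOF'
LOAD DATA
INFILE 'mytable.dat'
APPEND
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id, name, created_date DATE 'YYYY-MM-DD')
EOF

# direct=true selects the direct path: rows bypass the SQL engine
# and are written to the datafiles as preformatted data blocks
sqlldr userid=scott/tiger control=load_mytable.ctl \
       log=load_mytable.log direct=true
```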
During a direct path load, performance is improved by using temporary storage. After each block is formatted, the new index keys are put into a sort (temporary) segment. The old index and the new keys are merged when the load finishes to create the new index. The old index, the sort (temporary) segment, and the new index segment all require storage until the merge is complete; then the old index and the temporary segment are removed.
You can improve the performance of direct path loads by presorting your data on the indexed columns. Presorting minimizes temporary storage requirements during the load, and it also lets you take advantage of high-performance sorting routines that are optimized for your operating system or application.
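When the input file is already sorted on the indexed columns, the SORTED INDEXES clause in the control file tells the direct path loader it can skip the sort. A minimal sketch, again with hypothetical file, table, and index names:

```shell
# SORTED INDEXES is only honoured on direct path loads; it declares
# that mytable.dat is already in mytable_pk key order, so the loader
# can skip the temporary sort segment
cat > load_sorted.ctl <<'EOF'
LOAD DATA
INFILE 'mytable.dat'
APPEND
INTO TABLE mytable
SORTED INDEXES (mytable_pk)
FIELDS TERMINATED BY ','
(id, name)
EOF

sqlldr userid=scott/tiger control=load_sorted.ctl direct=true
```

Note that if the data turns out not to be in key order, the affected index is left unusable and must be rebuilt.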

You can also use the index maintenance options available with sqlldr.
SKIP_INDEX_MAINTENANCE={TRUE | FALSE} stops index maintenance for direct path loads; it does not apply to conventional path loads. It causes the index partitions that would have had index keys added to them to be marked Index Unusable instead, because the index segment is inconsistent with respect to the data it indexes. Index segments that are not affected by the load retain the state they had prior to the load. Avoiding index maintenance is quite reasonable when the number of rows to be loaded is large compared to the size of the table.
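Deferring index maintenance and rebuilding afterwards could be sketched like this (control file, credentials, and index name are placeholders; the rebuild step must be run for every index the load left unusable):

```shell
# Load with index maintenance deferred; affected indexes are marked
# UNUSABLE instead of being updated row by row
sqlldr userid=scott/tiger control=load_mytable.ctl direct=true \
       skip_index_maintenance=true

# Afterwards, find and rebuild the unusable indexes
sqlplus -s scott/tiger <<'EOF'
SELECT index_name FROM user_indexes WHERE status = 'UNUSABLE';
ALTER INDEX mytable_pk REBUILD;
EOF
```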

And you said that you are loading the data and retrieving it at the same time, and you also mentioned that you had turned archive logging off. So I assume the users are only querying the data during the day, say from 7 AM to 6 PM. In that case you can collect the data from your source into flat ASCII files and schedule the data load at different intervals: one set early in the morning, one in the afternoon, and another in the evening or at night. Or you could use John's method of keeping current and old data in two tables instead of 10.

Never give up, Keep Trying
Yogeeraj_1
Honored Contributor
Solution

Re: Large amount of data loading into ORACLE database

hi alex,

have you considered partitioning the table?

Rebuilding an index is a very costly activity; unless you have a really good reason, it is not advisable. If you have set PCTFREE properly for the table and use locally managed tablespaces, there should not be any major problems.
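A partitioned table in a locally managed tablespace could look like the sketch below. All names, sizes, and partition boundaries are illustrative only; with range partitioning by date, old data can be dropped or exchanged a partition at a time instead of being deleted row by row:

```shell
sqlplus -s scott/tiger <<'EOF'
-- Locally managed tablespace with uniform extents
CREATE TABLESPACE load_data
  DATAFILE '/u01/oradata/load_data01.dbf' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M;

-- Range-partitioned table; PCTFREE reserves room for row updates
CREATE TABLE mytable (
  id           NUMBER,
  name         VARCHAR2(100),
  created_date DATE
)
PCTFREE 10
TABLESPACE load_data
PARTITION BY RANGE (created_date) (
  PARTITION p2004q1 VALUES LESS THAN (TO_DATE('2004-04-01','YYYY-MM-DD')),
  PARTITION p2004q2 VALUES LESS THAN (TO_DATE('2004-07-01','YYYY-MM-DD')),
  PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);
EOF
```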

Are you also recalculating your statistics (if you are using the cost-based optimizer, of course)?
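Refreshing statistics after a large load can be done with DBMS_STATS; the schema and table names below are placeholders:

```shell
sqlplus -s scott/tiger <<'EOF'
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'MYTABLE',
    cascade          => TRUE,   -- gather index statistics as well
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
EOF
```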

There are also new features in Oracle 9i and 10g concerning data loading...

hope this helps too!

regards
yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)
Alex Lavrov.
Honored Contributor

Re: Large amount of data loading into ORACLE database

Thank you for your help, we'll try all these things.
I don't give a damn for a man that can only spell a word one way. (M. Twain)