Operating System - HP-UX

create & load table dynamically

 
Edgar_8
Regular Advisor


Hi,

Does anyone know how one can load tables dynamically with SQL*Loader? What I mean is: we receive CSV files (named in the format yyyymmddhhmi.dat) whose records carry a date/time value (format yyyymmddhhmiss). Can SQL*Loader read this value, query the DB schema to see whether a table for that timestamp exists and load the data into it, or, if the table does not exist, first create the table and then load the data? If so, how?

Or alternatively does anyone know how the above could be done within a script?

Your assistance is most appreciated.

Thanks in advance!
5 REPLIES
Yogeeraj_1
Honored Contributor

Re: create & load table dynamically

Hi,
Are you running a 9i DB? This is much easier than in 8i.

If yes, you can use the external tables facility as an alternative to SQLLoader...

Here is what you need to do:
For external tables, you need to use a directory object

Assuming your file is in the /tmp directory we do the following mapping:

create or replace directory data_dir as '/tmp/'
/

Then you have to create the external table.
You will observe that part of its definition is what looks like a control file

create table external_table
( EMPNO    NUMBER(4),
  ENAME    VARCHAR2(10),
  JOB      VARCHAR2(9),
  MGR      NUMBER(4),
  HIREDATE DATE,
  SAL      NUMBER(7,2),
  COMM     NUMBER(7,2),
  DEPTNO   NUMBER(2)
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters
  ( fields terminated by ',' )
  location ('yd.dat')
)
/

In /tmp/yd.dat you have a file that looks like this:

369,RAJ,CLERK,902,17-JUL-04,800,,20
499,KAM,SALESMAN,698,20-JUL-04,1600,300,30
521,STEVEN,SALESMAN,698,22-JUL-04,1250,500,30
....

(it's a dump of the yd table in CSV format)


Now you can just:

select * from external_table
/

Doing a direct path load would simply be:
insert /*+ append */ into yd select * from external_table;

hope this helps!

regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Edgar_8
Regular Advisor

Re: create & load table dynamically

Hi Yogeeraj,

Believe it or not, I have already created an external table, but I'm still puzzled as to how I can dynamically update the external table, since approx. 5000 files need to be loaded per day. Can one specify, within the "create table" of the external table, that it should read multiple files as they are dropped in the source directory?

Thanks in advance!
Yogeeraj_1
Honored Contributor

Re: create & load table dynamically

hi edgar,

Sorry, I misunderstood the real problem. I believe SQLLDR will be the easiest option.

Can you please try this example? It allows you to use a single ctl file to load all of the .dat files in a directory:
===========================================
#!/bin/sh
#
# Load multiple files found in the directory
# usage: ./loadmultfiles.sh *.dat or ./loadmultfiles.sh YD*.dat
#

export ORACLE_HOME=/d01/product/ora817
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_SID=yddb
until [ -z "$1" ]
do
sqlldr yd/yd control=data.ctl data="$1"
echo "Loaded $1"
shift
done

echo Done.
exit 0
===========================================

Please let me know how it goes; I don't have a server at hand to test right now.

regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
R. Allan Hicks
Trusted Contributor

Re: create & load table dynamically

I may be doing something similar to your application. See if the following has something you can adapt to your needs.

I have an application where a remote machine ftp's a particular file name to a particular directory.

A PERL script runs periodically and checks the directory for the pre-agreed-upon file names.

When the PERL script detects a file that it handles (one PERL script for each file name), it moves the file from the ftp location to the external table directory.

There it is uncompressed, and the PERL script kicks off sqlplus with a script that reads the external table, applies business rules to the data and, based upon those rules, places it into an internal table.

Once you have a working external table defined, you can replace the original flat files with new flat files and the external table changes to show the new data. You just have to co-ordinate the sql access to the table so that you don't have the data changing under a process.

I use the scripts to co-ordinate the changes. One script handles moving the file and invoking the sql script that processes the data; no other sql process touches the external table. Likewise, only one script watches for the arrival of the flat file. This way a new file is not started until the current one is completely processed.
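
If it helps, here is a bare-bones sketch of that watcher script. The paths, file name and login are placeholders (the real ones depend on your setup), and process_data.sql stands in for the sql script that applies the business rules:

#!/usr/contrib/bin/perl -w
# Sketch: watch the ftp drop directory for the pre-agreed file name,
# move it to the external table directory, uncompress it and kick
# off sqlplus.  All names below are placeholders.
use strict;
use File::Copy qw(move);

my $ftp_dir = '/ftp/incoming';          # where the remote machine ftp's the file
my $ext_dir = '/u01/external_tables';   # external table directory
my $file    = 'incoming.dat.Z';         # pre-agreed file name

if (-f "$ftp_dir/$file") {
    # move first, so a late-arriving replacement can't change the
    # data while the load is running
    move("$ftp_dir/$file", "$ext_dir/$file") or die "move failed: $!";
    system("uncompress $ext_dir/$file") == 0 or die "uncompress failed";
    # read the external table, apply the business rules, load the internal table
    system('sqlplus -s scott/tiger @process_data.sql') == 0
        or warn "sqlplus exited with $?";
}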

I have another application that moves data from a crude database (just a group of flat files really) to Oracle.

A PERL script reads the flat file directory and gets the timestamp of each file.

The PERL script passes a parsed version of the file name to a sql script. (The file name is really an intelligent key. Please don't jump on me for having an intelligent key; a third-party vendor designed the flat file database. Intelligent keys can be a really bad idea.)

The sql script gets the update time for the rows in the internal table that correspond to the flat file. The internal table has an insert/update trigger that tracks each row's modified time.

If the flat file is newer than the internal table data, the flat file is copied to the external table directory and another sql script runs to reconcile the changes between the external table and the internal table.

As the internal table is updated, the triggers fire and the rows get new modified dates, so the process won't update them again.

I probably should modify the code to disable the trigger while the reconciliation process is going on and replace the modified date with the file's timestamp. Oh well, something to do for the next version.
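
In outline, that comparison loop looks something like the sketch below. The helper scripts (get_modtime.sql, reconcile.sql), the login and the paths are all made up for illustration; get_modtime.sql is assumed to print the newest modified time of the matching rows as a plain epoch number:

#!/usr/contrib/bin/perl -w
# Sketch: reconcile a flat file only when it is newer than the
# corresponding rows in the internal table.
use strict;
use File::Copy qw(copy);

my $flat_dir = '/data/flatfiles';
my $ext_dir  = '/u01/external_tables';

foreach my $file (glob "$flat_dir/*.dat") {
    my $file_time = (stat $file)[9];            # file's mtime (epoch seconds)
    my ($key)     = $file =~ m{/(\w+)\.dat$};   # the parsed "intelligent key"

    # newest modified time of the matching rows, as kept up to date
    # by the insert/update trigger on the internal table
    chomp(my $db_time = `sqlplus -s scott/tiger \@get_modtime.sql $key`);
    $db_time ||= 0;                             # no rows yet, so always load

    if ($file_time > $db_time) {
        copy($file, "$ext_dir/my_external_table.txt") or die "copy failed: $!";
        system("sqlplus -s scott/tiger \@reconcile.sql $key");
    }
}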

As to your application......

I think I'm hearing that you want to move file data into a table where the table name is based on the timestamp?

You might consider having an external table with a fixed name (ex: my_external_table) and the external table directory already set up (in this example, I called it external_tables). You might have a script like this...

make_table 200406300920.dat

make_table's contents...

#!/usr/contrib/bin/perl -w

# copy the incoming file over the external table's flat file
system("cp $ARGV[0] $ENV{ORACLE_HOME}/external_tables/my_external_table.txt");

# prune off the .dat (the name part is yyyymmddhhmi, 12 characters)
$my_table = substr($ARGV[0], 0, 12);

# note: oracle doesn't allow unquoted table names that start with a number,
# so I added an arbitrary letter (D) to the beginning of the table name
system("sqlplus scott/tiger \@table_builder.sql D$my_table");

table_builder.sql has....

create table &1
as (select * from my_external_table);


This should create a table D200406300920 with the same structure (and data) as the external table.
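
If you also need the "create it only if it does not already exist" part of your original question, make_table could ask the data dictionary first. This is just a sketch: count_table.sql and append_rows.sql are hypothetical helper scripts, where count_table.sql would simply select count(*) from user_tables for the name passed in as &1, and append_rows.sql would do an insert ... select from the external table:

# decide between CREATE TABLE ... AS SELECT and a plain
# INSERT ... SELECT, depending on whether the table already exists
my $tab = "D$my_table";
chomp(my $count = `sqlplus -s scott/tiger \@count_table.sql $tab`);
if ($count == 0) {
    system("sqlplus scott/tiger \@table_builder.sql $tab");  # create and load
} else {
    system("sqlplus scott/tiger \@append_rows.sql $tab");    # just load
}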

I hope that this helped.
"Only he who attempts the absurd is capable of achieving the impossible
Edgar_8
Regular Advisor

Re: create & load table dynamically

Hi Allan,

Thanks for the feedback, I think you have the processes I need to implement. Would it be possible to share the source code of your reconciliation process, provided it's not proprietary/intellectual property?

Thanks in advance!