Memory Fault (coredump) on Oracle data import

John Love_3
Regular Advisor

Memory Fault (coredump) on Oracle data import

I'm running a job to import data from an HP3000 to Oracle.
Some time into the 8 million records I get a "Memory fault(coredump)" message.
What are the likely reasons?
Keep in mind this is an N-class with only 2GB of RAM, using only the two 18GB onboard drives. HP-UX 11.0 patched to the June patch bundle.
What can be done and what is the likely culprit here?

Thanks in advance to all!
7 REPLIES
A. Clay Stephenson
Acclaimed Contributor

Re: Memory Fault (coredump) on Oracle data import

Hi John:

You don't give us a lot to go on, and you didn't mention the Oracle version, but I did have a similar problem with Oracle 8.0.x imports. There were errors in the alert.log file and in the trace files which led me to the problem. In my case, a bug caused the io_slaves to fail under heavy I/O. Commenting out the following init.ora entries fixed it for me:
#disk_async_io=false
#tape_async_io=false
#dbwr_io_slaves=4
#lgwr_io_slaves=8
#arch_io_slaves=4

You will probably have better luck using MetaLink at support.oracle.com.

Regards, Clay
If it ain't broke, I can fix that.
G. Vrijhoeven
Honored Contributor

Re: Memory Fault (coredump) on Oracle data import

Hi,

I think your problem is kernel related. Check out the kernel parameter settings:

# sysdef

Here is a manual which explains the settings:

http://docs.hp.com/hpux/onlinedocs/os/KCparams.OverviewAll.html
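For the two tunables most often implicated in import core dumps, a quick check might look like this (a sketch; kmtune is the 11.0 tunable query tool, and the guard is only there so the sketch also runs on a non-HP-UX box):

```shell
# Query the process data- and stack-segment limits on HP-UX 11.0.
# The guard keeps this runnable on systems without kmtune.
if command -v kmtune >/dev/null 2>&1; then
  kmtune -q maxdsiz    # max data segment size per process
  kmtune -q maxssiz    # max stack segment size per process
else
  echo "kmtune not found - not an HP-UX system"
fi
```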


Hope this will help,

Gideon

Tom Geudens
Honored Contributor

Re: Memory Fault (coredump) on Oracle data import

Hi,
8 million ...
My guess would be that you're running into a kernel parameter like maxdsiz. On our data warehouse server (8 million in one go, must be a data warehouse or something like it :-) this is set to 256MB.
Have you tried using a "direct load" (and rebuilding indexes afterwards) ?

If that doesn't work ... try splitting up the input file (if possible).

You've probably tried all these things already ... but I thought them worth mentioning.
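Splitting the input could be as simple as this (a sketch using a tiny stand-in file; the real export file and whatever loader control file you use are of course specific to your setup):

```shell
# Tiny stand-in for the real 8-million-record export
seq 1 10 > export.dat

# Cut it into fixed-size chunks; for the real file use something
# like -l 1000000 to get ~1M-record pieces
split -l 3 export.dat part_

# Show what we got; each part_* chunk would then be fed to the
# loader one at a time
wc -l part_*
```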
Tom Geudens
A life ? Cool ! Where can I download one of those from ?
John Love_3
Regular Advisor

Re: Memory Fault (coredump) on Oracle data import

Sorry, I'm not much of an Oracle guy.

The version is 8.1.6 (32bit)

The kernel parameters should have been altered per Oracle's recommendations.

I'll try breaking up the file, and looking at the io_***** entries.

Thanks again.
Gatis Visnevskis
Occasional Advisor

Re: Memory Fault (coredump) on Oracle data import


Of course it is maxdsiz, which by default is 67MB.

1) Use SAM to set the following kernel parameters:
maxdsiz=256MB
maxssiz=256MB

2) It is very likely that you have a memory leak in the Oracle process. Consider patching Oracle to 8.1.6.2 or 8.1.6.3 (32-bit).

3) In any case you must set the kernel parameters as in 1) sooner or later, because the latest Oracle versions relink their executables, and relinking will fail with insufficient memory/swap under the default kernel parameters.

NT is faster from 0 to 60, but you need UNIX to get to 100
Mark Greene_1
Honored Contributor

Re: Memory Fault (coredump) on Oracle data import

have you tried running "strings" against the core file?
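For example (demonstrated on /bin/sh since there's no core file at hand; on the real dump substitute "core" for "/bin/sh"):

```shell
# 'file' names the program that produced a core dump; 'strings' pulls
# readable text (error messages, environment) out of the binary.
# Guards keep the sketch runnable where these tools aren't installed.
file /bin/sh 2>/dev/null || echo "file(1) not installed"
strings /bin/sh 2>/dev/null | head -5 || true
```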

--
mark
the future will be a lot like now, only later
John Love_3
Regular Advisor

Re: Memory Fault (coredump) on Oracle data import

Gatis,
Still got the core dump about halfway through (4 million records) after changing maxdsiz to 256MB. It was at 67MB as you suggested. Thanks for that.

Mark, I hadn't done strings on the core file, but will.

Also, the Oracle version is 8.1.7 (my mistake before).

Any other ideas?