
core file dump

SOLVED
trystan macdonald
Occasional Advisor

core file dump

Hi,
I have a core file dump with the following messages:
Memory allocation failed during locale processing.
Out of memory.
memory_resident
Out of memory.
error: Memory allocation failed
error: Memory allocation failed
error: Shared memory allocation failed
error: Shared memory attach failure
error: Shared memory lock failure
Signal 7: not enough memory available
Signal 7: not enough memory available
Error<2>: Out of memory while tracing stack.
Error<3>: Out of memory while tracing stack.
Out of memory while reading in symbol table of %s
Error<1>: Out of memory while tracing stack.
Error<1>: Out of memory while tracing stack.

Does anyone know if this is a kernel issue or if the memory / virtual memory space or swap space needs increasing?
Thanks in advance,
Trystan.

7 REPLIES
harry d brown jr
Honored Contributor

Re: core file dump

Trystan,

yes to your question(s).

Post the output from
swapinfo -mt
kmtune

and also, what OS level, and # of bits (32 or 64) ?
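If it helps, the requested details can be gathered in one pass. A sketch only: swapinfo, kmtune, and getconf KERNEL_BITS are HP-UX commands, so each call is guarded and the loop degrades gracefully on other systems:

```shell
# Collect the requested diagnostics in one go. swapinfo, kmtune, and
# getconf KERNEL_BITS exist on HP-UX; guard each call so the loop still
# runs (and says what is missing) on other systems.
for cmd in "swapinfo -mt" "kmtune" "uname -r" "getconf KERNEL_BITS"; do
    set -- $cmd                      # split the command from its arguments
    if command -v "$1" >/dev/null 2>&1; then
        echo "== $cmd =="
        $cmd
    else
        echo "== $cmd == (not available on this system)"
    fi
done
```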

live free or die
harry
Live Free or Die
Bill McNAMARA_1
Honored Contributor

Re: core file dump

depends.. where/how did you get that output?
from strings or a debugger?

Is there anything in the syslog.log at the time of core generation?

what does a
what core
and
file core
produce?
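A small guarded sketch of that triage, in case it helps (the core path is a placeholder to adjust, and what(1) may be absent outside HP-UX):

```shell
CORE=./core    # placeholder path; point this at the actual core file
if [ -f "$CORE" ]; then
    file "$CORE"                                      # which binary dumped, and why
    command -v what >/dev/null 2>&1 && what "$CORE"   # SCCS version strings, if any
    strings "$CORE" | tail -20                        # trailing strings often show the error
else
    echo "no core file at $CORE"
fi
```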

If it worked before and you changed nothing memory-related (i.e. patches, etc.), it's probably not memory related.. possibly app configuration.

Later,
Bill
It works for me (tm)
trystan macdonald
Occasional Advisor

Re: core file dump

Harry,

I've attached the info.

Thanks,

Trystan.
T G Manikandan
Honored Contributor
Solution

Re: core file dump

I would suggest two things:

1. For 6GB of memory on the machine, dbc_max_pct is high; you can bring it down to 8. dbc_max_pct is the dynamic buffer cache allocation parameter (the maximum percentage of memory the buffer cache may use).

2. You should increase the maxssiz and maxdsiz parameters. These parameters define the stack size and data size limits for a process. The default values of these parameters are 64MB; increase this.

If the application's executable is 64-bit, then you should increase maxssiz_64 and maxdsiz_64 instead.
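For what it's worth, on HP-UX 11.x such changes are usually made with kmtune followed by a kernel rebuild and reboot (or via sam). A guarded sketch, where the commented-out values are illustrations, not recommendations:

```shell
# Inspect (and, commented out, raise) the tunables mentioned above.
# kmtune and mk_kernel are HP-UX-only, so guard the call.
if command -v kmtune >/dev/null 2>&1; then
    kmtune -q dbc_max_pct -q maxssiz -q maxdsiz       # show current values
    # kmtune -s dbc_max_pct=8          # example: cap buffer cache at 8%
    # kmtune -s maxdsiz=0x10000000     # example: 256 MB data segment
    # mk_kernel -o /stand/vmunix && shutdown -r now   # rebuild kernel, reboot
else
    echo "kmtune not available on this system"
fi
```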


Revert
trystan macdonald
Occasional Advisor

Re: core file dump

How should the maxssiz,maxdsiz parameters be changed? Is this a trial and error process i.e. increase by 10% and monitor behaviour... or are there guidelines on how to set these parameters with regard to the application running on the machine?
Steven E. Protter
Exalted Contributor

Re: core file dump

First, if you're new to kernel tuning, don't be afraid to use sam. It's a crutch, but it helps.

I'd go to the application vendor and check the whole kernel.

Oracle did something like this to me (what has it NOT done to me?) and it turned out several kernel parameters were below the specs in the install guide.

Guess I should have read more carefully.

Steve
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
A. Clay Stephenson
Acclaimed Contributor

Re: core file dump

Warning: these are general rules only.

For almost all applications, a maxtsiz of 256MB is plenty. A maxssiz of 64MB is very generous; typically 32MB is plenty, and large stack size requirements imply poorly written code.

The most difficult to judge is maxdsiz, because programs can validly require large amounts of dynamic memory (1GB for 32-bit processes, and far more for 64-bit processes). No resources are consumed by setting these to large values, BUT you do open the door to a single process grabbing all the system resources. You should also consider the maximum size of a single shared memory segment, which is limited by shmmax.
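The effect of a too-small data limit can be demonstrated on any box with ulimit (a bash/ksh builtin here standing in for the role maxdsiz plays on HP-UX); the 64 MB cap and 128 MB request below are arbitrary illustration values:

```shell
# Cap virtual memory at 64 MB in a subshell, then ask dd for a 128 MB
# buffer; the allocation fails just as it would under a small maxdsiz.
(
    ulimit -v 65536 2>/dev/null    # 64 MB; ulimit -v takes kilobytes
    if dd if=/dev/zero of=/dev/null bs=128M count=1 2>/dev/null; then
        echo "allocation succeeded"
    else
        echo "allocation failed: limit too small"
    fi
)
```

The subshell matters: the lowered limit dies with it, leaving the login shell untouched.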
If it ain't broke, I can fix that.