core file dump
12-17-2002 04:57 AM
I have a core file dump with the following messages:
Memory allocation failed during locale processing.
Out of memory.
memory_resident
Out of memory.
error: Memory allocation failed
error: Memory allocation failed
error: Shared memory allocation failed
error: Shared memory attach failure
error: Shared memory lock failure
Signal 7: not enough memory available
Signal 7: not enough memory available
Error<2>: Out of memory while tracing stack.
Error<3>: Out of memory while tracing stack.
Out of memory while reading in symbol table of %s
Error<1>: Out of memory while tracing stack.
Error<1>: Out of memory while tracing stack.
Does anyone know whether this is a kernel issue, or whether memory, virtual memory, or swap space needs increasing?
Thanks in advance,
Trystan.
12-17-2002 05:11 AM
Re: core file dump
Yes to your question(s).
Post the output from:
swapinfo -mt
kmtune
Also, what OS level is this, and is it 32-bit or 64-bit?
live free or die
harry
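For readers following the thread, a minimal sketch of how to gather what harry asks for on HP-UX 11.x (the egrep pattern is just an illustrative filter for the memory-related tunables):
swapinfo -mt                 # swap usage in MB, with a totals line
kmtune | egrep 'maxdsiz|maxssiz|maxtsiz|dbc_|shmmax'   # memory-related kernel tunables
uname -r                     # OS level
getconf KERNEL_BITS          # prints 32 or 64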
12-17-2002 05:23 AM
Re: core file dump
Where did those messages come from: strings, or a debugger?
Is there anything in syslog.log at the time the core was generated?
What do
what core
and
file core
produce?
If it worked before and you changed nothing memory-related (i.e. patches, etc.), it's probably not a memory problem; more likely application configuration.
Later,
Bill
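For reference, a sketch of the commands Bill mentions, assuming the dump has the default name core:
file core             # identifies the file type and which program dumped the core
what core             # prints the @(#) version strings embedded in the file
strings core | more   # scans the dump for readable text, such as the messages quoted above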
12-17-2002 05:54 AM
Re: core file dump
I've attached the info.
Thanks,
Trystan.
12-17-2002 06:07 AM
Solution
1. With 6 GB of memory on the machine, dbc_max_pct is high; you can bring it down to 8. dbc_max_pct is the tunable that caps the dynamic buffer cache as a percentage of physical memory.
2. You should increase the maxssiz and maxdsiz parameters, which define the maximum stack size and data size for a process. The default value is 64 MB; increase it.
If the application's executable is 64-bit, you should increase maxssiz_64bit and maxdsiz_64bit instead.
Revert.
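A hedged sketch of how these tunables are typically changed on HP-UX 11.x; the values below are illustrative only, and the change requires a kernel rebuild and a reboot:
kmtune -q dbc_max_pct            # check current values first
kmtune -q maxssiz
kmtune -q maxdsiz
kmtune -s dbc_max_pct=8          # stage new values in /stand/system
kmtune -s maxssiz=0x08000000     # e.g. 128 MB stack
kmtune -s maxdsiz=0x40000000     # e.g. 1 GB data segment
mk_kernel                        # build the new kernel
kmupdate                         # schedule it for the next boot
shutdown -ry 0                   # reboot to activate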
12-17-2002 07:00 AM
Re: core file dump
12-17-2002 01:37 PM
Re: core file dump
I'd go to the application vendor and check the whole kernel configuration against their requirements.
Oracle did something like this to me (what has it NOT done to me?) and it turned out several kernel parameters were below the specs in the install guide.
Guess I should have read more carefully.
Steve
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
12-17-2002 01:44 PM
Re: core file dump
For almost all applications, a maxtsiz of 256 MB is plenty.
A maxssiz of 64 MB is very generous; typically 32 MB is plenty, and large stack requirements usually imply poorly written code. The most difficult to judge is maxdsiz, because a program can validly require large amounts of dynamic memory (1 GB for 32-bit processes, far larger for 64-bit processes). No resources are consumed by setting these to large values, BUT you do open the door to a single process grabbing all the system's resources. You should also consider the maximum size of a single shared memory segment, which is limited by shmmax.
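To see where a given system stands on the limits discussed here, a quick check (kmtune on HP-UX 11.x; later releases use kctune instead):
kmtune -q maxtsiz    # text (code) segment ceiling
kmtune -q maxssiz    # stack ceiling
kmtune -q maxdsiz    # data segment ceiling
kmtune -q shmmax     # largest single shared memory segment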