
Reg:proc stat file status info on dumping core

Rohith M
Occasional Contributor

Reg:proc stat file status info on dumping core

Hi,

1. The process of dumping core can be prevented with profile settings. Which profile settings would prevent a core dump?

2. The start of a core dump is said to be controlled by the application and affected by environment and OS settings. How do the application, the environment, and OS settings affect when a core dump starts? Could you elaborate on this, i.e. which environments and which settings?


My issue: the status in the process's /proc stat file is set to "dumping core", but the actual core file is written only after 45 minutes. There is no application code that performs any action before the core is dumped. Shouldn't the OS dump the core immediately instead of waiting for a response from the application?

Thanks,
Rohith
Matti_Kurkela
Honored Contributor

Re: Reg:proc stat file status info on dumping core

1.) "ulimit -c " sets the maximum size of a core dump file for the current session (usually a login shell). All the processes started from this session will inherit this value. This is a soft limit: another command can increase the limit back to original value.

"ulimit -H -c " is the same, but as a hard limit. Subsequent commands in the same session can never increase a hard limit, only decrease it.

If the maximum core dump size is set to 0, core dumps are completely disabled; this is the kind of profile setting your first question is about.
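
A process can query and change the same limit programmatically. Below is a minimal C sketch (assuming Linux/glibc; the program itself is made up for illustration) that prints the current core-file limits and then sets the soft limit to 0, the equivalent of "ulimit -c 0":

/* Sketch: inspect and disable the core file size limit (like "ulimit -c 0"). */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* RLIM_INFINITY prints as -1 with this cast. */
    printf("soft limit: %lld, hard limit: %lld\n",
           (long long)rl.rlim_cur, (long long)rl.rlim_max);

    rl.rlim_cur = 0;  /* soft limit 0 disables core dumps for this process */
    if (setrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}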


2.) Any application may be programmed to call ulimit(), getrlimit(), setrlimit() and/or sysconf(), which read or change the same limit values as the "ulimit" shell command.
The application can also trigger a core dump itself by calling the abort() library function, as sketched below.
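
A minimal sketch (again assuming Linux/glibc) that raises the soft core limit back up to the hard limit and then calls abort() to request a dump:

/* Sketch: restore the soft core limit, then trigger a dump with abort(). */
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;  /* a soft limit may be raised up to the hard limit */
        setrlimit(RLIMIT_CORE, &rl);
    }
    abort();  /* raises SIGABRT; the default action is to dump core */
}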

/proc/sys/kernel/core_* pseudo-files control how the core files are named. More information is available in the Linux kernel source documentation (linux-/Documentation/sysctl/kernel.txt).
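
For example, a small sketch that just reads the current naming pattern from /proc/sys/kernel/core_pattern (one of those pseudo-files):

/* Sketch: print the kernel's current core file naming pattern. */
#include <stdio.h>

int main(void)
{
    char pattern[256];
    FILE *f = fopen("/proc/sys/kernel/core_pattern", "r");

    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    if (fgets(pattern, sizeof pattern, f) != NULL)
        printf("core_pattern: %s", pattern);
    fclose(f);
    return 0;
}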

Regarding your issue:
When the kernel decides to dump a process into a core file, the process is no longer allowed to execute any instructions: effectively, it's frozen at the moment that caused the core dump. The resources assigned to the process are cleaned up by the kernel, just as if the process were killed using "kill -9". But instead of simply freeing the memory used by the process, the kernel first dumps an image of all memory pages allocated to the process, along with some information about the process's state.

Any process can install customized handlers for the signals (kernel events) that normally cause a core dump, to do some things instead of, or in addition to, the standard core dump; see the sketch below.
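
As an illustration (the handler name and the deliberate fault are invented for this sketch), here is a handler for SIGSEGV that logs a message and then re-raises the signal with the default disposition restored, so the core dump still happens:

/* Sketch: log on SIGSEGV, then re-raise it so the default core dump occurs. */
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_sigsegv(int sig)
{
    /* Only async-signal-safe calls are allowed in a signal handler. */
    const char msg[] = "caught SIGSEGV, re-raising for core dump\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    signal(sig, SIG_DFL);  /* restore the default action (dump core) */
    raise(sig);
}

int main(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigsegv;
    sigaction(SIGSEGV, &sa, NULL);

    volatile int *p = NULL;
    *p = 42;  /* deliberate null-pointer write to trigger the handler */
    return 0;
}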

The 45-minute delay in dumping the core is surprising. Could the process have been holding a very large number of open files, network connections or other resources? Or was the machine severely overloaded (insufficient physical memory, "thrashing", i.e. swapping a lot and not getting much done)?
MK