
Core Size

 
QUIRICONI
Occasional Advisor

Core Size

Hi all,

I've seen some messages about how to minimize the size of a core file.
I have the same problem: a core is generated
on my platform and it is very large (around 700 MB). If I redirect this core with one of these commands:
ln -s /dev/null core
or ulimit -c 0

can I have a swap problem or any other memory problem? I mean, if my core is generated in RAM rather than written to a filesystem, is there any impact on my platform?

Thanks in advance for your replies,
Jean-Remi QUIRICONI.
7 REPLIES
A. Clay Stephenson
Acclaimed Contributor

Re: Core Size

You know that by limiting the size of the core file you are treating the symptom rather than the real problem, but no, there is no harm here other than that the dump data itself is lost for debugging purposes. The data is not written to RAM but rather to the null device. In that respect, it's really no different from writing to any other device (tape, disk, ...).
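
A quick way to convince yourself that the null device costs nothing (the sizes here are just an illustration):

# dd if=/dev/zero of=/dev/null bs=1024k count=700

This "writes" 700 MB into the null device and the data simply disappears, with no filesystem, RAM or swap impact, which is exactly what happens to the redirected core.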
If it ain't broke, I can fix that.
Jeff Schussele
Honored Contributor

Re: Core Size

Hi Jean-Remi,

If I understand your question correctly, then the answer is no, because the core file IS what was in memory when the process was dumped.
So it was already there in memory; the system then writes that image to disk because of a failure or a direct request to dump.
When you redirect core to /dev/null it simply vanishes into the ether. It goes neither back to memory nor to disk.

HTH,
Jeff
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!
Robert Gamble
Respected Contributor

Re: Core Size

I think you are trying to prevent core files from filling up your file systems. If this is the case, I suggest you create a zero-byte file named core, then change its permissions so it cannot be overwritten by non-root users.

(as root, in the directory where the core file gets written)
# cd $DIR
# touch core
# chmod 444 core

Of course, if core files *do* represent an issue, they should be dealt with properly, so this is just a quick-and-dirty way of working around a larger problem.

Hope this is what you were looking for ...
Kent Ostby
Honored Contributor

Re: Core Size

Some martian keeps getting a 700 MB spam email. At least I think that's where /dev/null stuff gets sent. :-)

Your approach as noted above should cause no issues, except that you can't fix the underlying problem.

If you don't know where the core is coming from, you can let it dump once and type 'file core' to find out.

Another alternative is to prevent the "core" file altogether by simply creating a sub-directory named "core".

This is my method of choice.
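
For example (the path is only an illustration):

# cd /dir/where/the/core/appears
# mkdir core

The kernel can then no longer create a regular file named "core" in that directory, so no dump is written at all.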

Best regards,

Kent M. Ostby
"Well, actually, she is a rocket scientist" -- Steve Martin in "Roxanne"
QUIRICONI
Occasional Advisor

Re: Core Size

thank you for your replies,

In fact I know what is causing the core, but I need to analyse other files generated at the same time (some log files).
The core file is filling the file system, so I need to reduce its size in order to investigate.

Jean-Remi.
Rob_132
Regular Advisor

Re: Core Size

Sometimes,

strings core|more

will give a hint as to what program generated the core dump in the first place. No guarantees, but this may be a logical first step towards troubleshooting the real issue.
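
For example (the program name is purely illustrative, and the exact output format may vary):

# file core
core:   core file from 'myprog' - received SIGSEGV
# strings core | more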

Rob
Jeff Schussele
Honored Contributor

Re: Core Size

Hi (again) Jean-Remi,

You can mitigate the problem by changing the working directory to a dir with enough free space *before* starting the offending process.

You can do it manually:
cd /new/larger/dir
/opt/your/process

Or from a wrapper script that changes directory before launching the process.

Because the core is dumped into the process's current working directory, you can force the dump to go wherever you wish by starting the process from a directory with enough free space.
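
A minimal wrapper sketch (the script name and paths are just placeholders):

#!/usr/bin/sh
# start the process from a directory with room for a possible core dump
cd /new/larger/dir || exit 1
exec /opt/your/process "$@"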

HTH,
Jeff
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!