System Administration

ulimit and coredump on 11.23

donna hofmeister
Trusted Contributor

ulimit and coredump on 11.23

I have a vPar running HP-UX 11.23.

Users want to limit their core files to no more than 8 GB.

If I say: ulimit -c 16777216

I get "sh: ulimit: The specified number is not valid for this command". Further experimenting shows that 8 million-ish blocks (which is 4Gb-ish) is the largest number I can give.

Having combed through the ITRC, am I right in thinking there is no kernel tunable for core dump size? Am I also right in thinking that if I say 'unlimited', the core file can be as large as the filesystem allows (constrained by available space, of course)?
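
For reference, assuming ulimit -c counts 512-byte blocks (the usual shell convention), the arithmetic is:

16777216 blocks * 512 bytes/block = 8,589,934,592 bytes = 8 GB
8388608 blocks * 512 bytes/block = 4,294,967,296 bytes = 4 GB

so the roughly 8-million-block ceiling I keep hitting corresponds to a 4 GB core file.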
3 REPLIES
Steven E. Protter
Exalted Contributor

Re: ulimit and coredump on 11.23

Shalom,

The number is too large.

Try unlimited if you want really large numbers.

ulimit -c unlimited
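
With no new value, ulimit -c simply reports the current soft limit, so you can confirm it took effect in the same shell:

ulimit -c          # should now print "unlimited"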

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
donna hofmeister
Trusted Contributor

Re: ulimit and coredump on 11.23

Yes, I know 16 million blocks is too large :-)

Do you know what controls it?
Dennis Handly
Acclaimed Contributor

Re: ulimit and coredump on 11.23

There are really only two useful values for core file sizes: 0 or unlimited. If you pick a number in between and a dump hits the limit, the truncated core isn't likely to be useful at all.

But if you have a mix of applications that abort, and most of them are not huge, you can use other values for ulimit -c.
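
As a rough sketch of that approach (again assuming 512-byte blocks, and /etc/profile only as an example location), you could cap cores at about 1 GB for everything started from a login shell:

# 2097152 blocks * 512 bytes/block = 1 GB ceiling on core files
ulimit -c 2097152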

>I know 16 million blocks is too large

This is 8 GB.