
Problem with mmap() function.

 
SOLVED
Aru
Occasional Contributor

Problem with mmap() function.

Sometimes the mmap() system call returns an error, and when I checked the system error message I found "Not enough space". I checked my RAM, swap space, and disk space; there is enough of each, but the call still fails. It happens only sometimes. Thanks in advance.
8 REPLIES
James R. Ferguson
Acclaimed Contributor

Re: Problem with mmap() function.

Hi Aru:

Based on the description, I presume that the errno is ENOMEM. If so, there is probably not enough space in the process address space to accommodate mapping the file.

Regards!

...JRF...
spex
Honored Contributor

Re: Problem with mmap() function.

Aru,

See "How much memory can a process use?":

http://www.faqs.org/faqs/hp/hpux-faq/section-142.html

maxdsiz may have to be increased, which means recompiling your kernel.

PCS
Aru
Occasional Contributor

Re: Problem with mmap() function.

spex,
I tried increasing maxssiz and maxtsiz to their maximum values, but it still fails with the same error. The error occurs only sometimes: for the same file, mmap() will succeed two or three times and then fail the next three or four times.
spex
Honored Contributor

Re: Problem with mmap() function.

Aru,

Did you increase maxdsiz? Did you recompile the kernel and then reboot?

PCS
Don Morris_1
Honored Contributor

Re: Problem with mmap() function.

You really need to give more data here. What are the arguments to mmap()? [Most importantly, is this MAP_SHARED or MAP_PRIVATE? MAP_FILE or MAP_ANONYMOUS?]

If this is MAP_PRIVATE, do _not_ raise maxssiz -- you're hurting yourself if you do. All private heap, mmap, and stack allocations come from the same area of private address space. Increasing maxssiz means you've reserved more of that address space for the stack -- and you therefore have less for heap or mmaps.
A. Clay Stephenson
Acclaimed Contributor

Re: Problem with mmap() function.

mmap() is limited by shmmax, so that is the value that needs to be increased. As mentioned, you need to decrease maxssiz (although that is not your problem with mmap), because dynamic memory allocation (malloc, calloc, sbrk, et al.) allocates memory from the same quadrant that is used for the stack. Even if the run-time stack never approaches maxssiz, the data segment is reduced by maxssiz. If this is a 32-bit application (and I strongly suspect that it is), then there is almost never a need for maxssiz to exceed 64 MB --- and that is very generous. Even in a 64-bit application, 64 MB is plenty of stack except for very poorly written code.
If it ain't broke, I can fix that.
Mike Stroyan
Honored Contributor
Solution

Re: Problem with mmap() function.

You need to read the section on "PA-RISC Architecture" at the bottom of the mmap manual page at
http://www.docs.hp.com./en/B2355-60103/mmap.2.html

The way that the system works does not allow multiple mappings of one file at different addresses. That means that an mmap call using MAP_SHARED can get ENOMEM because other processes have mapped that file and the initial mapping requires your process to put a new mmap mapping in an address range that your process is already using. Or your process may be unable to map a new range in a file because it would require addresses that some other shared mapping is already using.

If you are on an Itanium platform with the HP-UX 11iV2 release, then you can use the "chatr +as mpas" setting to avoid these limitations. The mpas feature is described in detail at
h20338.www2.hp.com/hpux11i/downloads/aas_white_paper.pdf
Aru
Occasional Contributor

Re: Problem with mmap() function.

Hi all,
Thank you for your suggestions. In Mike's link, I found that, process using memory map using mmap() function should close it with munmap() function without fail before process/thread goes off.It will not allow other process/thread to open new memory map for same file. My Application contains many threads, one thread doesn't close memory map. When I fixed it, I found problem never occurred any more. Thanks a lot.