
MMAP failures

 
SOLVED
Aru
Occasional Contributor

MMAP failures

I am getting mmap failures in my application. It is a small multithreaded application implemented using pthreads. Each thread maps a small file (700 bytes), reads the data, and unmaps it. I am using the mmap system call for mapping, with PROT_READ | PROT_WRITE for the prot parameter and MAP_SHARED for the flags parameter, and the munmap system call for unmapping. I am using the aCC compiler, version HP ANSI C++ B3910B A.03.55, on HP-UX 11i v1 on PA-RISC. The machine has enough memory (2 GB RAM), but I am still getting an ENOMEM error while mapping.
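A minimal sketch of the pattern described (the file name "datafile" and the thread count are placeholders, not the actual application):

/* Sketch of the per-thread map/read/unmap pattern described above. */
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

#define NTHREADS 2
#define DATAFILE "datafile"   /* placeholder name */

static void *worker(void *arg)
{
    int fd = open((const char *)arg, O_RDWR);
    if (fd < 0) { perror("open"); return NULL; }

    struct stat sb;
    if (fstat(fd, &sb) == 0) {
        void *p = mmap(NULL, sb.st_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            fprintf(stderr, "mmap: %s\n", strerror(errno)); /* ENOMEM shows up here */
        } else {
            /* ... read the ~700 bytes through p ... */
            munmap(p, sb.st_size);
        }
    }
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    int i;
    for (i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)DATAFILE);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}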
6 REPLIES
Peter Godron
Honored Contributor

Re: MMAP failures

Aru,
is this a repeat of your previous posting?
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1037936

Same application, or has something changed?
Aru
Occasional Contributor

Re: MMAP failures

Peter,
It's a different application. I tried tuning kernel parameters as well: I reduced the maxdsiz parameter and increased the maxssiz & maxtsiz parameters, but I still get the error during mapping. I am attaching the code to this thread.
Don Morris_1
Honored Contributor
Solution

Re: MMAP failures

This has nothing to do with maxssiz - you should leave it alone (increasing it decreases space for other private objects).

You're using MAP_SHARED -- so these go in the Shared quadrant(s).

First question -- why? If you're only reading data, you don't need your changes to be preserved to the file itself (which is the main reason you'd need MAP_SHARED rather than MAP_PRIVATE), and if your model is only MT via pthreads, all the threads can see the mapped segment anyway. [Which also begs the question... if this is the same file, why not just mmap() it *once* in the first place, MAP_PRIVATE, and let all the child threads read using that pointer? If these are multiple different files, then multiple mmaps make sense... you just aren't clear above.] If only one thread needs to keep track of a mapping at a time (i.e. a thread maps file X, reads, unmaps X), then MAP_PRIVATE is more than sufficient.
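Something like this is what I mean by mapping it once and letting the threads share the pointer (rough sketch; the file name and thread count are placeholders):

/* Map the file once, MAP_PRIVATE, and hand the pointer to every thread. */
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct shared_map { const char *base; size_t len; };

static void *reader(void *arg)
{
    struct shared_map *m = (struct shared_map *)arg;
    /* every thread can read m->base[0 .. m->len-1] directly */
    return NULL;
}

int main(void)
{
    int fd = open("datafile", O_RDONLY);            /* placeholder name */
    struct stat sb;
    if (fd < 0 || fstat(fd, &sb) < 0) { perror("open/fstat"); return 1; }

    void *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);                      /* the mapping stays valid after close */

    struct shared_map m = { (const char *)p, (size_t)sb.st_size };
    pthread_t t[4];
    int i;
    for (i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, reader, &m);
    for (i = 0; i < 4; i++)
        pthread_join(t[i], NULL);

    munmap(p, sb.st_size);
    return 0;
}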

Setting all that aside -- if you do in fact need MAP_SHARED, then you're likely running into virtual address space exhaustion in the shared quadrants. You're on PA, so with the defaults you have 1.75GB of shared address space shared among every process on the system (64-bit processes share this plus additional space; 32-bit processes are limited to only this). Each mapping takes at least one system page (sysconf(_SC_PAGE_SIZE)) of that space, so with the stock 4KB pages that gives 458,752 maximum possible mmaps extant at any given time.
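To check that arithmetic on your own box (the page size may differ from 4KB):

/* How many page-sized mappings fit in 1.75GB of shared quadrant space? */
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGE_SIZE);
    long long shared = 1792LL * 1024 * 1024;       /* 1.75GB */
    printf("page size %ld bytes -> at most %lld mappings\n",
           pagesz, shared / pagesz);               /* 458752 with 4KB pages */
    return 0;
}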

However, shared libraries and other shared objects will be sitting in that space. (Search the forums for shminfo -- there's a leaked URL). If you must stay with MAP_SHARED and can't handle exhaustion... [and might I add, if you want to do anything else with your system while this runs... because when you exhaust the Global shared virtual address space, you're going to fail on every other app trying to make an object there... which will likely be an application or command you care about], I'd look into using Memory Windows to isolate your process into a separate shared address space (at least for Q3).

Again -- I'm unclear on why you need MAP_SHARED in the first place and think you'd be better off using MAP_PRIVATE (and likely some linker flags... you should link with both text and data in the first quadrant (I believe that's EXEC_MAGIC), and you may need to go with Q3 private to give yourself enough private address space).

Even better -- go 64-bit and get 4TB (minus maxssiz_64bit) to work with. That's your best bet.

Alternately, your algorithm could handle resource exhaustion cleanly and have the threads pause and then retry (to give time for other threads to release address space) or fail cleanly. In any event, I highly recommend you rethink exhausting the global shared address space.
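A rough sketch of the pause-and-retry idea (the retry count and sleep interval are arbitrary choices):

/* Back off briefly when mmap() returns ENOMEM so other threads
   can release address space; otherwise fail cleanly. */
#include <sys/mman.h>
#include <unistd.h>
#include <errno.h>
#include <stddef.h>

static void *map_with_retry(size_t len, int prot, int flags, int fd)
{
    int attempt;
    for (attempt = 0; attempt < 10; attempt++) {
        void *p = mmap(NULL, len, prot, flags, fd, 0);
        if (p != MAP_FAILED)
            return p;
        if (errno != ENOMEM)
            break;                 /* some other failure -- give up */
        sleep(1);                  /* let other threads unmap first */
    }
    return MAP_FAILED;             /* caller must handle this cleanly */
}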
Don Morris_1
Honored Contributor

Re: MMAP failures

Sheesh... and for some reason I completely missed you included the code.... but this has to be a trivial example (Because you're not even actually reading the file, just mmap / munmap racing).

If the filename stuff is accurate though -- then you are mmap'ing the same file... and again, I have to ask "Why?" Are you just trying to benchmark mmap/munmap or something? All of your threads can see a mmap in the process address space anyway (that's why you're using threads, not multiple processes!)...
Aru
Occasional Contributor

Re: MMAP failures

Morris, thank you for the nice explanation of mmap system call internals. The sample code I attached is not the actual application. The actual application has a configurable number of processes, and each process has a configurable number of threads. A large amount of data lies in multiple files. It's a client-server application: the server identifies the data file based on the client request, and whether the mapping is read-only or read-write also depends on the client request. To find the root cause, I wrote the above sample program, where a pool of threads maps and unmaps a file. When I ran this sample code with 2 threads, I got the mmap failure.
Don Morris_1
Honored Contributor

Re: MMAP failures

Ok... stating this as happening with only 2 threads made me realize your problem is more fundamental (sorry -- it isn't always easy to remember 11.11).

On HP-UX current releases, mmap() of the same file to the same address in the same process (shared, private or otherwise) will *always* fail with ENOMEM. [Except for MAP_IO, which supports 'nested' mmap]. What you want to make this model work is MPAS [11.23 and higher, IPF only... PA doesn't do this because the hardware won't support it] where each mapping request will actually generate a new unique mapping. There's no way to do things in this fashion on 11.11.

I would think you'll be OK if you keep a central table indexed by file descriptor so you can check whether a file is already mapped, along with a hold count [and probably a per-fd lock if you want to synchronize...], such that the unmap happens only when the hold count drops to 0 and, conversely, the first holder does the mapping. You have to have someplace to do the fd --> mapping address translation in any event.
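A rough sketch of that bookkeeping (fixed table size, a single table lock instead of per-fd locks, and the names are illustrative only):

/* fd -> mapping table: first holder maps, last holder out unmaps. */
#include <sys/mman.h>
#include <pthread.h>
#include <stddef.h>

#define MAX_MAPS 256

struct map_entry {
    int     fd;
    void   *addr;
    size_t  len;
    int     holds;      /* 0 => slot is free */
};

static struct map_entry table[MAX_MAPS];
static pthread_mutex_t  table_lock = PTHREAD_MUTEX_INITIALIZER;

/* First holder creates the mapping; later holders just bump the count. */
void *map_acquire(int fd, size_t len)
{
    int i, free_slot = -1;
    void *addr = MAP_FAILED;

    pthread_mutex_lock(&table_lock);
    for (i = 0; i < MAX_MAPS; i++) {
        if (table[i].holds > 0 && table[i].fd == fd) {
            table[i].holds++;
            addr = table[i].addr;
            pthread_mutex_unlock(&table_lock);
            return addr;
        }
        if (table[i].holds == 0 && free_slot == -1)
            free_slot = i;
    }
    if (free_slot != -1) {
        addr = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
        if (addr != MAP_FAILED) {
            table[free_slot].fd    = fd;
            table[free_slot].addr  = addr;
            table[free_slot].len   = len;
            table[free_slot].holds = 1;
        }
    }
    pthread_mutex_unlock(&table_lock);
    return addr;
}

/* Last holder out does the munmap(). */
void map_release(int fd)
{
    int i;
    pthread_mutex_lock(&table_lock);
    for (i = 0; i < MAX_MAPS; i++) {
        if (table[i].holds > 0 && table[i].fd == fd) {
            if (--table[i].holds == 0)
                munmap(table[i].addr, table[i].len);
            break;
        }
    }
    pthread_mutex_unlock(&table_lock);
}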