Operating System - HP-UX

Is there a documented limit on shm_nattach?

 
Gordon_10
Occasional Contributor

Is there a documented limit on shm_nattach?

I have two shared memory segments that are owned by the same user and have -rw------- permissions. One has 4 attachments and the other 6. I can attach a new process to the segment that has only 4 attachments, but I cannot attach to the one with 6. Is there a limit to the number of attachments, and is it documented somewhere? I have searched but cannot find a limit. Thanks in advance.
Strike while the iron is hot.
7 REPLIES 7
A. Clay Stephenson
Acclaimed Contributor

Re: Is there a documented limit on shm_nattach?

Generally the limit is shmseg, the per-process limit on the number of shared memory segments. It would really help to know what errno is being set and which system call is actually failing -- shmget() or shmat().

Another "gotcha" is trying to attach a shmseg that was originally created in 64-bit land to a 32-bit process. This can be made to work but you have to play by a very strict set of rules.
If it ain't broke, I can fix that.
Don Morris_1
Honored Contributor

Re: Is there a documented limit on shm_nattach?

There is a limit -- but it's kernel internal and not documented (which lets us raise it as needed). I can say that you aren't hitting it at 6... (we'd have to be in the 64k range before there was even a chance).

More likely there's another reason you can't attach. At the least, the error you get back from shmat() would help; posting the pertinent sections of the code would be better (in case there's a typo in how you're getting the id, or something similar).
Don Morris_1
Honored Contributor

Re: Is there a documented limit on shm_nattach?

shmseg isn't going to affect this -- it controls the number of segments per process -- not the number of processes per segment (which is what shm_nattach represents). You could have one segment attached to every single process on the system and be way above shmseg.
Gordon_10
Occasional Contributor

Re: Is there a documented limit on shm_nattach?

The shmget() call fails with errno 2.
Strike while the iron is hot.
Gordon_10
Occasional Contributor

Re: Is there a documented limit on shm_nattach?

Using the call:

shmid = shmget(key, SHMSZ, 0666)

Output of my test program follows:
------------------------------------
Current status for shmid 27678:
PID of Creator = 5890
PID 5890 is running
Command:bptm
Owner = bob
Key = 0
Size = 1048576
Current Attaches = 4
Last Change Time = Tue May 4 15:37:11 2004
Permissions: -rw------- (0x8180)
First 16 bytes of shared memory ID=35
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
Current status for shmid 30731:
PID of Creator = 4371
PID 4371 is running
Command:shmtest
Owner = bob
Key = 0x207c16e9
Size = 769470464
Current Attaches = 6
Last Change Time = Sun May 2 08:57:37 2004
Permissions: -rw------- (0x8180)
shmtest(0x207c16e9): errno=2 for shmget(), No such file or directory
----------------------------------------
As you can see, I (bob) own both segments, but the attach to the second one fails.
Strike while the iron is hot.
A. Clay Stephenson
Acclaimed Contributor

Re: Is there a documented limit on shm_nattach?

Well, this test strongly suggests that the key you are supplying no longer exists. Are you using ftok() to generate the key, or is this a hard-coded constant?

One thing that I note is that you are not using a zero size (unless your SHMSZ constant is zero) to get an existing shm segment.
If it ain't broke, I can fix that.