Operating System - HP-UX

SOLVED
dictum9
Super Advisor

Per process memory limit?


I need to find out the per-process memory limits on two different systems.

One is an A500 running 11.00 and the other is an rx2620 running 11.23.

Also, how do I trace a process's memory usage?

15 REPLIES
RobinKing
Valued Contributor
Solution

Re: Per process memory limit?

James R. Ferguson
Acclaimed Contributor

Re: Per process memory limit?

Hi:

Process memory limit of what? Do you mean the data size, stack size, or shared memory limit? Is it a 32-bit or 64-bit process?

If so, examine the appropriate kernel parameters by comparing the output of 'kmtune' (on 11.0) and 'kctune' on 11.23. See the manpages.

Regards!

...JRF...
dictum9
Super Advisor

Re: Per process memory limit?

I got the output of kctune and kmtune, but which kernel variables do I look at?

Sandman!
Honored Contributor

Re: Per process memory limit?

The following kernel tunables would help in determining that:

maxdsiz - size of the process data segment
maxssiz - size of the process stack segment
maxtsiz - size of the process text segment
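
For a programmatic view of the first two, here is a minimal C sketch (illustrative, not from this thread) that asks the kernel for the process's own data and stack limits via getrlimit(); on HP-UX these are bounded by maxdsiz/maxssiz (or their _64bit variants):

#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int which)
{
    struct rlimit rl;

    if (getrlimit(which, &rl) == 0)
        printf("%-12s soft=%lu hard=%lu\n", name,
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
    else
        perror(name);
}

int main(void)
{
    show("RLIMIT_DATA", RLIMIT_DATA);   /* heap ceiling; capped by maxdsiz */
    show("RLIMIT_STACK", RLIMIT_STACK); /* stack ceiling; capped by maxssiz */
    return 0;
}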
James R. Ferguson
Acclaimed Contributor

Re: Per process memory limit?

Hi:

You should primarily look at:

. maxdsiz & maxdsiz_64bit -> for the maximum data (heap) size

. maxssiz, maxssiz_64bit -> for the maximum stack size

. shmseg -> maximum number of shared memory segments per process

Calculating the shared memory components of a process leads immediately to an accounting dilemma --- which process or which processes get "charged". Examine the manpages for 'shmseg' to better understand what the number of segments means to the system and what controls the size of any segment.

If you really want to begin to understand memory at a process-level you need to look at :

http://docs.hp.com/en/1218/mem_mgt.html

http://docs.hp.com/en/5965-4642/ch01s03.html

http://docs.hp.com/en/B2355-60105/chatr_pa.1.html

http://docs.hp.com/en/B2355-60105/chatr_ia.1.html

Regards!

...JRF...
dictum9
Super Advisor

Re: Per process memory limit?


I couldn't get the bottom 3 links to work but the first one is real good stuff, thanks.
dictum9
Super Advisor

Re: Per process memory limit?

I had a ProC program crash again today. The memory usage got as high as 18892 in 1K blocks. Then it crashed. After restart, it's growing again. What are its normal parameters? Is 18892 definitely out of bounds?

Is there a kernel variable I can tune, or is this strictly a code issue? A memory leak has been suspected.


Mon Jul 30 10:47:04 EDT 2007

18892 14395 /apps/qms/bin/qms108a /bin/ON1108 /apps/qms/fims/upload /data/hnstest
1860 14016 /apps/qms/bin/qms108wrc /bin/ON1108wrc /data/hnswrc


Mon Jul 30 10:47:34 EDT 2007

1860 14016 /apps/qms/bin/qms108wrc /bin/ON1108wrc /data/hnswrc

James R. Ferguson
Acclaimed Contributor

Re: Per process memory limit?

Hi:

See my last response in your ProC thread. You can view your kernel settings with 'kctune' on 11.23 and 'kmtune' on 11.0. See the manpages for the various options.

Regards!

...JRF...
dictum9
Super Advisor

Re: Per process memory limit?

I got the settings, but not sure if they are sufficient or not:



Maximum data (heap) size

maxdsiz 1073741824 1073741824 Immed
maxdsiz_64bit 2147483648 2147483648 Immed

Maximum stack size

maxssiz 134217728 134217728 Immed
maxssiz_64bit 1073741824 1073741824 Immed

Process text segment size

maxtsiz 100663296 Default Immed
maxtsiz_64bit 1073741824 Default Immed

Maximum number of shared memory segments per process

shmseg 120 120 Immed
Bill Hassell
Honored Contributor

Re: Per process memory limit?

> I got the settings, but not sure if they are sufficient or not:

The question is hard to answer because it depends on the requirements of the program. Unfortunately, the majority of programmers never handle memory allocation requests properly: the programs simply ask for RAM and start using it without the slightest check for success. So the program uses the non-existent memory and HP-UX kills the program.
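
For illustration, a minimal C sketch (generic, not taken from the ProC program) of the check being described here, with the allocation verified before the memory is touched:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t nbytes = 64 * 1024 * 1024;   /* example request: 64 MB */
    char *buf = malloc(nbytes);

    if (buf == NULL) {
        int saved = errno;  /* capture errno before anything can overwrite it */
        fprintf(stderr, "malloc(%lu) failed: errno=%d\n",
                (unsigned long)nbytes, saved);
        return 1;
    }
    memset(buf, 0, nbytes);  /* touch the memory only after the check succeeds */
    free(buf);
    return 0;
}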

> 18892 in 1K blocks

18 megs? That is nothing in terms of local data area. Your kernel limit is 1000 megs for 32-bit programs and 2000 megs for 64-bit programs. But these are nothing more than arbitrary fences. Run SAM and change the limits (maxdsiz and maxdsiz_64bit) to 2000 megs and 8000 megs, then try the program again.

I am ASSUMING that the environment ProC runs in has ulimit -m set to unlimited. I am also assuming that you are NOT seeing error messages on the console or in syslog that say you are out of swap space. You can run processes that use a lot more memory than you have RAM due to virtual memory (the swap area). Naturally, processes that are too big to fit will constantly swap in and out (but not crash) creating enormous performance delays.


Bill Hassell, sysadmin
skt_skt
Honored Contributor

Re: Per process memory limit?

kmtune | grep -i shmmax

shmmax sets the maximum size of a single shared memory segment, and therefore the largest chunk of shared memory a single process can grab in one allocation.

For example, on a system with 8GB of physical memory, setting shmmax to 6GB is not recommended: a single process could then take that much memory, leaving only 2GB for all the other processes. It also depends on the requirements of the app/db/etc.
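
As a sketch of that limit in action (segment size passed on the command line; a value above shmmax should make shmget() fail with EINVAL under standard System V semantics):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char **argv)
{
    /* Request a private segment of the given size (default 1 MB). */
    size_t nbytes = (argc > 1) ? (size_t)strtoul(argv[1], NULL, 10)
                               : 1024 * 1024;
    int id = shmget(IPC_PRIVATE, nbytes, IPC_CREAT | 0600);

    if (id == -1) {
        int saved = errno;  /* EINVAL when nbytes exceeds shmmax */
        fprintf(stderr, "shmget(%lu bytes) failed: errno=%d\n",
                (unsigned long)nbytes, saved);
        return 1;
    }
    printf("segment id %d created\n", id);
    shmctl(id, IPC_RMID, NULL);  /* remove the test segment */
    return 0;
}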
dictum9
Super Advisor

Re: Per process memory limit?

I ran that UNIX95 command in a script writing to a log file.

The program crashed twice again and both times the memory had increased to 18892K just prior to crashing.

Which kernel variable would you recommend tuning based on this?
A. Clay Stephenson
Acclaimed Contributor

Re: Per process memory limit?

This seems to be related to your diropen() problem, WHICH IS NOT A CRASH and should not be referred to as a CRASH. You only confuse the issue when you use that word. The correct terminology is that diropen() returned a NULL pointer and errno was set to xxx. Your memory usage so far has been trivial, and I have yet to see errno set to ENOMEM.

This memory wild goose chase is probably my fault, because I simply mentioned that diropen() could return a NULL result for many reasons, and that one of the possible reasons was that malloc() could not allocate additional memory --- that is certainly not the only reason. You have been told to capture errno immediately, and you fail to do that. Bear in mind that something as seemingly trivial as trying to output the value of errno could cause a system call to fail and give you the wrong results. I don't even like to use the function strerror() because I would much rather have the integer value.

In the example I supplied, I showed the use of an assign_errno() macro. You will note that as soon as diropen() returned a NULL pointer, I used this macro to copy errno to a local variable so that the value would not be altered by subsequent calls. This is the approach you need to take.
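
The pattern looks roughly like this, as a sketch (assign_errno() is written out here as a plain macro, the standard opendir() stands in for the thread's diropen(), and the path is illustrative):

#include <dirent.h>
#include <errno.h>
#include <stdio.h>

/* Copy errno into a local variable immediately, before any other
 * library call has a chance to change it. */
#define assign_errno(var) ((var) = errno)

int main(void)
{
    int saved_errno = 0;
    DIR *dp = opendir("/apps/qms/fims/upload");

    if (dp == NULL) {
        assign_errno(saved_errno);  /* capture first ... */
        fprintf(stderr, "opendir failed: errno=%d\n",
                saved_errno);       /* ... then report the saved copy */
        return 1;
    }
    closedir(dp);
    return 0;
}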
If it ain't broke, I can fix that.
skt_skt
Honored Contributor

Re: Per process memory limit?

You can observe a memory leak through the kmeminfo tool. Collect an output every day and see which value is increasing.

On HP-UX 11.11 I had observed the kernel arena ALLOCB_MBLK_SM taking 15GB of memory, and it was increasing every day.
dictum9
Super Advisor

Re: Per process memory limit?

The issue is finally resolved.

The issue was that a file was being opened and not closed.

Thanks for all input.
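
For anyone who lands on this thread later, a minimal C sketch of that failure mode (assuming, as the resolution suggests, that unclosed files exhausted the per-process file descriptor limit): each iteration leaks one descriptor, and once the limit (the maxfiles tunable) is reached, fopen() returns NULL with errno set to EMFILE, which is one of the ways diropen()/opendir() can fail without the process "crashing":

#include <errno.h>
#include <stdio.h>

int main(void)
{
    unsigned long n = 0;

    for (;;) {
        FILE *fp = fopen("/etc/hosts", "r");  /* any readable file */
        if (fp == NULL) {
            int saved = errno;
            fprintf(stderr, "fopen failed after %lu opens: errno=%d%s\n",
                    n, saved, saved == EMFILE ? " (EMFILE)" : "");
            return 1;
        }
        n++;
        /* missing fclose(fp) -- this is the leak */
    }
}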