Two app servers, one with more "system" memory utilization

Adam Garsha
Valued Contributor


I've got a couple of HP-UX 11.23 app servers, and one of them is using gigabytes more "system" memory than the other. The two nodes run the same app, and running miscellaneous counts against ps shows they look essentially identical from a user-space process perspective.

Is there any way to tell why (or what) one of the blades is using so much more "system" memory than the other?

One theory I have is that one of the nodes is the Veritas CFS "master" and that this is the cause, but I can't think of a way to prove or verify it.

Any ideas?
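A quick way to back up the "same from a user-space perspective" check is to sum the resident set sizes that ps reports on each node. A minimal sketch, assuming a POSIX-style ps; run it on each host, e.g. via ssh its-ps1:

```shell
# Sum the resident set size (RSS, in KB) of every process on this node.
# If the totals match across nodes, the extra memory is not in user space.
ps -e -o rss= | awk '{ sum += $1 } END { printf "%d KB total RSS\n", sum }'
```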
9 REPLIES
SUDHAKAR_18
Trusted Contributor

Re: Two app servers, one with more "system" memory utilization

Can you please compare the kernel parameters on both servers?

Use # kctune -S to list the kernel parameters.
Adam Garsha
Valued Contributor

Re: Two app servers, one with more "system" memory utilization

Looks like the only difference is "semmni". Could that be a contributor, or is it a red herring? Looking at the number of active semaphores with ipcs shows the same count across all 4 nodes.

# for i in 1 3 5 7 ; do
> ssh its-ps${i} "hostname; kctune -S; echo"
> done
its-ps1
Tunable          Value     Expression  Changes
max_thread_proc  1200      1200        Immed
maxfiles         4096      4096
maxfiles_lim     4096      4096        Immed
maxssiz          67108864  67108864    Immed
maxuprc          3000      3000        Immed
msgmap           2048      2048
msgmax           131072    131072      Immed
msgmnb           131072    131072      Immed
msgmni           1024      1024
msgseg           32767     32767
msgtql           2050      2050
nproc            8196      8196        Immed
nstrpty          60        60
semmni           2048      2048
semmns           8192      8192
semmnu           8192      8192
swchunk          2560      2560

its-ps3
Tunable          Value     Expression  Changes
max_thread_proc  1200      1200        Immed
maxfiles         4096      4096
maxfiles_lim     4096      4096        Immed
maxssiz          67108864  67108864    Immed
maxuprc          3000      3000        Immed
msgmap           2048      2048
msgmax           131072    131072      Immed
msgmnb           131072    131072      Immed
msgmni           1024      1024
msgseg           32767     32767
msgtql           2050      2050
nproc            8196      8196        Immed
nstrpty          60        60
semmns           8192      8192
semmnu           8192      8192
swchunk          2560      2560

its-ps5
Tunable          Value     Expression  Changes
max_thread_proc  1200      1200        Immed
maxfiles         4096      4096
maxfiles_lim     4096      4096        Immed
maxssiz          67108864  67108864    Immed
maxuprc          3000      3000        Immed
msgmap           2048      2048
msgmax           131072    131072      Immed
msgmnb           131072    131072      Immed
msgmni           1024      1024
msgseg           32767     32767
msgtql           2050      2050
nproc            8196      8196        Immed
nstrpty          60        60
semmns           8192      8192
semmnu           8192      8192
swchunk          2560      2560

its-ps7
Tunable          Value     Expression  Changes
max_thread_proc  1200      1200        Immed
maxfiles         4096      4096
maxfiles_lim     4096      4096        Immed
maxssiz          67108864  67108864    Immed
maxuprc          3000      3000        Immed
msgmap           2048      2048
msgmax           131072    131072      Immed
msgmnb           131072    131072      Immed
msgmni           1024      1024
msgseg           32767     32767
msgtql           2050      2050
nproc            8196      8196        Immed
nstrpty          60        60
semmns           8192      8192
semmnu           8192      8192
swchunk          2560      2560
Adam Garsha
Valued Contributor

Re: Two app servers, one with more "system" memory utilization

Actually, 2048 seems to be the semmni default. So they are the same everywhere.

Re: Two app servers, one with more "system" memory utilization

What do you see for "swapinfo -tam" for each?
Adam Garsha
Valued Contributor

Re: Two app servers, one with more "system" memory utilization

for i in 1 3 5 7 ; do ssh its-ps${i} "hostname; swapinfo -tam"; done
its-ps1
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev       40960    4160   36800   10%       0       -    1  /dev/vg00/lvol2
reserve       -   36800  -36800
memory    49122   13000   36122   26%
total     90082   53960   36122   60%       -       0    -
its-ps3
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev       40960     995   39965    2%       0       -    1  /dev/vg00/lvol2
reserve       -   32898  -32898
memory    49122    9797   39325   20%
total     90082   43690   46392   49%       -       0    -
its-ps5
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev       40960      84   40876    0%       0       -    1  /dev/vg00/lvol2
reserve       -   40876  -40876
memory    49122    9483   39639   19%
total     90082   50443   39639   56%       -       0    -
its-ps7
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev       40960       0   40960    0%       0       -    1  /dev/vg00/lvol2
reserve       -   40960  -40960
memory    49122    9141   39981   19%
total     90082   50101   39981   56%       -       0    -
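Note the "memory" row: on its-ps1 its USED column is about 13000 MB against roughly 9100-9800 MB on the peers, which lines up with the extra system memory in question. To pull that row out of saved output for comparison, a small sketch (swapinfo.txt is a hypothetical file holding one node's swapinfo -tam output):

```shell
# Print the "memory" row's USED column (MB) from saved swapinfo output.
# swapinfo.txt is a hypothetical capture of one node's `swapinfo -tam`.
awk '$1 == "memory" { print $3 " MB used" }' swapinfo.txt
```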
Adam Garsha
Valued Contributor

Re: Two app servers, one with more "system" memory utilization

Here are the snips from glance that triggered my question; look at its-ps1 vs. the others with regard to system memory:

Its-ps1: (CFS Master)

Total VM : 85.0gb Sys Mem : 8.8gb User Mem: 35.2gb Phys Mem: 48.0gb
Active VM: 84.6gb Buf Cache: 3.6gb Free Mem: 453mb

Its-ps3:

Total VM : 87.7gb Sys Mem : 5.4gb User Mem: 39.4gb Phys Mem: 48.0gb
Active VM: 87.1gb Buf Cache: 2.5gb Free Mem: 655mb

Its-ps5:

Total VM : 87.1gb Sys Mem : 6.1gb User Mem: 38.3gb Phys Mem: 48.0gb
Active VM: 86.1gb Buf Cache: 2.6gb Free Mem: 971mb

Its-ps7:

Total VM : 87.8gb Sys Mem : 5.3gb User Mem: 39.2gb Phys Mem: 48.0gb
Active VM: 87.2gb Buf Cache: 2.5gb Free Mem: 977mb
Bill Hassell
Honored Contributor

Re: Two app servers, one with more "system" memory utilization

You need to compare all the settings. ninode is one parameter that is very often massively oversized due to a meaningless formula. I didn't see nfile in the list either. Run kctune like this:

kctune | cut -c1-36 > /tmp/$(hostname)_kctune

on each machine. Then run diff on one compared to the machine that is 'normal'.


Bill Hassell, sysadmin
Adam Garsha
Valued Contributor

Re: Two app servers, one with more "system" memory utilization

Thanks, all the same:

# for i in 1 3 5 7 ; do ssh its-ps${i} "kctune | cut -c1-36" > /tmp/its-ps${i}_kctune; done

# wc -l /tmp/its-ps1_kctune
196 /tmp/its-ps1_kctune


# wc -l /tmp/its-ps3_kctune
196 /tmp/its-ps3_kctune

# diff /tmp/its-ps1_kctune /tmp/its-ps3_kctune

# diff /tmp/its-ps1_kctune /tmp/its-ps5_kctune

# diff /tmp/its-ps1_kctune /tmp/its-ps7_kctune

# head -5 /tmp/its-ps1_kctune
Tunable Val
NSTREVENT
NSTRPUSH
NSTRSCHED
STRCTLSZ 10
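As the head output shows, cut -c1-36 happens to clip the Value column at this kctune output width. A sketch that keeps name and value without depending on column positions (kctune_raw.txt is a hypothetical capture of one node's kctune output):

```shell
# Normalize kctune output to "name value" pairs for diffing between nodes,
# independent of how wide the columns happen to be.
awk 'NF >= 2 { print $1, $2 }' kctune_raw.txt
```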
Adam Garsha
Valued Contributor

Re: Two app servers, one with more "system" memory utilization

Also, ninode looks insignificant, correct?

# egrep -i ninode /tmp/its-ps1_kctune
ninode 48
vx_ninode