
Memory utilization...

 
SOLVED
deCG
Advisor

Memory utilization...

We have two systems configured as a cluster. Memory was upgraded from 6 GB to 10 GB on both systems.

Problem

Before the memory upgrade, the system used about 30% of memory and user processes used about 65%. After the upgrade, the system uses about 40-45% and user processes use about 45-50%. Application users complain that they have not gained any performance from the upgrade. CPU, network and disk utilization look OK.

Where do I look for troubleshooting?
Do I need to change kernel parameters?
Please advise where I should look.
7 REPLIES
Geoff Wild
Honored Contributor

Re: Memory utilization...

Check out dbc_max_pct and dbc_min_pct.

If they were not changed after adding memory, then the buffer cache will consume more of the available RAM, since the same percentages now apply to a larger total.
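
If you want to confirm what they are set to now, something like this should do it (kmtune on 11.0/11.11, kctune on 11.23; output format varies by release):

# kmtune | grep dbc_

That will list dbc_max_pct and dbc_min_pct with their current values.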

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
James R. Ferguson
Acclaimed Contributor
Solution

Re: Memory utilization...

Hi:

What did you expect to see? Did you change any kernel parameters? Why did you add memory?

At the least, if you are running with a dynamic buffer cache, you are now using more memory for it, which may or may not benefit user performance, assuming that "they" can perceive it.

Perhaps before the upgrade you were hitting out-of-memory conditions due to insufficient swap space. Were you?
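
A quick way to check how close you were running is swapinfo, for example:

# swapinfo -tm

If the "total" line was running near 100% used before the upgrade, that alone could explain the complaints.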

If this is a database server, did you tune your DBMS afterwards? Did, for instance, your DBA enlarge an Oracle SGA to use more memory for its buffers?

Regards!

...JRF...
Patrick Wallek
Honored Contributor

Re: Memory utilization...

Were you paging out? Your peak memory usage was 95%? Did it ever hit 100%?

You upgraded and are still at about 95% at peak usage?

Did you change your dbc_max_pct and/or dbc_min_pct kernel parameters? What are they set to?

If you weren't actually paging out, then I wouldn't expect users to see a whole lot of performance impact from more RAM.
deCG
Advisor

Re: Memory utilization...

Peak memory usage never reaches 100%, but it hovers around 95%.

dbc_max_pct 8 - 8
dbc_min_pct 5 - 5

These are my kernel parameters.
Do these parameters mean that at least 5% of memory is always kept free?
And how can I limit system memory usage to under 35%? Would adjusting vx_ninode or ninode make any difference?

Thanks
James R. Ferguson
Acclaimed Contributor

Re: Memory utilization...

Hi (again):

Your dynamic buffer cache settings mean that at most 8% and at least 5% of your memory will be devoted to filesystem buffers. This isn't too unreasonable.

Unless you have memory pressure as processes grow their heaps or new processes are forked, etc., your buffer cache is going to remain relatively fixed in size.

If you are not paging out (use 'vmstat' and look at the 'po' column) then you don't have memory pressure to worry about.
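
For example, something like:

# vmstat 5 5

On a healthy system the 'po' column stays at or very near zero; sustained non-zero values there mean you really are short of memory.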

By adding more memory you may have prevented insufficient-memory conditions that would have blocked process creation and/or data-stack growth in the first place. You haven't divulged what symptoms of poor performance, if any, you were seeing.

Regards!

...JRF...
Geoff Wild
Honored Contributor

Re: Memory utilization...

dbc_max_pct of 8% means you now have a maximum of about 819 MB of RAM available for buffer cache, up from about 491.5 MB.
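
(That is just 8% of the new total: 10 GB = 10240 MB and 10240 x 0.08 = 819.2 MB, versus 6144 x 0.08 = 491.5 MB before.)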

Application people need to modify their apps to make use of the additional RAM.

How much ram is free?

Use a utility like glance or memdetail.

See my post in this thread to get a copy of memdetail:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=938769&admit=-682735245+1161639942076+28353475

Output like:

# memdetail
Memory Stat        total      used     avail  %used
physical         16128.0   15334.7     793.3    95%
active virtual   14712.8    5616.1    9096.8    38%
active real      12188.5    4332.9    7855.6    36%
memory swap      12648.1    1933.0   10715.1    15%
device swap      26528.0   14096.9   12431.1    53%


Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Bill Hassell
Honored Contributor

Re: Memory utilization...

System (kernel) memory is mostly defined by kernel parameters and ninode is usually the culprit. The formula provided by HP is simply way off. Run kmtune (kctune on 11.23) to see the value for ninode. Anything more than 2000 to 4000 is too large. I've seen the formula produce ninode values of 30k to 75k, an extremely large table size.
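
For example, to see the configured value (exact syntax depends on the OS release):

# kmtune | grep ninode

or on 11.23:

# kctune ninode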

Additional parameters to check:

nproc and nfile -- run sar -v 1 to check the current values. It's OK to have 50k file handles if you are running 5000 programs, but if sar -v shows nfile=50000 while only 800 are in use, nfile is badly oversized.
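
For example:

# sar -v 5 3

The proc-sz and file-sz columns show usage as in-use/configured, so you can see at a glance how much of each table is really needed.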

Also, verify that the kernel values for nbuf and bufpages are zero. Glance will report an active value, but kmtune (or kctune) must report zeros or the dbc_max_pct and dbc_min_pct values are ignored.
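
For example:

# kmtune | egrep 'nbuf|bufpages'

Both should show 0; if either is non-zero, the buffer cache is a fixed size and the dbc_max_pct/dbc_min_pct settings have no effect.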


Bill Hassell, sysadmin