Memory crunch 11iv3

 
chindi
Respected Contributor

Memory crunch 11iv3

Hi Team,

 

root #/tmp >./kmeminfo
tool: kmeminfo 8.00 - libp4 9.306 - HP CONFIDENTIAL
unix: /stand/current/vmunix 11.31 64bit IA64 on host "dcinfdb1"
core: /dev/kmem live
link: Fri Nov 07 17:08:54 IST 2014
boot: Mon Nov 17 13:07:34 2014
time: Tue Feb 10 11:40:13 2015
nbpg: 4096 bytes


----------------------------------------------------------------------
Physical memory usage summary (in page/byte/percent):

Physical memory    =  4188971   16.0g 100%
Free memory        =   200863  784.6m   5%
User processes     =        0     0.0   0%  details with -user
System             =      194  776.0k   0%
Kernel             =        0     0.0   0%  kernel text and data
Dynamic Arenas     =   775991    3.0g  19%  details with -arena
  UFS_MISC_ARENA   =   371722    1.4g   9%
  vx_global_kmcac  =    58284  227.7m   1%
  spinlock_arena   =    56498  220.7m   1%
  misc region are  =    31113  121.5m   1%
  reg_fixed_arena  =    26384  103.1m   1%
  Other arenas     =   231990  906.2m   6%  details with -arena
Super page pool    =   542713    2.1g  13%  details with -kas
Static Tables      =   629675    2.4g  15%  details with -static
  inode            =   371720    1.4g   9%
  pfdat            =   204539  799.0m   5%
  vhpt             =    32768  128.0m   1%
  text             =     9370   36.6m   0%  vmunix text section
  bss              =     6807   26.6m   0%  vmunix bss section
  Other tables     =     4470   17.5m   0%  details with -static
Buffer cache       =       92  368.0k   0%  details with -bufcache
UFC meta mrg       =      102  408.0k   0%
UFC file mrg       =    47015  183.7m   1%

 

root #/tmp >sw
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev        8192    4129    4063   50%       0       -    1  /dev/vg00/lvol2
dev       16256     409   15847    3%       0       -    1  /dev/vg_swap_space/swap1
reserve       -   13538  -13538
memory    15563    6179    9384   40%
total     40011   24255   15756   61%       -       0    -

 

Kindly let me know which kernel parameters can be tuned to bring the Super page pool and Static Tables percentages down.

It's a DB server with 6 Oracle 11g instances, each with SGA+PGA of 1 GB.

The box is an rx2660 with 4 cores / 16 GB RAM.
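As a sanity check on the numbers above, the three big kernel-side consumers already account for almost half of the box. A quick tally (figures copied straight from the kmeminfo output; the grouping is mine):

```python
# Tally of the major kernel-side consumers from the kmeminfo output above.
# Sizes in MB, converted from the "g" figures in the report.
kernel_mb = {
    "Dynamic Arenas":  3.0 * 1024,   # 19%
    "Super page pool": 2.1 * 1024,   # 13%
    "Static Tables":   2.4 * 1024,   # 15% (includes the 1.4 GB inode table)
}
total_mb = sum(kernel_mb.values())        # 7680.0 MB
pct = 100 * total_mb / (16 * 1024)        # share of the 16 GB of physical RAM
print(f"kernel consumers: {total_mb:.0f} MB ({pct:.1f}% of 16 GB)")
```

So roughly 47% of physical memory is tied up in kernel structures before any Oracle instance gets a page.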

7 REPLIES
Bill Hassell
Honored Contributor

Re: Memory crunch 11iv3

The only way to reduce the size of the kernel areas is to reduce the number of programs that are using massive amounts of memory. With primary swap usage at 50% (about 4 GB), it would seem that the system has run out of memory (by a lot!) and has pushed many pages out to the swap area. This paging has a very detrimental effect on performance. Use vmstat to see how bad the situation is:

 

# vmstat -s | grep paged
321954 pages paged in
1308 pages paged out

 In this example, a *LOT* of paging activity has taken place. The paged-in value is not important, since program starts are recorded as page-in events. But the 1308 pages paged out may be very significant if that number is constantly increasing. The numbers are cumulative since the last reboot or since the last time you ran vmstat -z.

I would zero the statistics (vmstat -z) and then monitor. If the page-out numbers keep increasing, you need more RAM, probably a lot more (another 16 GB). That would also allow the 6 Oracle instances to adjust their SGA pools for better performance.
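A minimal monitoring sketch along these lines (assumes the HP-UX vmstat -s/-z behaviour described above; the pageouts helper name is mine):

```shell
# Sketch: watch page-out growth after zeroing the counters (HP-UX 11iv3).
# pageouts is a made-up helper; it extracts the cumulative page-out count.
pageouts() {
    vmstat -s | awk '/pages paged out/ {print $1}'
}

# On the server, as root:
#   vmstat -z                          # zero the cumulative counters
#   while true; do
#       echo "$(date '+%T')  $(pageouts) pages paged out"
#       sleep 60                       # steadily climbing numbers => add RAM
#   done
```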

 



Bill Hassell, sysadmin
chindi
Respected Contributor

Re: Memory crunch 11iv3

Hi Bill,

 

Is there any particular PHKL patch for this?

Re: Memory crunch 11iv3


root #/tmp >./kmeminfo
tool: kmeminfo 8.00 - libp4 9.306 - HP CONFIDENTIAL
nbpg: 4096 bytes


Dynamic Arenas     =   775991    3.0g  19%  details with -arena
  UFS_MISC_ARENA   =   371722    1.4g   9%
  vx_global_kmcac  =    58284  227.7m   1%
  spinlock_arena   =    56498  220.7m   1%
  misc region are  =    31113  121.5m   1%
  reg_fixed_arena  =    26384  103.1m   1%
  Other arenas     =   231990  906.2m   6%  details with -arena
Super page pool    =   542713    2.1g  13%  details with -kas
Static Tables      =   629675    2.4g  15%  details with -static
  inode            =   371720    1.4g   9%
  pfdat            =   204539  799.0m   5%


 

These seem far too high for a normal system. Have you specifically tuned "ninode" to a high value? If so, for any particular reason? Do you have a lot of file systems/files hosted on UFS file systems?
The output of kctune | grep inode would be helpful.
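For completeness, a quick way to look at these on the box (a sketch; whether kcusage(1M) reports ninode consumption is my assumption, so verify against the man page):

```shell
# Sketch: inspect the inode-related tunables on HP-UX 11iv3.
#   kctune -v ninode        # verbose view: default, current, next-boot value
#   kctune vx_ninode
#   kcusage ninode          # assumption: shows usage vs. the configured limit
#
# The "kctune | grep inode" columns read as tunable / current / next-boot:
echo "ninode 1699292 1699292" |
    awk '{printf "%s: current=%s planned=%s\n", $1, $2, $3}'
```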

-santosh
chindi
Respected Contributor

Re: Memory crunch 11iv3

It was an Ignite image taken from a 256 GB RAM box and restored onto a 16 GB box.

That may be the reason.

Can I reduce it online?

 

root #/ >kctune |grep inode
ninode                 1699292  1699292
vx_ninode                    0  Default        Immed

Re: Memory crunch 11iv3

>Can I reduce it online?

>ninode 1699292 1699292

 

ninode(5) says it requires a reboot.

Re: Memory crunch 11iv3

This value is absurdly high: 1699292! You didn't reply about whether you are actually using UFS file systems. My guess is that you are not; the only recommended use for UFS is /stand.

I would recommend that you revert ninode to its default and let the system work out the best value. That should get you back at least 2 GB of memory. As Dennis mentioned, you will need to reboot the system.

Since you will have to reboot in any case, I would further recommend trying a base_pagesize setting of 8 K. That should get you back another ~0.5 GB of memory, and with a little luck performance should be a tad better as well.
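The two changes could be staged like this (a sketch only; the "Default" keyword and the 8 KB value are my reading of kctune(1M) and base_pagesize(5), so double-check both before rebooting):

```shell
# Sketch: revert ninode to its autotuned default and raise the base page size.
# Both changes are held until the next boot on 11iv3.
kctune ninode=Default       # assumption: "Default" resets to the autotuned value
kctune base_pagesize=8      # assumption: value is in KB (default 4)
shutdown -r -y 0            # reboot to apply
```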

 

It's a bit puzzling why the system kept this high ninode value; it is autotuned based on system memory, so the value is presumably what was computed on your 256 GB Ignite server. It should have been ratcheted down on the smaller system, but perhaps that intelligence is missing in Ignite. I'll check with the Ignite folks; this may be a bug.

 

-santosh.

chindi
Respected Contributor

Re: Memory crunch 11iv3

Hi Santosh,

 

We are not using UFS filesystems.