KMEM_ALLOC

ric techow
Occasional Contributor

KMEM_ALLOC

How do you identify what might be causing high memory usage from the output of kmeminfo -a? I need to know what the usage at idx 5 below represents.

Is there a method for identifying the system call associated with this?


Variable arena "KMEM_ALLOC" owns 1948932 pages (7.4gb):

idx  objsz    pages    bytes   %      nobjs
  0     24       12    48.0k   0       1512
  1     56       12    48.0k   0        756
  2    120      741     2.9m   0      22971
  3    184       12    48.0k   0        252
  4    248       12    48.0k   0        180
  5    312  1938286     7.4g  99   23259432
  6    376       22    88.0k   0        220
  7    440       46   184.0k   0        414
2 REPLIES
singh sanjeev
Trusted Contributor

Re: KMEM_ALLOC

What is the output of plain kmeminfo?

# kmeminfo

Please post that output.
Sanjeev Singh
Don Morris_1
Honored Contributor

Re: KMEM_ALLOC

KMEM_ALLOC is the arena provided by the kernel virtual memory subsystem to underlie calls to the kmem_alloc() / kmem_free() interfaces. [Said interfaces are not officially deprecated in the v3 Device Driver Reference, but should be].

As such, other than knowing that these are allocations made through kmem_alloc() for at least 249 bytes and at most 312 bytes (requests are rounded up to the 312-byte object size), little more can be gleaned from just this bit of data. There is no single or particular system call affiliated with this arena type.
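To make the rounding-up concrete, here is a small illustrative sketch (not HP-UX code, just the size-class idea) that maps a request size to the smallest bucket that fits it, using the object sizes from the kmeminfo listing above:

```python
import bisect

# Object sizes (bytes) taken from the kmeminfo listing in the question.
BUCKET_SIZES = [24, 56, 120, 184, 248, 312, 376, 440]

def bucket_for(request_bytes):
    """Return (idx, objsz) of the smallest bucket that fits the request,
    or None if the request exceeds the largest listed bucket."""
    i = bisect.bisect_left(BUCKET_SIZES, request_bytes)
    if i == len(BUCKET_SIZES):
        return None
    return i, BUCKET_SIZES[i]

# A 250-byte request lands in idx 5 (312-byte objects);
# a 248-byte request still fits in idx 4.
```

So every allocation between 249 and 312 bytes, whatever kernel path issued it, is counted against the same idx 5 row, which is why the row alone cannot identify the caller.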

If you suspect you have a memory leak in a client of this arena (which is how I read your inquiry), the first question is what release you're running; the second is the output of `kmeminfo -a KMEM_ALLOC` from several invocations, preferably with a reasonable amount of time between them. What needs to be checked:

1) The number of used pages consistently grows.

2) The growth is in the number of used objects, not just a workload that needs a large number of these shifting between CPUs (in which case the free object count on other CPUs should diminish as the used count increases).

3) The overall memory pressure of the system, if a lot of these objects are free.
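The first two checks above can be automated once you have captured the idx 5 row from successive `kmeminfo -a KMEM_ALLOC` runs. A minimal sketch, assuming you have already extracted the (pages, nobjs) pair from each snapshot by hand or with a simple script:

```python
def looks_like_leak(snapshots):
    """Given successive (used_pages, used_objects) samples for one bucket,
    report whether BOTH counters grew between every pair of samples --
    the pattern of a genuine leak rather than a shifting workload."""
    pages = [s[0] for s in snapshots]
    objs = [s[1] for s in snapshots]
    pages_grow = all(b > a for a, b in zip(pages, pages[1:]))
    objs_grow = all(b > a for a, b in zip(objs, objs[1:]))
    return pages_grow and objs_grow

# Hypothetical example: three samples of the idx 5 row taken an hour apart.
samples = [(1900000, 22800000), (1920000, 23000000), (1938286, 23259432)]
```

If pages grow but object counts do not, suspect per-CPU free-list shifting rather than a leak, and check the free counts on the other CPUs as described above.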

If it is a leak, the next step is to contact support and get the vmtrace utility. Depending on how the leak occurs (if it is regular and steady, running vmtrace on the live system is fine; otherwise it would likely have to be enabled at reboot to catch it), the vmtrace leak functionality applied to this arena produces a log, retrievable using `kmeminfo -vmtrace`, which should show the allocation paths with no corresponding free. At that point it becomes feasible to check your machine's patch level against existing patches for one that addresses your issue, or to find the appropriate owner of the issue if no fix has been delivered yet.