kmeminfo -arena output understanding

CDR Project
Occasional Visitor

kmeminfo -arena output understanding

Hi gurus. I am trying to understand kmeminfo -arena output to analyse system memory usage on an HP-UX 11.23 box, but all in vain. Can someone please guide me on how to do so.
8 REPLIES
Don Morris_1
Honored Contributor

Re: kmeminfo -arena output understanding

That's a rather open-ended question. Are you asking to be guided solely on understanding the output (i.e. is there a field or value confusing you), or on the wider scope of kernel memory analysis (which could easily be a week-long course or more)?

Assuming the former, I'll try to give at least a quick version of the basics. Hopefully you'll reply as needed for clarification.

So -- the short version of the arena design on 11.23:

An arena handle represents a grouped set of per-processor caches of kernel memory. The arena can be created to always allocate objects of the same size (a "fixed" arena since the allocation size is fixed) or to allocate objects of any size (a "variable" sized arena).

For a fixed arena, there is one cache for each processor on the system. This reduces contention if multiple processors (threads) are allocating and/or freeing memory in the same arena at the same time. There is no sharing between the per-processor caches. So one obvious thing to consider when looking at kmeminfo output: if your workload is generating more memory allocations than you expect, it may be because memory is allocated/freed on one processor... and then the workload shifts to a different processor and repeats. The first processor's cache would eventually drain if it isn't needed, but since the system doesn't know how many threads will be doing the allocating, that cache isn't shifted to the new processor at allocation time.
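That drift can be illustrated with a toy model (plain Python, purely for illustration -- the class and numbers are invented, this is not HP-UX source):

```python
# Toy model of per-processor arena caches: freed objects stay in the
# cache of the CPU that freed them, so a workload hopping between CPUs
# grows the arena even though the live object count never changes.

class FixedArena:
    def __init__(self, ncpus):
        self.free_lists = [0] * ncpus  # cached (free) objects per CPU
        self.total_objs = 0            # objects ever carved from new pages

    def alloc(self, cpu):
        if self.free_lists[cpu] > 0:
            self.free_lists[cpu] -= 1  # reuse from this CPU's cache only
        else:
            self.total_objs += 1       # no cross-CPU sharing: arena grows

    def free(self, cpu):
        self.free_lists[cpu] += 1      # goes back to this CPU's cache

arena = FixedArena(ncpus=4)
# 100 alloc/free cycles on CPU 0, then the workload shifts to CPU 1.
for _ in range(100):
    arena.alloc(0)
for _ in range(100):
    arena.free(0)
for _ in range(100):
    arena.alloc(1)

print(arena.total_objs)    # 200: CPU 1 could not reuse CPU 0's cache
print(arena.free_lists[0]) # 100: stranded until garbage collection
```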

Instead, these caches are drained by a garbage collection thread (vhand on 11.23). A cache is garbage collected at a per-page level, but organized at an object level. To be released from the cache, all objects within a page must be freed to the cache (a multi-page object trivially fulfills this condition since it is either entirely free or entirely in use). This is one of the reasons you may have many free objects in a cache but be unable to reclaim them -- there is no guarantee that the arena client will free back all the objects for a given page.
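A small sketch of that reclamation rule (hypothetical page layout, invented free/used flags) shows why "lots of free objects" does not mean "lots of reclaimable pages":

```python
# Toy illustration of page-granular garbage collection: a page can be
# reclaimed only when every object carved from it is free, so many free
# objects can still leave few reclaimable pages.

OBJS_PER_PAGE = 3  # e.g. 3 x 1032-byte objects per 4 KB page

# One flag per object, grouped by page; True = free in the cache.
pages = [
    [True, True, True],    # fully free -> reclaimable
    [True, True, False],   # one object still in use -> page is pinned
    [True, False, True],   # pinned
    [False, True, True],   # pinned
]

free_objs   = sum(flags.count(True) for flags in pages)
reclaimable = sum(all(flags) for flags in pages)

print(free_objs)    # 9 of 12 objects are free...
print(reclaimable)  # ...but only 1 of 4 pages can be released
```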

All this applies to the variable arenas as well -- with the addition that the per-processor caches are extended to cover a set of potential sizes, with a per-processor cache set for each potential size (i.e. instead of 4 caches for 4 processors, with 8 potential sizes you have 8 sets of 4 caches, for 32 total caches). The variable arenas work by rounding a requested size up to the next higher discrete cache size (so a 50 byte allocation may round to 120 bytes, but a 121 byte allocation would round to 248 bytes, etc.). The object sizes so chosen are based on the processor data cache line size for sizes less than a page, or on a multiple of pages for multi-page allocations.
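The rounding step can be sketched in a few lines. The size table here is only the handful of sizes visible in the sample variable-arena output later in this post (120, 248, 632, 20480); the real table has more entries and is derived from the cache line size and page multiples, so treat it as an assumption:

```python
import bisect

# Hypothetical size-class table, taken from the sizes that happen to
# appear in the sample output; the real table is larger.
SIZES = [120, 248, 632, 20480]

def round_up(nbytes):
    """Round a request up to the next discrete cache size."""
    i = bisect.bisect_left(SIZES, nbytes)
    if i == len(SIZES):
        raise ValueError("larger than the biggest cached size")
    return SIZES[i]

print(round_up(50))   # 120
print(round_up(121))  # 248
print(round_up(248))  # 248 (exact fit, no rounding needed)
```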

A special case: allocations above a certain size (I believe 32 KB on 11.23) are handled in this manner but never truly cached -- instead they are released directly to the system when they are freed to the arena. These are referred to as "Extra Large objects" (and the freelist may be marked as "xl" in size, depending on your kmeminfo version).

11.23 was also the first phase of ccNUMA support, so kmeminfo will report whether the freelists are directed at Interleaved Memory or at a particular locality. 11.23 does not support a mix of the two in the same arena (11.31 does).

So -- with the caveat that this is probably from a newer version of kmeminfo (10.14) on an 11.31 system (I don't have an 11.23 lying around at the moment), let's walk through some sample output:

Fixed arena "reg_fixed_arena" owns 11421 pages (44.6mb)
Arena-wide Caches ILV: 0 pages (0.0b) CF: 0 pages (0.0b)
Regular (Interleaved First) free lists account for 11421 pages (44.6mb):
Free objects represent 33.9mb (76%) of all memory of this type in this arena.
  idx objsz pages  bytes   %  nobjs  used   free  %
    0  1032 11421  44.6m 100  34263  8262  26001 76

Ok... so this is a Fixed arena (all allocations are the same size) called "reg_fixed_arena". It is currently using 11421 pages of memory, whether that memory is out in use or held in the caches.
The arena/caches will attempt to use Interleaved memory by preference.

76% of the memory for the arena is currently held in the caches. [The sum of the Free objects].

There is a single object size index (0).

Each allocation from this arena is for 1032 bytes.

There are 34263 total objects (the remaining space in the pages is due to padding to the cache line size and/or arena metadata used to manage the objects), of which 8262 are in use.
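The summary row is internally consistent, which is a useful sanity check when reading unfamiliar output. A quick check in Python, assuming the standard 4 KB base page:

```python
# Sanity-check the fixed-arena summary row:
#   idx 0, objsz 1032, pages 11421, 44.6m, nobjs 34263, used 8262, free 26001, 76%
PAGE_SIZE = 4096
objsz, pages, nobjs, used, free = 1032, 11421, 34263, 8262, 26001

assert used + free == nobjs                        # objects are used or free
assert round(100 * free / nobjs) == 76             # the trailing free %
assert round(pages * PAGE_SIZE / 2**20, 1) == 44.6 # pages -> "44.6m"

# Each 4096-byte page holds exactly 3 objects of 1032 bytes...
assert nobjs == pages * 3
# ...leaving 4096 - 3*1032 bytes per page for padding/metadata.
print(PAGE_SIZE - 3 * objsz)   # 1000
```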

If you do a `kmeminfo -V -arena "reg_fixed_arena"` you can get more details, such as the attributes of the arena (flags/special memory types requested when the arena was created) and the per-processor breakdown of objects/usage:
Fixed arena "reg_fixed_arena" owns 11421 pages (44.6mb)
Arena-wide Caches ILV: 0 pages (0.0b) CF: 0 pages (0.0b)
kmem_arena_t 0xe00000010c418280
Attributes: KMT_DEFAULT KT_DEFAULT|KT_NUMA_ILV_DEFAULT|KT_NUMA_NO_DEFAULT KAS_ALIVE
Regular (Interleaved First) free lists account for 11421 pages (44.6mb):
Free objects represent 33.9mb (76%) of all memory of this type in this arena.
Per index summary (all cpu's):
  idx objsz pages  bytes   %  nobjs  used   free  %
    0  1032 11421  44.6m 100  34263  8243  26020 76
Per cpu free list details:
  idx cpu kmem_flist_hdr_t    pages   bytes  %  nobjs  used   free  %
    0   0 0xe00000012202da00    339    1.3m  3   1017   813    204 20
    0   1 0xe0000001630b1980    147  588.0k  1    441   371     70 16
    0   2 0xe000000163023e80   5921   23.1m 52  17763  1252  16511 93
    0   3 0xe0000001630f1580    198  792.0k  2    594   494    100 17
    0   4 0xe000000163264100    890    3.5m  8   2670  1719    951 36
    0   5 0xe00000015b180880   2680   10.5m 23   8040  2497   5543 69
    0   6 0xe00000015b111780    885    3.5m  8   2655   593   2062 78
    0   7 0xe00000015b131580    361    1.4m  3   1083   504    579 53

Here the cpu number is the index into the cache for the processor (usually the same as the index for top, etc., but I wouldn't expect that to always hold -- this is implementation dependent). The kmem_flist_hdr_t is the pointer to the particular cache header being reported; the rest of the fields are the same as in the summary for this allocation size.
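One quick way to confirm you are reading the columns correctly is to check that the per-cpu rows sum to the "Per index summary" line. Transcribing the rows above:

```python
# Per-cpu rows from the verbose output above, as (pages, nobjs, used, free).
per_cpu_rows = [
    (  339,  1017,  813,   204),   # cpu 0
    (  147,   441,  371,    70),   # cpu 1
    ( 5921, 17763, 1252, 16511),   # cpu 2
    (  198,   594,  494,   100),   # cpu 3
    (  890,  2670, 1719,   951),   # cpu 4
    ( 2680,  8040, 2497,  5543),   # cpu 5
    (  885,  2655,  593,  2062),   # cpu 6
    (  361,  1083,  504,   579),   # cpu 7
]

# Column-wise totals should match the summary row for index 0.
totals = [sum(col) for col in zip(*per_cpu_rows)]
print(totals)   # [11421, 34263, 8243, 26020] -- matches the summary
```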

For a variable arena, the same general guidelines hold:

Variable arena "misc region are" owns 16755 pages (65.4mb)
Arena-wide Caches ILV: 0 pages (0.0b) CF: 0 pages (0.0b)
Regular (Interleaved First) free lists account for 16755 pages (65.4mb):
Free objects represent 38.7mb (59%) of all memory of this type in this arena.
  idx objsz pages  bytes  %   nobjs  used    free  %
    0   120  5021  19.6m 30  155651 39815  115836 74
    1   248 10468  40.9m 62  157020 68527   88493 56
    4   632  1261   4.9m  8    7566  5594    1972 26
   19 20480     5  20.0k  0       1     1       0  0

Here there are multiple indices at the arena level, based on the index into the set of potential sizes, and the data is reported per size (as well as the arena-wide totals). Using verbose here will give per-processor cache breakdowns for each size index. Sizes which have never had memory assigned to them may not be reported (it is up to kmeminfo how it chooses to express that, but I believe omitting them for brevity is the common case).
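The same cross-checks apply per index in the variable arena: used + free should equal nobjs on every row, and the per-index pages should account for the arena's total. Against the sample output above:

```python
# Per-index rows from the variable arena, as (objsz, pages, nobjs, used, free).
rows = [
    (  120,  5021, 155651, 39815, 115836),   # idx 0
    (  248, 10468, 157020, 68527,  88493),   # idx 1
    (  632,  1261,   7566,  5594,   1972),   # idx 4
    (20480,     5,      1,     1,      0),   # idx 19
]

# Every row's object counts must balance.
for objsz, pages, nobjs, used, free in rows:
    assert used + free == nobjs

# The per-index pages account for all of the arena's 16755 pages.
print(sum(r[1] for r in rows))   # 16755
```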

Does this get you started?
CDR Project
Occasional Visitor

Re: kmeminfo -arena output understanding

Thank you very much, Morris, for the information shared in your post. I have an 11.23 server where some entries in the kmeminfo output are bothering me.

They are vx_global_kmcac, spinlock, and vx_buffer_kmcac. Can you please take some time to shed some light on these, which would help me further analyse the problem.


Thanks & Regards
Pradeep.
Don Morris_1
Honored Contributor

Re: kmeminfo -arena output understanding

Ok -- so you've got three arenas that are bothering you (they aren't "parameters", they're arenas on the system that get displayed in the output).

The spinlock arena is (logically enough) the arena which provides spinlocks to the rest of the system. [http://en.wikipedia.org/wiki/Spinlock if you need more information on what a spinlock is/synchronization primitives].

vx_global_kmcac[he] and vx_buffer_kmcac[he] are VxFS arenas. The first looks to be a global/misc arena [general allocations in VxFS that have no specific arena], the second looks to be for buffer metadata to manage buffer I/O.

There isn't much more that can be said unless you give some idea of why they are bothering you in the output. (Certainly they should be in the output somewhere. And whether they're top, middle or bottom rather depends on what the system is doing.)

I'd like to see specifically:

The global summary from the top of kmeminfo output (overall system state).
The per-arena output for these three arenas.
A statement of the machine workload state (is it idle? Are you running a FS intensive load? A non-FS load?) and what your concerns are.

Otherwise, I feel there's not much to go on here.
CDR Project
Occasional Visitor

Re: kmeminfo -arena output understanding

I am attaching the kmeminfo output, along with the arena details for the three arenas you asked about in your previous post.

The workload on this server is not very I/O intensive. I have another server running the same application with the same hardware resources, and all the kernel parameters on these two servers are tuned to the same values.

But still, the system memory utilization on the server whose kmeminfo output is attached to this post is higher than on the other server. All I can see from the kmeminfo output is that the three arenas are using more memory on the problem server than on the other one.

Are there any kernel parameters which control the memory utilization of these three arenas?

Thanks & Regards
Pradeep
Andy Jimeno
Occasional Visitor

Re: kmeminfo -arena output understanding

hi folks,

I am wondering where I could download the most recent version of kmeminfo for 11i v3.

thanks a lot.

CDR Project
Occasional Visitor

Re: kmeminfo -arena output understanding

Hi Andy,

I believe you need to contact your HP support representative for the latest version of kmeminfo for HP-UX 11.31.

regards
Pradeep.
Laurent Menase
Honored Contributor

Re: kmeminfo -arena output understanding

Hi Andy,

kmeminfo is an HP confidential tool which should only be used at HP's request.

Contact your support representative to get it and to interpret the results.
masthan_1
Advisor

Re: kmeminfo -arena output understanding

Hi to all,

We are also facing the same issue reported by Pradeep. On our server we get a "low free memory" error, which is triggered through the monitoring tool (BMC). When we contacted HP support, we were advised to tune the following parameters (as mentioned by Pradeep):

 

1) vx_buffer_kmcac
2) vx_global_kmcac
3) vx_rwsleeplock

We were also advised to tune these arenas after modifying the parameters vx_ninode from 0 to 64000 and vxfs_ifree_timelag from 0 to -1.

 

We are slightly apprehensive since this is a production server and we frequently get the low free memory alert. We just want to get rid of this alert, hence requesting your expert advice and ideas.

Regards,
Masthan