<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Physical memory usage of kernel in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198422#M689864</link>
    <description>As Don mentioned, there are a number of kernel configs that are dynamic in nature and allocated at boot time. Examples that come to mind are min_dbc_pct/max_dbc_pct; another is vx_ninode. I am sure there are hundreds of others.&lt;BR /&gt;&lt;BR /&gt;So unless you hard-code every one of these, your mileage will vary, especially when percentages of 16GB or 8GB are involved.</description>
    <pubDate>Thu, 15 May 2008 13:29:52 GMT</pubDate>
    <dc:creator>Tim Nelson</dc:creator>
    <dc:date>2008-05-15T13:29:52Z</dc:date>
    <item>
      <title>Physical memory usage of kernel</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198418#M689860</link>
      <description>Hi there,&lt;BR /&gt;&lt;BR /&gt;I have two rx4640s, each with 16GB of physical memory installed. They run the same OS version, but the kernel's physical memory usage reported by the kmeminfo command differs.&lt;BR /&gt;&lt;BR /&gt;Server 1:&lt;BR /&gt;=========&lt;BR /&gt;Physical memory usage summary (in page/byte/percent):&lt;BR /&gt;&lt;BR /&gt;Physical memory       =  4189222   16.0g 100%  &lt;BR /&gt;Free memory           =  1795653    6.8g  43%  &lt;BR /&gt;User processes        =  1682435    6.4g  40%  details with -user&lt;BR /&gt;System                =   695314    2.7g  17%  &lt;BR /&gt;  Kernel              =   485852    1.9g  12%  kernel text and data&lt;BR /&gt;    Dynamic Arenas    =   199482  779.2m   5%  details with -arena&lt;BR /&gt;      vx_inode_cache  =    43323  169.2m   1%  &lt;BR /&gt;      vx_global_pool  =    34866  136.2m   1%  &lt;BR /&gt;      spinlock        =    24388   95.3m   1%  &lt;BR /&gt;      vx_buffer_cache =    17888   69.9m   0%  &lt;BR /&gt;      vm_pfn2v_arena  =    16566   64.7m   0%  &lt;BR /&gt;      Other arenas    =    62451  243.9m   1%  details with -arena&lt;BR /&gt;    Super page pool   =    20010   78.2m   0%  details with -kas&lt;BR /&gt;    Static Tables     =   200994  785.1m   5%  details with -static&lt;BR /&gt;      pfdat           =    98184  383.5m   2%  &lt;BR /&gt;      nbuf            =    45824  179.0m   1%  bufcache headers&lt;BR /&gt;      vhpt            =    32768  128.0m   1%  &lt;BR /&gt;      bufhash         =     8192   32.0m   0%  bufcache hash headers&lt;BR /&gt;      text            =     7389   28.9m   0%  vmunix text section&lt;BR /&gt;      Other tables    =     8636   33.7m   0%  details with -static&lt;BR /&gt;  Buffer cache        =   209462  818.2m   5%  details with -bufcache&lt;BR /&gt;&lt;BR /&gt;Server 2:&lt;BR /&gt;=========&lt;BR /&gt;Physical memory usage summary (in page/byte/percent):&lt;BR /&gt;&lt;BR /&gt;Physical memory       =  2092068    8.0g 100%  &lt;BR /&gt;Free memory           =   923798    3.5g  44%  &lt;BR /&gt;User processes        =   779572    3.0g  37%  details with -user&lt;BR /&gt;System                =   375567    1.4g  18%  &lt;BR /&gt;  Kernel              =   270963    1.0g  13%  kernel text and data&lt;BR /&gt;    Dynamic Arenas    =    82796  323.4m   4%  details with -arena&lt;BR /&gt;      vx_global_pool  =    18910   73.9m   1%  &lt;BR /&gt;      spinlock        =    11721   45.8m   1%  &lt;BR /&gt;      vm_pfn2v_arena  =     8406   32.8m   0%  &lt;BR /&gt;      VFD_BT_NODE     =     5242   20.5m   0%  &lt;BR /&gt;      vx_inode_cache  =     4246   16.6m   0%  &lt;BR /&gt;      Other arenas    =    34271  133.9m   2%  details with -arena&lt;BR /&gt;    Super page pool   =    24050   93.9m   1%  details with -kas&lt;BR /&gt;    Static Tables     =   119341  466.2m   6%  details with -static&lt;BR /&gt;      pfdat           =    49032  191.5m   2%  &lt;BR /&gt;      nbuf            =    35184  137.4m   2%  bufcache headers&lt;BR /&gt;      vhpt            =    16384   64.0m   1%  &lt;BR /&gt;      text            =     7544   29.5m   0%  vmunix text section&lt;BR /&gt;      bufhash         =     4096   16.0m   0%  bufcache hash headers&lt;BR /&gt;      Other tables    =     7100   27.7m   0%  details with -static&lt;BR /&gt;  Buffer cache        =   104604  408.6m   5%  details with -bufcache&lt;BR /&gt;&lt;BR /&gt;Why is the kernel text and data on server 1 greater than on server 2?&lt;BR /&gt;&lt;BR /&gt;I hope you can help me.&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Thu, 15 May 2008 11:22:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198418#M689860</guid>
      <dc:creator>Achilles_2</dc:creator>
      <dc:date>2008-05-15T11:22:20Z</dc:date>
    </item>
    <item>
      <title>Re: Physical memory usage of kernel</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198419#M689861</link>
      <description>Sorry, server 2 has 8GB and server 1 has 16GB.</description>
      <pubDate>Thu, 15 May 2008 11:28:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198419#M689861</guid>
      <dc:creator>Achilles_2</dc:creator>
      <dc:date>2008-05-15T11:28:33Z</dc:date>
    </item>
    <item>
      <title>Re: Physical memory usage of kernel</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198420#M689862</link>
      <description>Hi:&lt;BR /&gt;&lt;BR /&gt;First, it would seem that server-1 has 16GB of physical memory whereas server-2 has 8GB.&lt;BR /&gt;&lt;BR /&gt;Regardless, server-1 shows about 12% kernel text and data whereas server-2 shows 13%.&lt;BR /&gt;&lt;BR /&gt;Do you really consider that a difference?  I certainly don't.  I presume, too, that these servers were booted at different times and have undergone different workloads since.  From that standpoint, then, you aren't comparing fairly either.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Thu, 15 May 2008 11:33:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198420#M689862</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2008-05-15T11:33:27Z</dc:date>
    </item>
    <item>
      <title>Re: Physical memory usage of kernel</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198421#M689863</link>
      <description>Text? All I can think of is a different driver module on one vs. the other. The tiny difference you're talking about would only entice me to track it down if I truly had nothing better to do with my life.&lt;BR /&gt;&lt;BR /&gt;Data -- well, first there's the data that describes memory itself (pfn-to-virtual entries, physical frame descriptors (pfdats)). Those are allocated per memory page... so more memory pages, more of them. (Hence they stay a flat percentage cost, but you're trying to compare the raw costs.)&lt;BR /&gt;&lt;BR /&gt;Other than that -- the rest is dynamic data differences. Dynamic being dynamic... it would be a function of the load. I really don't see anything odd here.</description>
      <pubDate>Thu, 15 May 2008 12:52:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198421#M689863</guid>
      <dc:creator>Don Morris_1</dc:creator>
      <dc:date>2008-05-15T12:52:45Z</dc:date>
    </item>
    <item>
      <title>Re: Physical memory usage of kernel</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198422#M689864</link>
      <description>As Don mentioned, there are a number of kernel configs that are dynamic in nature and allocated at boot time. Examples that come to mind are min_dbc_pct/max_dbc_pct; another is vx_ninode. I am sure there are hundreds of others.&lt;BR /&gt;&lt;BR /&gt;So unless you hard-code every one of these, your mileage will vary, especially when percentages of 16GB or 8GB are involved.</description>
      <pubDate>Thu, 15 May 2008 13:29:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/physical-memory-usage-of-kernel/m-p/4198422#M689864</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2008-05-15T13:29:52Z</dc:date>
    </item>
  </channel>
</rss>

