<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Growing process memory in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931071#M610380</link>
    <description>How about the shared memory area? &lt;BR /&gt;Run ipcs -mob to check whether any segments have zero processes attached.</description>
    <pubDate>Wed, 31 Jan 2007 12:46:15 GMT</pubDate>
    <dc:creator>John Guster</dc:creator>
    <dc:date>2007-01-31T12:46:15Z</dc:date>
    <item>
      <title>Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931056#M610365</link>
      <description>I have two systems under test.&lt;BR /&gt;On the first, the memory of a process keeps growing (to at least 400 MB, after which the kernel kills it). On the second, running the same process, memory usage stays normal (24 MB).&lt;BR /&gt;&lt;BR /&gt;(uname -a)&lt;BR /&gt;HP-UX first B.11.11 U 9000/800 1106414681 unlimited-user license&lt;BR /&gt;&lt;BR /&gt;HP-UX second B.11.11 U 9000/800 3678999180 unlimited-user license&lt;BR /&gt;&lt;BR /&gt;The process uses STL containers (map, list, ...) and the TCSI Solution Core OSP framework.&lt;BR /&gt;&lt;BR /&gt;Questions:&lt;BR /&gt;1 - Are there particular load conditions that could produce results of this type? &lt;BR /&gt;&lt;BR /&gt;2 - Is it possible that the memory is freed automatically on one system but not on the other?&lt;BR /&gt;&lt;BR /&gt;3 - How can I monitor and investigate these anomalies?&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;Benedetto.&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Jan 2007 05:12:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931056#M610365</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-01-23T05:12:01Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931057#M610366</link>
      <description>Benedetto,&lt;BR /&gt;if you are running the same executable on two machines and the behaviour is different, you must have a different environment/libraries.&lt;BR /&gt;&lt;BR /&gt;Have you tried re-compiling the code into a standalone module (no calls to external libs) and re-running it?&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Jan 2007 05:16:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931057#M610366</guid>
      <dc:creator>Peter Godron</dc:creator>
      <dc:date>2007-01-23T05:16:28Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931058#M610367</link>
      <description>I cannot build the process as a standalone module, because it is part of a larger package that contains other processes linking the same shared libraries.&lt;BR /&gt;&lt;BR /&gt;I would like to understand what is happening...&lt;BR /&gt;&lt;BR /&gt;I need some suggestions on how to analyze the problem and try to find the cause.</description>
      <pubDate>Tue, 23 Jan 2007 05:26:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931058#M610367</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-01-23T05:26:43Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931059#M610368</link>
      <description>You should use gdb's leak detection commands to see where it is leaking/growing.  You can download gdb for free.</description>
      <pubDate>Tue, 23 Jan 2007 06:06:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931059#M610368</guid>
      <dc:creator>Dennis Handly</dc:creator>
      <dc:date>2007-01-23T06:06:10Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931060#M610369</link>
      <description>Hi,&lt;BR /&gt;I have tried to find the memory leaks with wdb, but I have not found any. There is a lot of virtual memory in use, though. I use the STL very heavily: map, vector, list... etc.&lt;BR /&gt;&lt;BR /&gt;I think the problem comes from the free or delete operations when the memory used by the STL objects is released.&lt;BR /&gt;&lt;BR /&gt;(In the wdb memory-use window, there are many new [] operations...)&lt;BR /&gt;&lt;BR /&gt;Am I on the right track?&lt;BR /&gt;Thanks.</description>
      <pubDate>Tue, 30 Jan 2007 06:38:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931060#M610369</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-01-30T06:38:58Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931061#M610370</link>
      <description>&amp;gt;I have tried to find through wdb the memory leaks, but I have not found any.&lt;BR /&gt;&lt;BR /&gt;If no leaks, can you compare the systems by using "info heap" on both.&lt;BR /&gt;&lt;BR /&gt;There should be a big difference if one is 400 Mb and the other 24 Mb.  Do both processes have the same inputs?</description>
      <pubDate>Tue, 30 Jan 2007 06:52:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931061#M610370</guid>
      <dc:creator>Dennis Handly</dc:creator>
      <dc:date>2007-01-30T06:52:38Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931062#M610371</link>
      <description>The processes have the same input.&lt;BR /&gt;&lt;BR /&gt;In the thread &lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1076848," target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1076848,&lt;/A&gt; Don Morris talks about a similar problem.&lt;BR /&gt;&lt;BR /&gt;Here is the swapinfo output for the two nodes:&lt;BR /&gt;&lt;BR /&gt;FIRST &amp;gt; swapinfo -atm&lt;BR /&gt;             Mb      Mb      Mb   PCT  START/      Mb&lt;BR /&gt;TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME&lt;BR /&gt;dev        4096    2640    1456   64%       0       -    1  /dev/vg00/lvol2&lt;BR /&gt;dev        1600       0    1600    0%       0       -    2  /dev/vg00/lvol_swap&lt;BR /&gt;dev        8000    2635    5365   33%       0       -    1  /dev/vg_swap/lvol_swap2&lt;BR /&gt;reserve       -    8205   -8205&lt;BR /&gt;memory     6286    4217    2069   67%&lt;BR /&gt;total     19982   17697    2285   89%       -       0    -&lt;BR /&gt;&lt;BR /&gt;SECOND &amp;gt; swapinfo -atm&lt;BR /&gt;             Mb      Mb      Mb   PCT  START/      Mb&lt;BR /&gt;TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME&lt;BR /&gt;dev        4096    2751    1345   67%       0       -    1  /dev/vg00/lvol2&lt;BR /&gt;dev        8192    2752    5440   34%       0       -    1  /dev/vg00/lvol_swap&lt;BR /&gt;reserve       -    6785   -6785&lt;BR /&gt;memory     6282    5585     697   89%&lt;BR /&gt;total     18570   17873     697   96%       -       0    -</description>
      <pubDate>Tue, 30 Jan 2007 08:58:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931062#M610371</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-01-30T08:58:27Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931063#M610372</link>
      <description>&amp;gt;The processes have the same input.&lt;BR /&gt;&lt;BR /&gt;Do you have the same patches on each?&lt;BR /&gt;And what do gdb's heap commands say about each?&lt;BR /&gt;&lt;BR /&gt;&amp;gt;Don Morris talks about a similar problem.&lt;BR /&gt;&lt;BR /&gt;Yes, but there never was a resolution.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;Here is the swapinfo output for the two nodes:&lt;BR /&gt;&lt;BR /&gt;This just shows more total VM on FIRST.</description>
      <pubDate>Tue, 30 Jan 2007 09:08:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931063#M610372</guid>
      <dc:creator>Dennis Handly</dc:creator>
      <dc:date>2007-01-30T09:08:59Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931064#M610373</link>
      <description>When I check through gdb (wdb), info heap gives me back "Heap analysis is not enabled now."&lt;BR /&gt;&lt;BR /&gt;How do I enable it?</description>
      <pubDate>Tue, 30 Jan 2007 09:29:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931064#M610373</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-01-30T09:29:28Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931065#M610374</link>
      <description>Extract of info heap:&lt;BR /&gt;&lt;BR /&gt;No.   Total bytes     Blocks     Address     Function&lt;BR /&gt;0        4055040        30      0x62867000   __alloc_stack()&lt;BR /&gt;1        2547160        28945   0x40f49700   __libc_mutex_alloc()&lt;BR /&gt;2        911168         1165    0x40ba2cf0   operator new(unsigned long)()&lt;BR /&gt;3        370494         1       0x407050f0   sktsfMalloc()&lt;BR /&gt;4        340105         12940   0x4112d5c8   operator new(unsigned long)()&lt;BR /&gt;5        281764         6       0x4076afa8   lxldalc()&lt;BR /&gt;6        262496         1       0x408094a0   nsgbliuc()&lt;BR /&gt;7        257294         9891    0x40f4f828   operator new(unsigned long)()&lt;BR /&gt;8        230296         2617    0x406891b0   __libc_mutex_alloc()&lt;BR /&gt;9        157144         4936    0x40b39ad8   operator new(unsigned long)()&lt;BR /&gt;10       145768         6       0x40f60b60   operator new(unsigned long)()&lt;BR /&gt;11       139264         1       0x4087d2c8   slwmmgetmem()&lt;BR /&gt;12       94080          84      0x41332130   operator new(unsigned long)()&lt;BR /&gt;13       69632          1       0x6297d000   __alloc_stack()&lt;BR /&gt;14       65620          1       0x407f9438   nsgbliuc()&lt;BR /&gt;15       65620          1       0x407e93d0   nsgbliuc()&lt;BR /&gt;16       65620          1       0x407d9368   nsgbliuc()&lt;BR /&gt;17       65620          1       0x407c9300   nsgbliuc()&lt;BR /&gt;18       65608          11      0x409349b0   operator new [](unsigned long)()&lt;BR /&gt;19       65536          1       0x40af6c10   operator new [](unsigned long)()&lt;BR /&gt;20       65288          9       0x4077f818   sktsfMalloc()&lt;BR /&gt;21       65000          1       0x40558520   operator new [](unsigned long)()&lt;BR /&gt;22       57344          1       0x40849610   nsgbliuc()&lt;BR /&gt;23       49468          9       0x40c1c128   operator new(unsigned long)()&lt;BR /&gt;24       30561          
1165    0x412bf550   operator new(unsigned long)()&lt;BR /&gt;25       28160          55      0x4054e7d8   operator new [](unsigned long)()&lt;BR /&gt;26       24064          47      0x40933dd8   operator new [](unsigned long)()&lt;BR /&gt;27       20340          1       0x4075f840   lxldalc()&lt;BR /&gt;28       20000          1       0x40548d48   operator new [](unsigned long)()&lt;BR /&gt;29       18944          8       0x409c8e60   __nss_XbyY_buf_alloc()&lt;BR /&gt;30       17896          1       0x40765030   sktsfMalloc()&lt;BR /&gt;31       16844          5       0x40c271f0   operator new(unsigned long)()&lt;BR /&gt;32       16416          2       0x406fcef8   _findbuf()&lt;BR /&gt;33       16384          8       0x4097c300   memset()&lt;BR /&gt;34       16280          10      0x40e14a70   operator new(unsigned long)()&lt;BR /&gt;35       9320           1165    0x40962d80   operator new(unsigned long)()&lt;BR /&gt;36       9272           61      0x40782a90   operator new(unsigned long)()&lt;BR /&gt;37       9036           753     0x406e7018   operator new(unsigned long)()&lt;BR /&gt;38       8772           731     0x406db368   operator new(unsigned long)()&lt;BR /&gt;39       8704           17      0x40592f68   operator new [](unsigned long)()&lt;BR /&gt;40       8240           2       0x4076bb58   sktsfMalloc()&lt;BR /&gt;41       8060           31      0x405536a0   operator new(unsigned long)()&lt;BR /&gt;42       7728           1       0x408608b8   nsgbliuc()&lt;BR /&gt;43       7080           30      0x40575b30   __pthread_id_lookup()&lt;BR /&gt;44       6720           5       0x40942d38   operator new(unsigned long)()&lt;BR /&gt;45       6144           12      0x4054f458   operator new [](unsigned long)()&lt;BR /&gt;46       5760           30      0x406f7a50   __private_data_setup()&lt;BR /&gt;47       5632           1       0x4053c640   __libCsup_mutex_init()&lt;BR /&gt;48       4480           1       0x406fef18   localtime_r()&lt;BR /&gt;49       4448  
         1       0x4085b368   nsgbliuc()&lt;BR /&gt;50       4264           1       0x408741d8   slwmmgetmem()&lt;BR /&gt;51       4200           1       0x407bf8b0   nngsini_init_streams()&lt;BR /&gt;52       4187           295     0x40580470   operator new [](unsigned long)()&lt;BR /&gt;53       4136           1       0x4077c740   sktsfMalloc()&lt;BR /&gt;54       4096           2       0x408626f8   nlhtnsl()&lt;BR /&gt;55       4096           2       0x40862f08   nlhtnsl()&lt;BR /&gt;56       3920           35      0x4085dd30   operator new(unsigned long)()&lt;BR /&gt;57       3584           32      0x4085e530   operator new(unsigned long)()&lt;BR /&gt;58       3540           295     0x4057dea8   operator new(unsigned long)()&lt;BR /&gt;59       3468           1       0x40875290   lpmrist()&lt;BR /&gt;60       3360           35      0x406f4ac0   operator new(unsigned long)()&lt;BR /&gt;61       3300           55      0x40546928   operator new(unsigned long)()&lt;BR /&gt;62       3072           1       0x40c1b0b0   operator new(unsigned long)()&lt;BR /&gt;63       2664           14      0x40555bd0   operator new [](unsigned long)()&lt;BR /&gt;64       2536           159     0x406e8c08   operator new [](unsigned long)()&lt;BR /&gt;65       2440           2       0x40c23ed0   operator new(unsigned long)()&lt;BR /&gt;66       2368           1       0x408646c8   __nss_XbyY_buf_alloc()&lt;BR /&gt;67       2268           1       0x40865018   nsmal()&lt;BR /&gt;68       2208           1       0x4076b2a8   kouodalc()&lt;BR /&gt;69       2184           31      0x406d9058   operator new [](unsigned long)()&lt;BR /&gt;70       2160           90      0x40864328   operator new [](unsigned long)()&lt;BR /&gt;71       2136           1       0x407647c8   lxldalc()&lt;BR /&gt;72       2104           1       0x408ab6e0   __nss_XbyY_buf_alloc()&lt;BR /&gt;73       2070           1       0x408aa690   nsbGetBFS()&lt;BR /&gt;74       2070           1       0x408a8c08   
nsbGetBFS()&lt;BR /&gt;75       2070           1       0x408aaeb8   nsbGetBFS()&lt;BR /&gt;76       2070           1       0x408a9e68   nsbGetBFS()&lt;BR /&gt;77       2070           1       0x408a9430   nsbGetBFS()&lt;BR /&gt;78       2068           11      0x409087a0   operator new(unsigned long)()&lt;BR /&gt;79       2048           1       0x40b08c60   operator new [](unsigned long)()&lt;BR /&gt;80       2048           1       0x40b09470   operator new [](unsigned long)()&lt;BR /&gt;81       2048           1       0x40b08450   operator new [](unsigned long)()&lt;BR /&gt;82       2048           1       0x40b07c40   operator new [](unsigned long)()&lt;BR /&gt;83       2048           1       0x40b07430   operator new [](unsigned long)()&lt;BR /&gt;84       2048           1       0x40ad09d0   operator new [](unsigned long)()&lt;BR /&gt;&lt;BR /&gt;....</description>
      <pubDate>Tue, 30 Jan 2007 09:52:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931065#M610374</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-01-30T09:52:34Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931066#M610375</link>
      <description>The simple answer is that your RAM is far too small (in both machines) for the processes that you are running. Once all the processes use up available RAM, processes will be deactivated, then paged out to swap. When those processes are needed, other processes must be deactivated and paged out. By doubling your current RAM, you can avoid this massive impact on performance. vmstat will help in showing the memory pressure -- look at the po column. 2-digit (or larger) numbers indicate a significant lack of RAM. Use this command to sort the top 10 processes by local (heap) memory usage:&lt;BR /&gt; &lt;BR /&gt;UNIX95=1 ps -e -o vsz,ruser,pid,args | sort -rn | head -10&lt;BR /&gt; &lt;BR /&gt;(the UNIX95 variable is required to activate the -o options -- see man ps). Ideally, a busy system uses virtually no swap space for pageouts. Some swap space is used for memory-mapped files, which is normal.</description>
      <pubDate>Tue, 30 Jan 2007 10:45:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931066#M610375</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2007-01-30T10:45:47Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931067#M610376</link>
      <description>vmstat of the first node:&lt;BR /&gt;&lt;BR /&gt;vmstat&lt;BR /&gt;         procs           memory                   page                              faults       cpu&lt;BR /&gt;    r     b     w      avm    free   re   at    pi   po    fr   de    sr     in     sy    cs  us sy id&lt;BR /&gt;    6     1     0  2628838   69990  471   91    17    1     0    0    17    835  45843 11734  33 13 53&lt;BR /&gt;&lt;BR /&gt;po = 1&lt;BR /&gt;&lt;BR /&gt;and second node:&lt;BR /&gt;&lt;BR /&gt;vmstat&lt;BR /&gt;         procs           memory                   page                              faults       cpu&lt;BR /&gt;    r     b     w      avm    free   re   at    pi   po    fr   de    sr     in     sy    cs  us sy id&lt;BR /&gt;    2     0     0  2017996   15424  401  150    11    0     0    0    12   1162  28140  3665   9  7 84&lt;BR /&gt;&lt;BR /&gt;po = 0.&lt;BR /&gt;&lt;BR /&gt;So, would adding physical memory resolve the problem?</description>
      <pubDate>Tue, 30 Jan 2007 10:58:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931067#M610376</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-01-30T10:58:32Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931068#M610377</link>
      <description>Based on the vmstat numbers, po is too small to be a concern. But this is a single measurement and the system appears not to be paging at all. So the applications may be highly interactive which means that the page outs took place at some other time. It is quite possible to run a lot of applications that won't all fit into memory at the same time, as long as they aren't computing at the same time. Once all the apps start computing, the po rate will skyrocket and performance will suffer. If the application is primarily compute-bound (very little disk I/O) then not much can be done to improve performance except a rewrite or faster CPUs.</description>
      <pubDate>Tue, 30 Jan 2007 20:11:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931068#M610377</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2007-01-30T20:11:47Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931069#M610378</link>
      <description>&amp;gt;"Heap analysis is not enabled now." To enable?&lt;BR /&gt;&lt;BR /&gt;I assume you figured this out because you produced some output in your next message?&lt;BR /&gt;&lt;BR /&gt;Where is the results for the other system?&lt;BR /&gt;&lt;BR /&gt;The total from what you provide is only 10.9 Mb.  Where is the 400 Mb example?&lt;BR /&gt;&lt;BR /&gt;If you want more details on each, you can use "info heap #".  This will give a call stack for each.  You may need to increase the depth if that isn't enough.&lt;BR /&gt;&lt;BR /&gt;1 2547160 28945 0x40f49700 __libc_mutex_alloc&lt;BR /&gt;8 230296 2617 0x406891b0 __libc_mutex_alloc&lt;BR /&gt;47 5632 1 0x4053c640 __libCsup_mutex_init&lt;BR /&gt;&lt;BR /&gt;The first two if these can be greatly reduced if the all come from C++ strings.  Just compile with -D__HPACC_FIXED_REFCNT_MUTEX:&lt;BR /&gt;&lt;A href="http://www.docs.hp.com/en/5991-4872/ch01s03.html#chdgeeid" target="_blank"&gt;http://www.docs.hp.com/en/5991-4872/ch01s03.html#chdgeeid&lt;/A&gt;</description>
      <pubDate>Tue, 30 Jan 2007 21:15:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931069#M610378</guid>
      <dc:creator>Dennis Handly</dc:creator>
      <dc:date>2007-01-30T21:15:14Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931070#M610379</link>
      <description>In the info heap list, the biggest allocation is #0. The command info heap 0 returns:&lt;BR /&gt;&lt;BR /&gt;(gdb) info heap 0&lt;BR /&gt;9357656 bytes in 106337 blocks (39.57% of all bytes allocated)&lt;BR /&gt;These range in size from 88 to 88 bytes and are allocated&lt;BR /&gt;#0  __libc_mutex_alloc() from /lib/libc.2&lt;BR /&gt;#1  HPMutexWrapper::init(void)() from /lib/libstd.2&lt;BR /&gt;#2  basic_string&lt;CHAR&gt;,allocator&amp;gt;::getRep(unsigned long, unsigned long)() at /sw_common/aCC-3.31/aCC/include/string.cc:112&lt;BR /&gt;#3  basic_string&lt;CHAR&gt;,allocator&amp;gt;::replace(unsigned long, unsigned long, char const *, unsigned long, unsigned long, unsigned long)() at /sw_common/aCC-3.31/aCC/include/string.cc:459&lt;/CHAR&gt;&lt;/CHAR&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;(gdb) info heap 1&lt;BR /&gt;4055040 bytes in 30 blocks (17.15% of all bytes allocated)&lt;BR /&gt;These range in size from 135168 to 135168 bytes and are allocated&lt;BR /&gt;#0  __alloc_stack() from /lib/libpthread.1&lt;BR /&gt;#1  __pthread_id_lookup() from /lib/libpthread.1&lt;BR /&gt;#2  __pthread_create_system() from /lib/libpthread.1&lt;BR /&gt;#3  thrThreadServerPOSIX::StartThread(void)() from /opt/tcsi/osp/5.4/hpux1111/lib/libOspThreads.sl&lt;BR /&gt;&lt;BR /&gt;The allocated memory is not a lot, but it grows little by little over time.&lt;BR /&gt;&lt;BR /&gt;I have compiled the processes with the -D__HPACC_FIXED_REFCNT_MUTEX directive and I'm watching the behaviour.</description>
      <pubDate>Wed, 31 Jan 2007 10:00:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931070#M610379</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-01-31T10:00:55Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931071#M610380</link>
      <description>How about the shared memory area? &lt;BR /&gt;Run ipcs -mob to check whether any segments have zero processes attached.</description>
      <pubDate>Wed, 31 Jan 2007 12:46:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931071#M610380</guid>
      <dc:creator>John Guster</dc:creator>
      <dc:date>2007-01-31T12:46:15Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931072#M610381</link>
      <description>&amp;gt;I have compiled the processes with the -D__HPACC_FIXED_REFCNT_MUTEX directive and I'm watching the behaviour.&lt;BR /&gt;&lt;BR /&gt;This should remove all of #0.  The previous #47 is the fixed pool of 64 mutexes.  You might look at #8.&lt;BR /&gt;&lt;BR /&gt;__alloc_stack is used to allocate thread stacks.</description>
      <pubDate>Wed, 31 Jan 2007 17:23:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931072#M610381</guid>
      <dc:creator>Dennis Handly</dc:creator>
      <dc:date>2007-01-31T17:23:10Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931073#M610382</link>
      <description>For John Guster.&lt;BR /&gt;The output of the command:&lt;BR /&gt;&lt;BR /&gt;IPAHU028 &amp;gt; ipcs -mob&lt;BR /&gt;IPC status from /dev/kmem as of Thu Feb  1 08:20:23 2007&lt;BR /&gt;T         ID     KEY        MODE        OWNER     GROUP NATTCH      SEGSZ&lt;BR /&gt;Shared Memory:&lt;BR /&gt;m          0 0x411865f3 --rw-rw-rw-      root      root      0        348&lt;BR /&gt;m          1 0x4e0c0002 --rw-rw-rw-      root      root      1      61760&lt;BR /&gt;m          2 0x411c06df --rw-rw-rw-      root      root      1       8192&lt;BR /&gt;m       6735 0x0c6629c9 --rw-r-----      root       sys      1   33657808&lt;BR /&gt;m          4 0x06347849 --rw-rw-rw-      root      root      0      77384&lt;BR /&gt;m        617 0x4910c3e3 --rw-r--r--      root      root      0      22908&lt;BR /&gt;m      50190 0x5e14003f --rw-------      root      root      1        512&lt;BR /&gt;m      29995 0x00000000 D-rw-------      root      root      8    1052672&lt;BR /&gt;m      11636 0x00000000 D-rw-------       www     other      8     184324&lt;BR /&gt;m       7353 0x0654b034 --rw-rw----      root       sys    108  337088512&lt;BR /&gt;m       2458 0xe8ba27c4 --rw-rw----      root       sys    104  722964480&lt;BR /&gt;m       4907 0x353be1d0 --rw-rw----      root       sys     63  337088512&lt;BR /&gt;m      10416 0x32e46938 --rw-rw----      root       sys     73  190812160&lt;BR /&gt;m       9193 0xf5f50cc8 --rw-rw----      root       sys     37   72323072&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;For Dennis Handly.&lt;BR /&gt;Could you explain?&lt;BR /&gt;Excuse me, but I do not use these tools often.&lt;BR /&gt;&lt;BR /&gt;Should I change the environment variable aCC_MUTEX_ARRAY_SIZE?</description>
      <pubDate>Thu, 01 Feb 2007 02:34:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931073#M610382</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-02-01T02:34:16Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931074#M610383</link>
      <description>&amp;gt;I must change the environment variable aCC_MUTEX_ARRAY_SIZE?&lt;BR /&gt;&lt;BR /&gt;How many CPUs do you have?  The default is 64 to make sure there isn't too much contention when the runtime randomly shares mutexes.</description>
      <pubDate>Thu, 01 Feb 2007 03:00:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931074#M610383</guid>
      <dc:creator>Dennis Handly</dc:creator>
      <dc:date>2007-02-01T03:00:46Z</dc:date>
    </item>
    <item>
      <title>Re: Growing process memory</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931075#M610384</link>
      <description>The number of CPUs is 2 or 4.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The output of info heap today is:&lt;BR /&gt;&lt;BR /&gt;No.   Total bytes     Blocks     Address     Function&lt;BR /&gt;0        290869526      3870584  0x4eb0ba90   ???()&lt;BR /&gt;1        9357656        106337  0x411e3440   __libc_mutex_alloc()&lt;BR /&gt;2        4055040        30      0x626bb000   __alloc_stack()&lt;BR /&gt;3        3355456        4269    0x415d42d0   operator new(unsigned long)()&lt;BR /&gt;4        1253305        47676   0x41ac9998   operator new(unsigned long)()&lt;BR /&gt;5        947422         36419   0x42561ac8   operator new(unsigned long)()&lt;BR /&gt;6        569672         17928   0x40f7ab58   operator new(unsigned long)()&lt;BR /&gt;7        372448         48      0x41015c90   operator new(unsigned long)()&lt;BR /&gt;8        370494         1       0x407050f0   sktsfMalloc()&lt;BR /&gt;9        344960         308     0x4100e120   operator new(unsigned long)()&lt;BR /&gt;10       281764         6       0x40769910   lxldalc()&lt;BR /&gt;....&lt;BR /&gt;&lt;BR /&gt;With "info heap 0":&lt;BR /&gt;&lt;BR /&gt;(gdb) info heap 0&lt;BR /&gt;290869526 bytes in 3870584 blocks (92.49% of all bytes allocated)&lt;BR /&gt;These range in size from 4 to 34812 bytes and are allocated&lt;BR /&gt;&lt;BR /&gt;How can I investigate this memory?</description>
      <pubDate>Thu, 01 Feb 2007 03:16:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/growing-process-memory/m-p/3931075#M610384</guid>
      <dc:creator>Benedetto Mangiapane</dc:creator>
      <dc:date>2007-02-01T03:16:39Z</dc:date>
    </item>
  </channel>
</rss>

