<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Memory leak in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135663#M26063</link>
<description>It seems that nobody can conclude anything from the CPU sampling.&lt;BR /&gt;&lt;BR /&gt;As 750 MB is taken in 2 minutes, raising PGFLQ will only extend the life of the process a little. So we restart it when it reaches 1 GB. And hope the exchanges will calm down again (better for my money too).&lt;BR /&gt;&lt;BR /&gt;Wim</description>
    <pubDate>Wed, 22 Oct 2008 14:28:15 GMT</pubDate>
    <dc:creator>Wim Van den Wyngaert</dc:creator>
    <dc:date>2008-10-22T14:28:15Z</dc:date>
    <item>
      <title>Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135633#M26033</link>
<description>We have a program running on a 2 CPU GS160.&lt;BR /&gt;It hasn't changed since 2003 and is doing TCP (Reuters Sink) and DECnet (other VMS nodes) communications.&lt;BR /&gt;&lt;BR /&gt;For the past few days the load has been heavy due to the heavy activity on the stock exchanges. Under this heavy load, it starts consuming a lot more memory and after a few minutes it runs out of memory (normally +- 600 MB for the whole process tree, now going to 1500 MB).&lt;BR /&gt;The process tree is restarted every day and each time the problem comes back.&lt;BR /&gt;&lt;BR /&gt;I included the PSDC sampling report taken while the process goes from 750 MB to 1500 MB. The process is named FOE_RGS_SRV (and consumes the CPU together with FOE_POS_SRV).&lt;BR /&gt;&lt;BR /&gt;Is anyone able to make something of it?&lt;BR /&gt;(VMS 7.3, TCP 5.3 ECO 2, DECnet 7.3 ECO 3)&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 14 Oct 2008 14:39:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135633#M26033</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2008-10-14T14:39:36Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135634#M26034</link>
<description>Until you find the leak, you can add a regular (every 2 hours?) call to $PURGWS in your program. Be prepared for more pagefile utilisation.&lt;BR /&gt;&lt;BR /&gt;Of course this is a temporary workaround.</description>
      <pubDate>Tue, 14 Oct 2008 15:03:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135634#M26034</guid>
      <dc:creator>labadie_1</dc:creator>
      <dc:date>2008-10-14T15:03:38Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135635#M26035</link>
<description>What's the QBB/CPU/memory layout?  Do you use global pages in the process tree?&lt;BR /&gt;&lt;BR /&gt;Starting in 7.3, global pages are mapped allocating pages across QBBs.  This led to an application running multiple processes against global pages creating heavily fragmented global allocations during the day.  Eventually, performance deteriorated and the application hung, requiring a restart.  &lt;BR /&gt;&lt;BR /&gt;HP recommended application changes to the way global pages were allocated.  The end solution adopted was moving to GS-1280s.  Updating the hardware layout may mitigate the issue in your case.&lt;BR /&gt;&lt;BR /&gt;Andy</description>
      <pubDate>Tue, 14 Oct 2008 15:23:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135635#M26035</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2008-10-14T15:23:14Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135636#M26036</link>
      <description>An application that's been running perfectly since 2003 is not an application that is immune to latent bugs.  &lt;BR /&gt;&lt;BR /&gt;Application (and system) load is one of the classic and salient triggers for exposing bugs and race conditions and leaks.&lt;BR /&gt;&lt;BR /&gt;Identify what memory resource(s) are leaking, and work from there.   This can involve digging around in the process data structures, and in the process address range.  (If restarting the application cures these, then it's usually a process private leak.  That doesn't, however, mean it's your code or HP code.)&lt;BR /&gt;&lt;BR /&gt;Your attachment shows PC samplings, and those are not on point for a memory leak; there's not a correlation between cold or hot PC ranges and memory use.  Yes, you do have to access the range to get the leak, but the range of code doesn't have to be hot.   &lt;BR /&gt;&lt;BR /&gt;Small leaks in hot code and big leaks in cold code can ruin your uptime statistics.  And nothing says there is just one leak.  Though big leaks in hot code are usually pretty obvious.&lt;BR /&gt;&lt;BR /&gt;There have been various leaks in OpenVMS and TCP/IP Services and other products remediated over the years; if you're not current...</description>
      <pubDate>Tue, 14 Oct 2008 15:51:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135636#M26036</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-10-14T15:51:06Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135637#M26037</link>
<description>Please explain to your management that it is time to 'pay the piper'.&lt;BR /&gt;&lt;BR /&gt;This is what you get for sticking with an old OS version on an old platform.&lt;BR /&gt;&lt;BR /&gt;Good luck! (you'll need some :-)&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Hein&lt;BR /&gt;(no points needed for this advice :-(</description>
      <pubDate>Wed, 15 Oct 2008 00:33:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135637#M26037</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-10-15T00:33:17Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135638#M26038</link>
<description>Then I'll ask a different question.&lt;BR /&gt;&lt;BR /&gt;52% of the time is taken by SYSTEM_PRIMITIVES_MIN / MMG_STD$ALLOC_SYSTE&lt;BR /&gt;&lt;BR /&gt;What is this? It can't be that the program is in "alloc" for 52% of the time, I hope.&lt;BR /&gt;&lt;BR /&gt;The problem is that it will be difficult to get a correction in the few months that are left. So, if it could be solved by a reboot I would be very happy.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 15 Oct 2008 05:30:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135638#M26038</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2008-10-15T05:30:50Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135639#M26039</link>
<description>A reboot won't help much if the problem is within the software. It will keep consuming memory until you restart it again, unless you find the hole and fix it.&lt;BR /&gt;Memory fragmentation might be one cause; but given your description, I would look at the code executed in those few minutes and concentrate on the code tree that is executed. Also check for asynchronous code that may get triggered and allocate chunks of memory.</description>
      <pubDate>Wed, 15 Oct 2008 11:47:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135639#M26039</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2008-10-15T11:47:22Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135640#M26040</link>
      <description>&lt;!--!*#--&gt;&lt;BR /&gt;If your application is using LIB$GET_VM / LIB$FREE_VM, a memory leak bug in LIBRTL.EXE&lt;BR /&gt;may be the cause, it got fixed in VMS 7.3-2 LIBRTL ECO 2, I don't know if there is an ECO for VMS 7.3&lt;BR /&gt;&lt;BR /&gt;A LIB$GET_VM may expand the process region when there are sufficient contiguous bytes in the memory zone to satisfy the request. &lt;BR /&gt;&lt;BR /&gt;&lt;A href="ftp://ftp.itrc.hp.com/openvms_patches/alpha/V7.3-2/VMS732_LIBRTL-V0200.txt" target="_blank"&gt;ftp://ftp.itrc.hp.com/openvms_patches/alpha/V7.3-2/VMS732_LIBRTL-V0200.txt&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;You may find the PQUOTA tool useful for analyzing memory leaks, latest version V2.0 runs on VAX, Alpha and Itanium,&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://vms.process.com/scripts/fileserv/fileserv.com?PQUOTA" target="_blank"&gt;http://vms.process.com/scripts/fileserv/fileserv.com?PQUOTA&lt;/A&gt;</description>
      <pubDate>Wed, 15 Oct 2008 12:18:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135640#M26040</guid>
      <dc:creator>kari salminen</dc:creator>
      <dc:date>2008-10-15T12:18:10Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135641#M26041</link>
      <description>There are two choices: either find and fix the leak, or use the $purgws/restart sequence and a large pagefile and/or a reboot when the stock market gets busy.&lt;BR /&gt;&lt;BR /&gt;It may be cheaper to throw some disk storage (pagefile) and some quota and some memory at this case; to buy enough headroom for the daily restart.&lt;BR /&gt;&lt;BR /&gt;If following the former path, the intrepid explorer needs to first find what structures are being leaked.  This can be through examination or through instrumentation.&lt;BR /&gt;&lt;BR /&gt;One obvious variation here is to speed up the migration off of OpenVMS.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Oct 2008 14:13:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135641#M26041</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-10-15T14:13:23Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135642#M26042</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;As Hoff mentioned, "long running" does not necessarily imply "no latent problems".&lt;BR /&gt;&lt;BR /&gt;Without knowing how the application is structured, it is difficult to guess. Having done similar applications in the past, I can see many situations where such a thing could happen. &lt;BR /&gt;&lt;BR /&gt;In this situation, I would suggest both palliative measures and a longer-term fix. For palliative measures, a larger page file is definitely a start, and an automated restart at a quiet time. &lt;BR /&gt;&lt;BR /&gt;For a longer term fix, it might be useful to get one or more sets of process dumps to identify the nature of the "memory leak". It is not unlikely that it is a small, discrete fix.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Wed, 15 Oct 2008 14:33:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135642#M26042</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2008-10-15T14:33:10Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135643#M26043</link>
      <description>When I last saw a problem like this on my system (and referencing the Allocate System Memory) function, it was caused by something growing its working set because of a string processing issue.&lt;BR /&gt;&lt;BR /&gt;By any chance are those two programs written in a language for which the string paradigm is that a string is actually a descriptor that points somewhere in the program heap?  (As opposed to FORTRAN-like, where strings are pre-allocated and fixed length.)&lt;BR /&gt;&lt;BR /&gt;The problem was "thrashing" the program's scratchpad.  You would probably also see a sudden increase in paging/swapping activity just before this problem reared its ugly head.&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Oct 2008 16:19:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135643#M26043</guid>
      <dc:creator>Richard W Hunt</dc:creator>
      <dc:date>2008-10-15T16:19:56Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135644#M26044</link>
<description>Richard is referring to modifications to a dynamic string descriptor; erroneously writing to the descriptor as if it were a static descriptor.&lt;BR /&gt;&lt;BR /&gt;That sequence can certainly result in memory loss, but it's typically a more continuous sort of leak.  That class of bug is not (usually) a load-activated bug, though it could easily be secondary to another bug.&lt;BR /&gt;&lt;BR /&gt;I posted the general code review list over in &lt;A href="http://h71000.www7.hp.com/wizard/wiz_1661.html" target="_blank"&gt;http://h71000.www7.hp.com/wizard/wiz_1661.html&lt;/A&gt; and some other threads referenced there.&lt;BR /&gt;&lt;BR /&gt;Do ramp up on the new platform, too -- whatever that might be.  Life's too short to stay grumpy, and I'm inferring you've got a case of the grumpies today.  :-)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Oct 2008 16:31:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135644#M26044</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-10-15T16:31:17Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135645#M26045</link>
<description>You've said nothing about the architecture of the application, but that might be the cause of the problem.  &lt;BR /&gt;&lt;BR /&gt;Is it AST driven with a lot of network I/O and using a ring buffer that might be stressed by a heavy load?  It's feasible that an initial buffer allocation is insufficient and further allocations are made, but the "expand" flag is not being cleared after an expansion is made.  Perhaps previous input rates have never been enough to trigger this action (i.e. the buffer contents are processed fast enough so that the buffer never needed expansion) and the bug has not previously been exposed.  If you are really lucky you'll have monitoring tools that tell you how many buffers are allocated and used.&lt;BR /&gt;&lt;BR /&gt;Of course I might be barking up the wrong tree because I'm only guessing at how an application that processes Börse data might be structured.</description>
      <pubDate>Thu, 16 Oct 2008 02:06:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135645#M26045</guid>
      <dc:creator>John McL</dc:creator>
      <dc:date>2008-10-16T02:06:06Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135646#M26046</link>
      <description>A tool like Callmon may help you &lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://mvb.saic.com/freeware/vmslt96a/lelegard/callmon_.ada" target="_blank"&gt;http://mvb.saic.com/freeware/vmslt96a/lelegard/callmon_.ada&lt;/A&gt;</description>
      <pubDate>Thu, 16 Oct 2008 06:05:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135646#M26046</guid>
      <dc:creator>labadie_1</dc:creator>
      <dc:date>2008-10-16T06:05:40Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135647#M26047</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;is the application using PTHREADS ? Use ANAL/SYS and SDA&amp;gt; SET PROC FOE_RGS_SRV&lt;BR /&gt;Then try SDA&amp;gt; PTHREAD VM&lt;BR /&gt;&lt;BR /&gt;If there is no error message, because the process is not threaded, is there any lookaside list with a substantial amount of packets ?&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 16 Oct 2008 11:55:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135647#M26047</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2008-10-16T11:55:57Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135648#M26048</link>
      <description>You might also find the Tech Journal No. 7 article "Faking it with Open VMS Shareable Images" by John Gillings useful especially if you think the problem might be an errant LIB$GET_VM call.&lt;BR /&gt;&lt;BR /&gt;The article is online at &lt;A href="http://h71000.www7.hp.com/openvms/journal/v7/faking_it_with_openvms_shareable_images.html" target="_blank"&gt;http://h71000.www7.hp.com/openvms/journal/v7/faking_it_with_openvms_shareable_images.html&lt;/A&gt; and the code seems to be downloadable.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 16 Oct 2008 23:37:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135648#M26048</guid>
      <dc:creator>John McL</dc:creator>
      <dc:date>2008-10-16T23:37:59Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135649#M26049</link>
<description>All I know is that it's written in C.&lt;BR /&gt;&lt;BR /&gt;Volker : pthread is an unknown command in 7.3. Show proc/thr says "1 thread". Maybe retry when the NYSE opens.&lt;BR /&gt;&lt;BR /&gt;Nice article by John G. We used something similar on HP3000 to fool the verification of the license date of a certain product.&lt;BR /&gt;&lt;BR /&gt;Labadie : ada ...&lt;BR /&gt;&lt;BR /&gt;The application boys will not look at the code and we now restart the process when it gets mad. BTW : when it gets mad it only takes a few minutes before the 1.5 GB is taken. Increasing it would only delay the problem a little.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 20 Oct 2008 07:33:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135649#M26049</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2008-10-20T07:33:06Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135650#M26050</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;the SYS$SHARE:PTHREAD$SDA.EXE extension should be available since OpenVMS V7.2-1 ...&lt;BR /&gt;&lt;BR /&gt;You can also check with SDA&amp;gt; SHOW PROC/CHAN, if PTHREAD$RTL is an activated image for this process.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Mon, 20 Oct 2008 07:51:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135650#M26050</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2008-10-20T07:51:13Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135651#M26051</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;"The applidcation boys will not look at the code and we now restart the process when it gets mad. BTW : when it gets mad it only takes a few minutes before the 1.5 GB is taken. Increasing it would only delay the problem a little."&lt;BR /&gt;&lt;BR /&gt;I have had a few of those at clients over the years. Regrettably, the solution has often been to identify the failing code independently, and propose a fix. Not the best way to work, but it can be the most effective way to deal with organizational politics.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Mon, 20 Oct 2008 08:13:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135651#M26051</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2008-10-20T08:13:11Z</dc:date>
    </item>
    <item>
      <title>Re: Memory leak</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135652#M26052</link>
<description>Bob,&lt;BR /&gt;&lt;BR /&gt;Did that once (a network connection was not closed). But after several years it's still not in production because of testing requirements.&lt;BR /&gt;&lt;BR /&gt;In November we will have DRP tests and then I can reboot the node. Maybe it gets solved that way.&lt;BR /&gt;&lt;BR /&gt;Volker,&lt;BR /&gt;&lt;BR /&gt;I shortened the command to PTHR but this didn't work. When I typed it in full it worked. Sorry. Same conclusion : 1 thread.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 20 Oct 2008 10:47:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/memory-leak/m-p/5135652#M26052</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2008-10-20T10:47:12Z</dc:date>
    </item>
  </channel>
</rss>

