<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: mmap and openvms in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970295#M30928</link>
    <description>The first thing I'd look at is PGFLQUO.  Check your peak virtual size using 'show proc/acc'.&lt;BR /&gt;Regards, /jeff</description>
    <pubDate>Mon, 12 May 2003 13:45:43 GMT</pubDate>
    <dc:creator>Jeff Chisholm</dc:creator>
    <dc:date>2003-05-12T13:45:43Z</dc:date>
    <item>
      <title>mmap and openvms</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970294#M30927</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I'm trying to port the Electric Fence library from Linux to OpenVMS. Electric Fence is a simple malloc/free debugger. The library uses the 'mmap' and 'mprotect' calls.&lt;BR /&gt;&lt;BR /&gt;I can use the library (statically linked) with my test programs that allocate small amounts of memory.&lt;BR /&gt;&lt;BR /&gt;I run into trouble when I try to allocate some *more* memory; there seems to be a limit around 200 MB. This can be the result of many small chunks or a few large chunks.&lt;BR /&gt;&lt;BR /&gt;What puzzles me is that 'mmap' returns MAP_FAILED with 'errno' == -1, which is not documented.&lt;BR /&gt;&lt;BR /&gt;Are there any limitations on how many times I may call 'mmap'?&lt;BR /&gt;&lt;BR /&gt;Can anyone here explain the error?&lt;BR /&gt;&lt;BR /&gt;I'm running this on an OpenVMS 7.3 machine with 1 Gbyte of RAM and lots of paging file / swap file free!&lt;BR /&gt;&lt;BR /&gt;Below is the actual call, somewhat modified from the original in Electric Fence...&lt;BR /&gt;&lt;BR /&gt; allocation = (caddr_t) mmap(&lt;BR /&gt; NULL&lt;BR /&gt; ,(int)size&lt;BR /&gt; ,PROT_WRITE&lt;BR /&gt; ,MAP_PRIVATE|MAP_ANONYMOUS&lt;BR /&gt; ,-1&lt;BR /&gt; ,0);&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;- Ingvaldur</description>
      <pubDate>Sun, 11 May 2003 18:40:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970294#M30927</guid>
      <dc:creator>Ingvaldur Sigurjonsson</dc:creator>
      <dc:date>2003-05-11T18:40:50Z</dc:date>
    </item>
    <item>
      <title>Re: mmap and openvms</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970295#M30928</link>
      <description>The first thing I'd look at is PGFLQUO.  Check your peak virtual size using 'show proc/acc'.&lt;BR /&gt;Regards, /jeff</description>
      <pubDate>Mon, 12 May 2003 13:45:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970295#M30928</guid>
      <dc:creator>Jeff Chisholm</dc:creator>
      <dc:date>2003-05-12T13:45:43Z</dc:date>
    </item>
    <item>
      <title>Re: mmap and openvms</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970296#M30929</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;I increased the Pgflquo from 300000 -&amp;gt; 3000000 and thus, could allocate a lot more memory than before :-)&lt;BR /&gt;&lt;BR /&gt;I again hit the roof after allocating 65298 * 1024kb chunks but now I know that adjusting the Pgflquo and similar parameters can affect the results.&lt;BR /&gt;&lt;BR /&gt;Thanks a lot.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;- Ingvaldur</description>
      <pubDate>Mon, 12 May 2003 15:34:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970296#M30929</guid>
      <dc:creator>Ingvaldur Sigurjonsson</dc:creator>
      <dc:date>2003-05-12T15:34:53Z</dc:date>
    </item>
    <item>
      <title>Re: mmap and openvms</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970297#M30930</link>
      <description>Have you seen the debugger's built-in heap analyzer support?</description>
      <pubDate>Tue, 13 May 2003 06:12:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970297#M30930</guid>
      <dc:creator>Stephen Hoffman</dc:creator>
      <dc:date>2003-05-13T06:12:02Z</dc:date>
    </item>
    <item>
      <title>Re: mmap and openvms</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970298#M30931</link>
      <description>&lt;BR /&gt;&lt;BR /&gt;&amp;gt; I increased the Pgflquo from 300000 -&amp;gt; 3000000 and thus, could allocate a lot more memory than before :-)&lt;BR /&gt;&lt;BR /&gt;Good.&lt;BR /&gt;&amp;gt; I again hit the roof after allocating 65298 * 1024kb chunks but now I know that adjusting the Pgflquo and similar parameters can affect the results.&lt;BR /&gt;&lt;BR /&gt;Yeah, well, the knob this time is GBLSECTIONS in SYSGEN,&lt;BR /&gt;but you reached the hard max of 65535 system wide.&lt;BR /&gt;&lt;BR /&gt;Each mmap creates a (shared/global) memory section. OpenVMS applications tend to use those by the hundreds, not thousands.&lt;BR /&gt;See SYS$CRMPSC and friends in the system services manual.&lt;BR /&gt;Your usage was not anticipated, and you may want to look for alternative, more effective solutions to create holes in the virtual memory space. Maybe really create guard pages by changing protection? Maybe pre-allocate a large chunk and SYS$DELTVA pages in the middle? Or maybe, just maybe, the problem has already been solved, as Hoff points out with the debugger suggestion.&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;    Hein.&lt;BR /&gt;&lt;BR /&gt;(display from an old 7.1 system)&lt;BR /&gt;$ mcr sysgen&lt;BR /&gt;SYSGEN&amp;gt;  SHOW GBLSECTIONS&lt;BR /&gt;Parameter Name           Current    Default     Min.      Max.     Unit  Dynamic&lt;BR /&gt;--------------           -------    -------    -------   -------   ----  -------&lt;BR /&gt;GBLSECTIONS                   631        250        80      65535 Sections</description>
      <pubDate>Mon, 19 May 2003 14:56:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970298#M30931</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2003-05-19T14:56:38Z</dc:date>
    </item>
    <item>
      <title>Re: mmap and openvms</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970299#M30932</link>
      <description>As Hein stated: look for an alternative.&lt;BR /&gt;A long time ago I wrote a FORTRAN program that built up an array, allocating memory for each element separately, as required. If you don't take precautions, the memory won't be contiguous, so you cannot access the data as an array - which was what I required.&lt;BR /&gt;I found a solution in the following method:&lt;BR /&gt;A call to LIB$CREATE_VM_ZONE creates a zone of contiguous memory of a size YOU specify. It returns an "ident". In fact, it just creates the address space but does NOT allocate memory yet. More: these are virtual addresses, and the zone can therefore range up to 2G (or more) - at least far more than you can do with global sections.&lt;BR /&gt;Then allocate memory using LIB$GET_VM, where the last parameter is the ident just returned.&lt;BR /&gt;That way, all your data is in ONE memory area, and if all chunks are the same size, you can access the data as an array, specifying the address of the first element as the address of the array.&lt;BR /&gt;Nice side effect: you don't have to release each element separately. Just call LIB$FREE_VM_ZONE, which will remove the complete area and hence free ALL allocated memory in one call.&lt;BR /&gt;Drawback: it's process-local, so you need another mechanism for passing this data to other programs. Furthermore, debugging is quite a task, but it can be done - if you know where to look.&lt;BR /&gt;</description>
      <pubDate>Wed, 18 Jun 2003 07:25:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mmap-and-openvms/m-p/2970299#M30932</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2003-06-18T07:25:30Z</dc:date>
    </item>
  </channel>
</rss>

