mmap and openvms
05-11-2003 11:40 AM
I'm trying to port the Electric Fence library from Linux to OpenVMS. Electric Fence is a simple malloc/free debugger. The library uses the 'mmap' and 'mprotect' calls.
I can use the library (statically linked) with my test programs that allocate small amounts of memory.
I run into trouble when I try to allocate somewhat *more* memory; there seems to be a limit around 200 MB. It can be reached with many small chunks or a few large ones.
What puzzles me is that 'mmap' returns MAP_FAILED with 'errno' == -1, which is not documented.
Is there any limitation on how many times I may call 'mmap'?
Can anyone here explain the error?
I'm running this on an OpenVMS 7.3 machine with 1 GB of RAM and lots of free paging/swap file space!
Below is the actual call, somewhat modified from the original in Electric Fence...
allocation = (caddr_t) mmap(
        NULL,                          /* let the system pick the address */
        (size_t) size,                 /* cast to size_t, not int, for large sizes */
        PROT_WRITE,
        MAP_PRIVATE | MAP_ANONYMOUS,
        -1,                            /* no file; anonymous mapping */
        0);
Regards,
- Ingvaldur
05-12-2003 06:45 AM
05-12-2003 08:34 AM
Re: mmap and openvms
I increased the page-file quota (PGFLQUOTA) from 300000 to 3000000 and could thus allocate a lot more memory than before :-)
I again hit the roof after allocating 65298 chunks of 1024 KB, but now I know that adjusting PGFLQUOTA and similar parameters can affect the results.
Thanks a lot.
Regards,
- Ingvaldur
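For reference, the quota in question can be inspected and raised like this on OpenVMS (the username below is a placeholder, and running AUTHORIZE requires suitable privileges):

```
$ SHOW PROCESS/QUOTA        ! lists the current "Paging file quota"
$ MCR AUTHORIZE
UAF> MODIFY SOMEUSER/PGFLQUOTA=3000000
UAF> EXIT
```

The changed quota takes effect at the next login of the modified user.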
05-12-2003 11:12 PM
Re: mmap and openvms
05-19-2003 07:56 AM
Re: mmap and openvms
> I increased the page-file quota (PGFLQUOTA) from 300000 to 3000000 and could thus allocate a lot more memory than before :-)
Good.
> I again hit the roof after allocating 65298 chunks of 1024 KB, but now I know that adjusting PGFLQUOTA and similar parameters can affect the results.
Yeah, well, the knob this time is GBLSECTIONS in SYSGEN, but you have reached the hard system-wide maximum of 65535.
Each mmap creates a (shared/global) memory section. OpenVMS applications tend to use those by the hundreds, not by the thousands.
See SYS$CRMPSC and friends in the system services manual.
Your usage was not anticipated, and you may want to look for alternative, more effective ways to create holes in the virtual memory space. Maybe really create guard pages by changing page protection? Maybe pre-allocate a large chunk and SYS$DELTVA pages in the middle? Or maybe, just maybe, the problem has already been solved, as Hoff points out with the debugger suggestion.
hth,
Hein.
(display from old 7.1 system)
$ mcr sysgen
SYSGEN> SHOW GBLSECTIONS
Parameter Name    Current    Default     Min.     Max.   Unit      Dynamic
--------------    -------    -------    -----    -----   ----      -------
GBLSECTIONS           631        250       80    65535   Sections
06-18-2003 12:25 AM
Re: mmap and openvms
A long time ago I wrote a FORTRAN program that built up an array, allocating memory for each element separately, as required. If you don't take precautions, the memory won't be contiguous, so you cannot access the data as an array - which was what I required.
I found a solution in the following method:
A call to LIB$CREATE_VM_ZONE creates a zone of contiguous memory of a size YOU specify. It returns an "ident". In fact, it just creates the address space but does NOT allocate memory yet. Better still: these are virtual addresses, which can therefore range up to 2 GB (or more), at least far more than you can get with global sections.
Then allocate memory using LIB$GET_VM, where the last parameter is the ident just returned.
That way, all your data is in ONE memory area, and if all chunks are the same size, you can access the data as an array by specifying the address of the first element as the address of that array.
Nice side effect: you don't have to release each element separately. Just call LIB$FREE_VM_ZONE, which removes the complete area - and hence frees ALL allocated memory in one call.
Drawback: it's process-local, so you need another mechanism for passing this data to other programs. Furthermore, debugging is quite a task, but it can be done - if you know where to look.
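The zone idea can be illustrated in portable C as a simple fixed-size arena: one contiguous block carved into equal elements, usable as an array, released in a single call. This is a sketch of the concept only - the names are hypothetical, and unlike LIB$CREATE_VM_ZONE, plain malloc commits the memory up front rather than lazily reserving virtual address space:

```c
#include <stdlib.h>

/* One contiguous region carved into equal-size elements. */
struct zone {
    char  *base;
    size_t elem_size;
    size_t used, capacity;        /* counted in elements */
};

/* Reserve room for 'max_elems' elements of 'elem_size' bytes each. */
static int zone_create(struct zone *z, size_t elem_size, size_t max_elems)
{
    z->base = malloc(elem_size * max_elems);
    z->elem_size = elem_size;
    z->used = 0;
    z->capacity = max_elems;
    return z->base != NULL;
}

/* Hand out the next element; contiguous, like LIB$GET_VM from a zone. */
static void *zone_get(struct zone *z)
{
    if (z->used == z->capacity)
        return NULL;
    return z->base + z->elem_size * z->used++;
}

/* Release everything at once, like LIB$FREE_VM_ZONE. */
static void zone_free(struct zone *z)
{
    free(z->base);
    z->base = NULL;
}
```

Because consecutive zone_get calls return adjacent addresses, z->base can be treated as the start of an ordinary array of the elements handed out so far.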
OpenVMS Developer & System Manager