
02-19-2008 09:38 PM
P0 space of memory leak
I have a problem with P0 space leaking memory.
I am using the sys$mgblsc service to map a global section and sys$deltva to free that memory. The code flow is as follows:
main()
{
/* 1 */ RECALL:
/* 2 */     [ Wait for a request from the user ]
            /* Got the request from the user */
/* 3 */     [ Check whether the global section is already mapped ]
            /* The first time nothing is mapped, so the following call is skipped */
/* 4 */     status = SYS$DELTVA ( pc_gs_bounds, retadr, 0 ) ;
            /* Map the global section */
/* 5 */     status = SYS$MGBLSC ( inadr,
                                  pc_gs_bounds, 0,
                                  sect_mask,
                                  &cache_gs_ds,
                                  gs_ident, 0 ) ;
/* 6 */     [ Now execute the remaining part of the code ]
/* 7 */     [ Everything done, goto RECALL ]
}
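For reference, a minimal compilable sketch of the same cycle, as I understand it. The global section name, the flags in sect_mask, and the simulated request loop are placeholders I have assumed, not the real application code:

#include <stdio.h>
#include <descrip.h>
#include <secdef.h>
#include <ssdef.h>
#include <starlet.h>

/* Placeholder global section name; the real name is not shown above. */
static $DESCRIPTOR(cache_gs_ds, "MY_GLOBAL_SECTION");

static unsigned int inadr[2] = { 0x200, 0x200 };  /* with SEC$M_EXPREG only bit 30 (P0 vs P1) matters */
static unsigned int pc_gs_bounds[2];              /* start/end addresses returned by $MGBLSC */
static unsigned int retadr[2];                    /* range actually deleted by $DELTVA */
static int mapped = 0;

int main(void)
{
    unsigned int status;
    unsigned int sect_mask = SEC$M_EXPREG | SEC$M_WRT;   /* assumed flags */
    int requests = 2;                                    /* simulate two user requests */

    while (requests-- > 0)            /* 1: RECALL */
    {
        /* 2: [ wait for a request from the user ] */

        /* 3/4: if a previous mapping exists, delete it */
        if (mapped)
        {
            status = sys$deltva(pc_gs_bounds, retadr, 0);
            if (!(status & 1)) return status;
            mapped = 0;
        }

        /* 5: map the global section */
        status = sys$mgblsc(inadr, pc_gs_bounds, 0, sect_mask,
                            &cache_gs_ds, 0, 0);
        if (!(status & 1)) return status;
        mapped = 1;

        printf("mapped at %08X..%08X\n", pc_gs_bounds[0], pc_gs_bounds[1]);

        /* 6/7: [ use the section, then loop back to RECALL ] */
    }
    return SS$_NORMAL;
}

Each pass through the loop repeats the $DELTVA / $MGBLSC pair, which is where the FREP0VA growth shows up.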
I am running the debug image of the code with the Heap Analyzer, and I am getting some confusing results:
Giving the request for the FIRST time:
======================================
The program counter is at 2, waiting for a request from the user. I enter the request; the program checks whether the memory is already mapped (at 3). Since it is not mapped the first time, 4 is not executed and control goes directly to 5. After 5 executes, the global section is mapped into the P0 space of the process.
From Heap Analyzer:
Mapped at following location:
00FB8000+0E2C2000=0F27A000 SYS$MGBLSC - "XXXXXXXX" (P0 Region).
From f$getjpi(pid,"FREP0VA")
Before step 5: 00A8F4DE
After step 5: 0F2E0000
Value of pc_gs_bounds[0:1] from debug image:
[0]: 16482304
[1]: 254255103
After this, execution completes and the program returns to 2, waiting for the next user request. Everything works fine up to this point.
Giving the request for the SECOND time:
=======================================
Now I give the request a second time. Since the memory was mapped the previous time, the program goes to step 4. The return value at 4 is SS$_SUCCESS, but I get contradictory results from the Heap Analyzer and f$getjpi.
From Heap Analyzer:
Memory is unmapped, but
From f$getjpi(pid,"FREP0VA")
It still gives: 0F2E0000
Values of pc_gs_bounds and retadr at 4 are:
decedi$pc_gs_bounds[0:1]
[0]: 16482304
[1]: 254255103
retadr[0:1]
[0]: 16482304
[1]: 254255103
Now it comes to step 5. After step 5 executes, the results are as follows:
From Heap Analyzer:
Memory is mapped again, at the following location:
0F2E0000+0E2C2000=1D5A2000 SYS$MGBLSC - "XXXXXXXX" (P0 Region).
From f$getjpi(pid,"FREP0VA")
Its value has increased to: 1D5A2000
Value of pc_gs_bounds[0:1] from debug image:
[0]: 254672896
[1]: 492445695
Can anyone help me understand:
1) Where is the P0 memory being leaked above? Even after the call to sys$deltva, f$getjpi shows 0F2E0000, and after the next call to sys$mgblsc it increases to 1D5A2000.
2) When we map the same global section again, why is it not mapped at the same P0 location that we just freed at step 4?
I hope my questions are clear. Please let me know if any clarification is required or if I have made a mistake somewhere.
Regards,
ajaydec
02-20-2008 01:32 AM
Re: P0 space of memory leak
Please read through this thread:
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1198013
which discusses the checkerboarding effect of repeated allocation and deallocation of memory in a process.
It should help you understand what you are seeing in your own application.
Duncan
02-20-2008 01:36 AM
Re: P0 space of memory leak
Given no more info than what you have provided, my GUESS is that sect_mask has the SEC$M_EXPREG bit set.
From the SSREF $MGBLSC description:
"When the SEC$M_EXPREG flag is set, the second inadr longword is ignored, while bit 30 (the second most significant bit) of the first inadr longword is used to determine the region of choice. If the bit is clear, P0 is chosen; if the bit is set, P1 is chosen."
When that bit is set, and the inadr contains a P0 address, the virtual address used begins at the page above the highest current P0 page that is mapped.
It appears that something your program did after the first global section was mapped allocated some space in P0. Look at the values returned by the first call: the highest address was 254255103 decimal, or 0x0F279FFF (note: the high end of a page). When you mapped the second time, the low end was at 254672896 decimal, or 0x0F2E0000 (note: the start of a new page).
So something mapped 417792 bytes (51 Alpha pages) after the address space that you mapped the first time.
237772800 bytes (29025 Alpha Pages) mapped first time
237772800 bytes (29025 Alpha Pages) mapped second time
Are you mapping different global sections each time through? If not, why are you deleting the virtual address space just to remap it? If it is a different global section, but every global section is the same size, then clear SEC$M_EXPREG after mapping it the first time, and just reuse the address space that was returned the first time. If they are not all the same size, then you have a problem: you will have to allocate enough for the largest global section and reuse the space starting at the same base address.
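A minimal sketch of that suggestion, assuming SEC$M_WRT access, that every section is the same size, and that the old mapping is deleted before the remap (the descriptor argument and names are placeholders, not from this thread):

#include <descrip.h>
#include <secdef.h>
#include <starlet.h>

static unsigned int saved_bounds[2];   /* remembered from the first $MGBLSC */
static int first_time = 1;

unsigned int map_section(struct dsc$descriptor_s *gs_ds)
{
    unsigned int status, retadr[2];
    unsigned int inadr[2] = { 0x200, 0x200 };  /* bit 30 clear -> P0 region when SEC$M_EXPREG is set */

    if (first_time)
    {
        /* first call: let OpenVMS pick the P0 range, and remember it */
        status = sys$mgblsc(inadr, saved_bounds, 0,
                            SEC$M_EXPREG | SEC$M_WRT, gs_ds, 0, 0);
        if (status & 1) first_time = 0;
    }
    else
    {
        /* later calls: delete the old mapping, then remap at exactly the
           same range, with SEC$M_EXPREG clear and the remembered bounds
           passed as the explicit inadr */
        status = sys$deltva(saved_bounds, retadr, 0);
        if (!(status & 1)) return status;

        status = sys$mgblsc(saved_bounds, retadr, 0,
                            SEC$M_WRT, gs_ds, 0, 0);
    }
    return status;
}

Because the remap reuses the remembered range, the end of P0 (FREP0VA) stops growing with each request.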
Jon
02-20-2008 04:27 AM
Re: P0 space of memory leak
So OpenVMS grows the end of P0 virtual address space to accommodate the section.
The $DELTVA then unmaps the section, freeing the address space, but OpenVMS does not know whether the program intends to reuse that address range to map something else later.
Rule: $DELTVA will reduce (shrink) the end of P0 VA IF, and ONLY IF, the deleted range ends at the current end of P0, because in that case a future remap can always be accommodated. In your case something was mapped above the section (the deleted range ends at 0F279FFF while FREP0VA is 0F2E0000), so the end cannot shrink.
To verify this, do an F$GETJPI or SYS$GETJPI for FREP0VA.
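For example, a small routine like the following (a sketch, not from the thread) can print FREP0VA from inside the program, before and after each $DELTVA / $MGBLSC call:

#include <stdio.h>
#include <jpidef.h>
#include <starlet.h>

/* Return the first free P0 virtual address (the JPI$_FREP0VA item),
   or 0 if the $GETJPIW call fails. */
unsigned int get_frep0va(void)
{
    unsigned int frep0va = 0;
    unsigned short retlen = 0;
    struct {
        unsigned short buflen, itmcod;
        void *bufadr;
        unsigned short *retlenadr;
    } itmlst[2] = {
        { sizeof frep0va, JPI$_FREP0VA, &frep0va, &retlen },
        { 0, 0, 0, 0 }                 /* terminating entry */
    };
    unsigned int status = sys$getjpiw(0, 0, 0, itmlst, 0, 0, 0);
    return (status & 1) ? frep0va : 0;
}

/* usage:  printf("FREP0VA = %08X\n", get_frep0va()); */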
What might have allocated memory after your section?
- LIB$GET_VM / malloc. Workaround: pre-allocate (malloc / LIB$GET_VM) all of what the program is expected to need in most circumstances, then immediately return it, so the address space is already reserved for the program (see the sketch after this list).
- RMS internal structures (buffers, IFABs, IRABs, ...). Workaround: increase the SYSGEN parameter IMGIOCNT and/or LINK with an IOSEGMENT option large enough for most usage.
- RMS global buffers and file statistics blocks. They suffer from the same creep. This was fixed in OpenVMS 8.2, where RMS starts to remember the sections (address ranges) it has used.
So... the program may need to learn to remember some VA ranges.
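For the first item, a sketch of the pre-allocation idea; the 32 MB figure is just a placeholder for "most of what the program will need", and the call should happen early, before the global section is first mapped:

#include <stdlib.h>

/* Expand the C RTL heap once at startup and give the block back.
   free() returns the block to the RTL's free list, not to the OS,
   so later malloc calls are satisfied from address space that already
   sits below the global section instead of extending the end of P0. */
void preexpand_heap(void)
{
    void *p = malloc(32 * 1024 * 1024);   /* placeholder: size the program is expected to need */
    if (p != NULL)
        free(p);
}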
Duncan, I believe that in the problem you reference, $DELTVA and FREP0VA do not play a role.
Hope this helps some,
Hein van den Heuvel (at gmail dot com)
HvdH Performance Consulting
02-20-2008 04:00 PM
Re: P0 space of memory leak
Since you don't have complete control over all allocations in P0 space, you can't predict how your $MGBLSC and $DELTVA will interact with the rest of virtual address space. As Hein has suggested, your intention won't work unless the VA you're deleting remains at the end of P0 space.
It may be better to allocate a chunk of P0 space large enough for your largest expected requirement, then map the global sections as required, reusing the same address space.
Alternatively, move the section into P2 space using $MGBLSC_64 and $DELTVA_64 (though I think I'd leave out the $DELTVA part, as it just makes things more complicated, and you're going to recreate the space immediately anyway).
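A sketch of the first idea for the P0 case, assuming the default overmap behaviour (SEC$M_NO_OVERMAP not set); the pagelet count and names are placeholders:

#include <descrip.h>
#include <secdef.h>
#include <starlet.h>

static unsigned int reserved[2];   /* start/end of the range reserved at startup */

/* Reserve, once, a P0 range big enough for the largest expected section.
   pagelets = size of the largest section in 512-byte pagelets. */
unsigned int reserve_range(unsigned int pagelets)
{
    return sys$expreg(pagelets, reserved, 0, 0);   /* region 0 = P0 */
}

/* Map a global section over the reserved range; SEC$M_EXPREG is clear,
   so the reserved bounds are used as the explicit inadr every time, and
   retadr reports the range actually mapped. */
unsigned int map_into_range(struct dsc$descriptor_s *gs_ds)
{
    unsigned int retadr[2];
    return sys$mgblsc(reserved, retadr, 0, SEC$M_WRT, gs_ds, 0, 0);
}

Each new section then lands in the same reserved range, so FREP0VA stays put between requests.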
02-20-2008 06:24 PM
Re: P0 space of memory leak
Moving to processes (and a network) means you can incrementally expand or contract capacity.