Operating System - OpenVMS

Re: shareable image with dynamic data

Occasional Contributor

shareable image with dynamic data

Is it possible to dynamically allocate memory for data in a shareable image? I.e., can the size of a shareable image increase during execution?

Occasional Contributor

Re: shareable image with dynamic data

I am unable to access the data stored in a dynamically allocated variable of a shareable image. Is there any way to retain and access such data?

chk.h ==>
extern int MAX_VAL;
extern int *CHK_ARR;
void chk_init(void);

chk.c ==> shareable image
#include <stdlib.h>
#include "chk.h"
int MAX_VAL = 100;
int *CHK_ARR = NULL;

/* Assignments at file scope are not legal C; the allocation
   has to happen in a function called at run time. */
void chk_init(void)
{
    CHK_ARR = (int *)malloc(sizeof(int) * MAX_VAL);
}

Data stored (in the main image, after calling chk_init()):

short i;

for (i = 0; i < MAX_VAL; i++)
    CHK_ARR[i] = i;

for (i = 0; i < MAX_VAL; i++)
    printf("\n%d", CHK_ARR[i]);

Honored Contributor

Re: shareable image with dynamic data

Presuming recent OpenVMS versions and C, the use of mmap() and related C standard library calls would be the typical practice here for many applications, and would be platform portable.

For related topics in synchronization, shared access, and avoiding the bugs that arise with shared memory, there's a chapter or three in the programming concepts manual on shared memory operations that are arguably obligatory reading. (Shared memory synchronization errors can be subtle, and the errors can make customers quick to anger, too.)

Beyond moving to C library features and mechanisms, and above the OpenVMS system calls, consider moving to a higher-level layer for these application tasks, too; whether based on 0mq (ZeroMQ) or otherwise. This gets you out of the lowest of the low-level programming, as that whole area tends to be fussy. (Voice of experience here: the bugs in shared-memory code can be quite subtle.) Using an available middleware library can make the code vastly easier to implement, deploy, and support. And, if chosen appropriately, rather more portable, too.

If you don't want to use portable C interfaces (and examples of mmap are posted around the Internet), then see examples such as this C source code example of using 32-bit and 64-bit global sections and of virtual addressing:


And for a quick intro to using (rather more problematic, and rather less portable) data commons, see:


If you can't use a higher-level middleware layer, then use mmap and dlopen and C library calls. Failing that, look to global sections, and then (arguably only as a last resort, since they are position-dependent and can require relinking everything when changes are needed) Fortran-style commons; the approach from that old OpenVMS Ask The Wizard topic.

Some other related discussions, of various topics available, including (listed first) an example of using mmap() on Linux platforms:



Occasional Contributor

Re: shareable image with dynamic data

Thank you for the reply.

The problem I am facing is due to data in the shareable image being declared as


Over a period of time, the data that needs to be stored in the shareable image exceeds the fixed size, and the shareable image and the entire application need to be re-created.

Only read access is done to the data in the shareable image.

All access in the application is done by

Please suggest a way to retain this access method, as otherwise major changes to the application will be required.

After going through the documentation for shareable images, I am still unable to understand why data declared as pointer variables cannot be shared among processes.

Thank you.
Respected Contributor

Re: shareable image with dynamic data

This is not a problem that will easily be handled via this forum as an understanding of the context is necessary. We can provide some guidelines, but this is more of an actual implementation that should be discussed in full. There are many possibilities for solution. For example, perhaps a pointer can be stored rather than the data directly, with the storage area created on the fly by the first process. Global sections come to mind as well.

Perhaps a hint of the amount of data you expect to have, and how much you expect it to grow, would help as well.

Honored Contributor

Re: shareable image with dynamic data

Insert the array declaration where the "Counter" variable is declared in this example, and specifically in the sys$scratch:ccmn.h module that gets built by the procedure in this example:



#pragma environment save
#pragma nomember_alignment
#pragma extern_model common_block nopic,shr

struct foo {
    int Counter;
    int MyArray[10];
};
extern struct foo CCMN_STRUCT;

#pragma environment restore

And you'll then have CCMN_STRUCT.MyArray[0] and other such references within the remaining code.

Make that edit, rebuild, and use the VMS debugger (or add some printf statements, or whatever) to have a look at what you've done within the code.

I might infer some discomfort with C programming and C pointers, and that would rule out some of the more adaptable and flexible techniques here, too.
Jess Goodman
Esteemed Contributor

Re: shareable image with dynamic data

I recommend that you separate out your shareable data into its own shareable image. Code should be in read-only shareable images.

I like to use Macro-32 to create sharable data images, since it is easy to do 32-bit address calculations in it. Here's an example with three expandable longword arrays.

;Edit this section to add or expand arrays
ARRAY1_LEN = 500
ARRAY2_LEN = 2000
ARRAY3_LEN = 4280

;Calculate address of arrays

;Create octa-word aligned global shared data section
;Access arrays through these pointers

;Actual read/write data start here


Modify your application (once only) to access the arrays via the vectors pointing to them at the beginning of the sharable image.
I have one, but it's personal.
Honored Contributor

Re: shareable image with dynamic data

There's a classic mistake in all that Macro32 code. One I made when I first got into this, too. There's no header information, and no structure version. Which usually means absolute mayhem when there's a cross-version access into the data.

Given my choice of designs here, I'd likely chuck the whole COMMON design right out the airlock (having been burned by, or having watched folks lured onto the rocks by, that siren more times than I care to admit) and use an RMS file with global buffers enabled. That deals with all the cruft for you, including the interlocking. Fast. Effective. Clustered. And it works.

This if I didn't have access to entry-level or add-on features from other platforms; a distributed database, a scalable file system, grid services, etc.
John Gillings
Honored Contributor

Re: shareable image with dynamic data

I go one step further than Hoff. Start by designing an abstract API to access your data. It may be as simple as GET and PUT routines with an index value or key. Implement that using RMS files and get your application working using the API.

If your application performs acceptably, you're done! If not, you can implement a different mechanism, under the same API which exploits idiosyncrasies of your data to improve performance. Maybe using a global section, maybe a writeable COMMON block, whatever works. BUT, you don't expose the details of the data structure to the application. Therefore you can change it without having to change the application.

A crucible of informative mistakes
John McL
Trusted Contributor

Re: shareable image with dynamic data

If I understand you correctly, the data area might expand, and you want all images that access the data to be able to subsequently and dynamically access the expansion.

FWIW, I see two choices
(a) use global sections, and dynamically have the processes either unmap an old section and map a new, expanded replacement, or use multiple sections and code in a way that steps through all existing sections. Putting some control data at the start of the first (or only) section is recommended, although conceivably you could notify the processes of data expansion via locks, with the lock value block giving the new data count. Size your global sections generously and you'll avoid too much re-mapping. Your code for handling the data access and remapping would sensibly be in a shareable image.

(b) use a server process that provides the data to the other processes via mailboxes or similar, and have only the server process handle the expansion of the data. The server process should also handle incoming data, because it needs to manage the expansion and keep track of the addresses in memory. Performance might be an issue here, especially with the queueing to request or supply data.