Operating System - OpenVMS
Characteristics of a Shared Disk Device

SOLVED
HDS
Frequent Advisor

Characteristics of a Shared Disk Device

Hello.

Re: OpenVMS V8.3

When one performs a $ SHOW DEVICE/FULL on a disk device mounted /CLUSTER or /SYSTEM (across a set of cluster members), one line of the output lists the cluster nodes that also have the device's volume mounted.

I am looking to obtain that same line item information by either a direct or indirect use of a DCL lexical (F$GETDVI, F$DEVICE, etc) or a system service ($GETDVI, $DEVICE_SCAN, etc).

I'd like to obtain this information from either DCL or Fortran-77/Fortran-90.

In short, the scenario is as follows:
- Disk volume is mounted /SYSTEM on Node1
- The disk is then mounted privately on Node2
- The private mount request fails because the device is already mounted.

I'd be looking to test that device/volume before the $MOUNT is tried. The "MNT" and "AVL" item codes for F$GETDVI indicate that it is okay to mount.
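For reference, the pre-check I have in mind looks roughly like the sketch below (the device name is just a placeholder); the catch is that on Node2 these item codes reflect only local state, so they don't reveal Node1's /SYSTEM mount:

```
$ ! Rough sketch of the pre-check (DKA100: is a placeholder device name)
$ dev = "DKA100:"
$ ! MNT is true only if the volume is mounted on this node; AVL is true
$ ! if the device is available -- neither exposes Node1's /SYSTEM mount.
$ IF .NOT. F$GETDVI(dev,"MNT") .AND. F$GETDVI(dev,"AVL")
$ THEN
$     MOUNT/NOASSIST 'dev' MYVOL   ! private mount; can still fail
$ ENDIF
```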

I am aware that I can test the returned error status from the MOUNT...but I am looking for a cleaner way, if at all possible.

Many thanks for any assistance.

-H-
5 REPLIES
Robert Brooks_1
Honored Contributor
Solution

Re: Characteristics of a Shared Disk Device

DVI$_MOUNTCNT_CLUSTER is a newly added $GETDVI item code that returns the number of nodes that have a device mounted.

This was added for V8.3-1H1, and backported all the way back to V7.3-2. You'd need both a SYS and a DCL kit to use this from DCL. The backport covers only the SYS$ and F$ variants; the LIB$ form does not include the new item code.

For use from DCL, you simply use the item code as normal. For use from a real language, you'll need to locally define the value of DVI$_MOUNTCNT_CLUSTER to be 494 (decimal).

Normally, this value is defined in $DVIDEF in the language-specific STARLET libraries.
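Assuming the patched kits are installed, a minimal DCL check along these lines should work (the device and volume names are placeholders):

```
$ ! Sketch: test whether any cluster node already has the volume mounted
$ dev = "DKA100:"
$ nmounts = F$GETDVI(dev,"MOUNTCNT_CLUSTER")
$ IF nmounts .GT. 0
$ THEN
$     WRITE SYS$OUTPUT "''dev' mounted on ''nmounts' node(s); skipping private mount"
$ ELSE
$     MOUNT/NOASSIST 'dev' MYVOL   ! private mount
$ ENDIF
```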

-- Rob
HDS
Frequent Advisor

Re: Characteristics of a Shared Disk Device

Thank you.
Thank you.
Thank you.

That should do it for me.

I will leave this thread open for a day or so...just to see if some other items are posted.

In the meantime, I will be requesting the mentioned kits.

-H-
Hoff
Honored Contributor

Re: Characteristics of a Shared Disk Device

You're already using what I would consider the cleanest and best approach.

It's perfectly feasible to have a disk mounted more than once and entirely in parallel; multiple mounts are fully supported, so long as the mount operations are compatible. This can be quite useful, too, as it is a technique that prevents a disk from being dismounted out from underneath an application.

Put another way, it can be quite possible to MOUNT a disk that has a non-zero mount count, depending on the MOUNT command(s) involved.

I'd suggest you simply continue to use the $MOUNT, and catch the errors. That's the cleanest way.

You'll probably want to keep the error recovery on the sys$mount call in any case, even with Rob's suggested (and useful) mount-count item code, as it's entirely possible for the disk's MOUNT or allocation status to change between the test and the sys$mount.
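In that spirit, the catch-the-error approach from DCL is a short sketch (device and volume names are illustrative):

```
$ ! Sketch: attempt the mount and branch on the returned status
$ SET NOON                         ! don't abort the procedure on a failed MOUNT
$ MOUNT/NOASSIST DKA100: MYVOL
$ status = $STATUS
$ SET ON
$ IF .NOT. status
$ THEN
$     ! e.g. the volume is already mounted elsewhere; handle or skip
$     WRITE SYS$OUTPUT "Mount failed, status = ''status'"
$ ENDIF
```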

(OpenVMS lacks a logical volume manager or anything analogous, which is the sort of thing other systems tend to use to manage volumes in a distributed environment.)

Stephen Hoffman
HoffmanLabs LLC

HDS
Frequent Advisor

Re: Characteristics of a Shared Disk Device

Hello.

Yes, I would agree, but I found one "gotcha" in that mix.

During a system boot, this same availability test also needs to occur. (Note that I am considering alternative methods, especially if this gets too messy.) The test runs, and then the device is $MOUNTed privately. If multiple cluster members boot simultaneously, concurrent private MOUNTs can occur. (The private mount happens before the /SYSTEM mount is done.) When that happens, and one node has not finished its "private" work on the device, the result is an OPCOM request that holds execution until the other node is done. This teetering back and forth significantly delays the boot process.

I do have ways around this and am truly considering them. One is to synchronize the two boot processes' work on that disk...another is to not bother mounting it privately at all and just store the information I need elsewhere, in an indexed file...or a cluster-rooted logical name table.
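The logical-name variant might look something like this sketch (the table is the standard clusterwide system table; the logical name itself is made up for illustration, and defining into that table needs SYSNAM privilege):

```
$ ! Sketch: record, clusterwide, that the one-time private work is done
$ DEFINE/TABLE=LNM$SYSCLUSTER_TABLE MYAPP_PRIVATE_WORK_DONE "YES"
$ !
$ ! ...and on any booting node, test it before attempting the private mount:
$ IF F$TRNLNM("MYAPP_PRIVATE_WORK_DONE","LNM$SYSCLUSTER_TABLE") .EQS. "YES"
$ THEN
$     WRITE SYS$OUTPUT "Private work already done; skipping private mount"
$ ENDIF
```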

It gets involved and is very much a long story...but I did have a reason for doing the private mount prior to the public mount...a rather good reason, but one that may have to give way to practicality. That private mount really only needs to occur once, on any one of the nodes...hence my test for "at least one other node" having it mounted. If it is mounted publicly, I know that the private work was already done.

In any case, I do appreciate your latest suggestion. It will most certainly be considered as I search for a reasonable way to handle my situation. (I may not be able to get that previously mentioned kit installed onto a production system in enough time for it to be useful to me....but I will be trying.)

Many thanks.

-H-
HDS
Frequent Advisor

Re: Characteristics of a Shared Disk Device

For now, we are working on obtaining the install kit and have kept the code as-is, testing the return status.

-H-