Operating System - OpenVMS
Re: Characteristics of a Shared Disk Device
06-17-2008 09:29 AM
Re: OpenVMS V8.3
When one performs a $ SHOW DEVICE/FULL on a disk device mounted /CLUSTER or /SYSTEM (across a set of cluster members), there is a line item that displays the list of cluster nodes that also have the device's volume mounted.
I am looking to obtain that same line-item information through either direct or indirect use of a DCL lexical (F$GETDVI, F$DEVICE, etc.) or a system service ($GETDVI, $DEVICE_SCAN, etc.).
I'd want to retrieve this information from either DCL or Fortran-77/Fortran-90.
In short, the scenario is as follows:
- Disk volume is mounted /SYSTEM on Node1
- The disk is then mounted privately on Node2
- The private mount request fails because the device is already mounted
I'd like to test that device/volume before the $MOUNT is tried. The "MNT" and "AVL" item codes for F$GETDVI indicate that it is okay to mount.
I am aware that I can test the returned error status from the MOUNT, but I am looking for a cleaner way if at all possible.
Many thanks for any assistance.
-H-
06-17-2008 10:34 AM
Solution

This was added for V8.3-1H1, and backported all the way back to V7.3-2. You'd need both a SYS and a DCL kit in order to use this from DCL. The backporting work covers only the SYS$ and F$ variants; the LIB$ form does not include this new item code.
For use from DCL, you simply use the item code as normal. For use from a real language, you'll need to locally define the value of DVI$_MOUNTCNT_CLUSTER to be 494 (decimal).
Normally, this value is defined in $DVIDEF in the language-specific STARLET libraries.
-- Rob
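A minimal DCL sketch of the check described above, once the SYS and DCL kits are installed. It assumes the item code surfaces in F$GETDVI under the name MOUNTCNT_CLUSTER (matching DVI$_MOUNTCNT_CLUSTER); the device and volume names are only placeholders:

```
$! Count the cluster nodes that have this volume mounted.
$! MOUNTCNT_CLUSTER is assumed to be the F$GETDVI spelling
$! of the DVI$_MOUNTCNT_CLUSTER item code (decimal 494).
$ DEV = "DKA100:"
$ CNT = F$GETDVI(DEV, "MOUNTCNT_CLUSTER")
$ IF CNT .GT. 0
$ THEN
$   WRITE SYS$OUTPUT "Mounted on ''CNT' node(s); skipping private mount."
$ ELSE
$   MOUNT/NOASSIST 'DEV' MYVOLUME
$ ENDIF
```

From a compiled language, the same check would call SYS$GETDVI with an item-list entry whose item code is locally defined as 494, as noted above.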
06-17-2008 10:39 AM
Re: Characteristics of a Shared Disk Device
Thank you.
Thank you.
That should do it for me.
I will leave this thread open for a day or so...just to see if some other items are posted.
In the meantime, I will be requesting the mentioned kits.
-H-
06-17-2008 10:58 AM
Re: Characteristics of a Shared Disk Device
It's perfectly feasible to have a disk mounted more than once and entirely in parallel; multiple mounts are fully supported, so long as the mount operations are compatible. And this can be quite useful, too, as it is a technique that keeps a disk from being dismounted out from under an application.
Put another way, it can be quite possible to MOUNT a disk that has a non-zero mount count, depending on the MOUNT command(s) involved.
I'd suggest you simply continue to use the $MOUNT, and catch the errors. That's the cleanest way.
You're probably going to want to keep the error recovery on the sys$mount call in any case, even with Rob's suggested (and useful) mount-count item code, as it's entirely possible for the disk's MOUNT or allocation status to change between the test and the sys$mount.
(OpenVMS lacks a logical volume manager or anything analogous, which is the sort of thing other systems tend to use here to manage volumes in a distributed environment.)
Stephen Hoffman
HoffmanLabs LLC
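The approach of attempting the MOUNT and handling the failure might look like the following DCL sketch; the device and volume names are placeholders:

```
$! Attempt the private mount and examine the status afterwards,
$! rather than testing the device beforehand.
$ SET NOON                          ! don't abort on MOUNT failure
$ MOUNT/NOASSIST DKA100: MYVOLUME
$ STATUS = $STATUS
$ SET ON
$ IF .NOT. STATUS
$ THEN
$   WRITE SYS$OUTPUT "Mount failed: ''F$MESSAGE(STATUS)'"
$ ENDIF
```

Because the mount itself is the test, there is no window between a separate availability check and the MOUNT in which another node can change the device's status.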
06-17-2008 11:22 AM
Re: Characteristics of a Shared Disk Device
Yes, I would agree, but I found one "gotcha" in that mix.
During a system boot, this same test for availability also needs to occur. (Note that I am considering alternative methods, especially if this gets too messy.) The test would run and then $MOUNT the device privately. If multiple members of the cluster boot simultaneously, concurrent private MOUNTs can be attempted. (The private mount occurs before the /SYSTEM mount is done.) When that happens, and one node is not done with its "private" work on the device, the result is an OPCOM request that holds execution until the other node is done. This teetering back and forth significantly delays the boot process.
I do have ways around this and am genuinely considering them. One is to synchronize the two boot processes' work on that disk; another is to not bother mounting it privately at all and to store the information I need elsewhere, in an indexed file or a cluster-rooted logical name table.
It gets involved and is very much a long story, but I did have a reason for doing the private mount prior to the public mount; a rather good reason, but one that may have to give way to practicality. The private mount really only needs to occur once, on any one of the nodes; hence my test for "at least one other node" having it mounted. If it is mounted publicly, I know that the private work was done already.
In any case, I do appreciate your latest suggestion. It will most certainly be considered as I search for a reasonable way to handle my situation. (I may not be able to get the previously mentioned kit installed on a production system in time for it to be useful to me, but I will be trying.)
Many thanks.
-H-
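The once-per-cluster gate described above could be sketched as follows, with the caveat from the earlier reply that another node may still mount the volume between the test and the MOUNT. All names, including the DO_PRIVATE_WORK.COM procedure, are hypothetical:

```
$! Boot-time gate: do the one-time private work only if no
$! other node has the volume mounted yet, then mount /SYSTEM.
$ IF F$GETDVI("DKA100:", "MOUNTCNT_CLUSTER") .EQ. 0
$ THEN
$   MOUNT/NOASSIST/NOSHARE DKA100: MYVOLUME    ! private mount
$   @SYS$MANAGER:DO_PRIVATE_WORK.COM           ! hypothetical one-time setup
$   DISMOUNT DKA100:
$ ENDIF
$ MOUNT/SYSTEM DKA100: MYVOLUME
```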
06-19-2008 04:15 AM
Re: Characteristics of a Shared Disk Device
-H-