
adarsh_4
Frequent Advisor

remote mount

I have 2 machines in a cluster; one is known as penwb3 and the other is penwb4. When I do a SHOW DEV D on penwb3, I see that the system disk of penwb4 ($2$DKA100) is shown as "remote mount" on penwb3.

When I log into penwb4 and do a SH DEV D, I can see that the system disk of penwb4 ($2$DKA100) is mounted nicely.

From what I have learned from one of our old staff, they normally type this command and it works:

mount/cluster $2$DKA100: ALPHASYSB ALPHASYSB

ALPHASYSB is the volume name of the system disk of penwb4 ($2$DKA100:).

Can someone explain why this happens, and how the volume name ALPHASYSB is given to the disk? Is it given during loading of VMS?


Steven Schweda
Honored Contributor

Re: remote mount

> [...] when I do a SHOW DEV D on penwb3,
> I see [...]

You know, that copy+paste stuff would let us
see what you see, so we wouldn't need to rely
on your (incomplete) description.

> can someone explain why this happens [...]

Why _what_ happens? Why the MOUNT command
works? Why someone would use this MOUNT
command?

HELP MOUNT

> [...] how the volume name ALPHASYSB
> is given to the disk [...]

HELP INITIALIZE

HELP SET VOLUME /LABEL

> [...] is it given during loading of VMS?

What does "during loading of VMS" mean?
Installation? System start-up? INITIALIZE
sets the volume label when the disk is
initialized, which may happen when VMS is
installed. SET VOLUME /LABEL can change it
at a later time.
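
For illustration, a minimal sketch of both commands (the device name and labels below are made-up examples, not taken from this thread):

$ ! the label is written when the volume is initialized
$ INITIALIZE $2$DKA200: SCRATCH01
$ ! to change the label later, mount the disk privately and use SET VOLUME
$ MOUNT $2$DKA200: SCRATCH01
$ SET VOLUME $2$DKA200: /LABEL=SCRATCH02
$ DISMOUNT $2$DKA200:
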
Willem Grooters
Honored Contributor

Re: remote mount

INIT will ask for a name (volume label) and write it to the volume. You MUST specify a valid name.
A volume label can be changed using SET VOLUME, once the disk is mounted privately.

ANY device should have its own label.

MOUNT's second parameter is the volume name of the device you want to mount, and it MUST match the volume label unless specified otherwise (/OVERRIDE=ID), but that can only be used when the device is mounted privately.

The third parameter usually equals the second, and it is the logical name by which the device can be accessed.

A drive mounted /SYSTEM can be accessed by any process on that system. In a cluster, /SYSTEM keeps the drive local to that node: only processes on that node can access it.
To share a disk to all users on any system in the cluster, specify /CLUSTER.
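
A minimal sketch of the parameter order and the /SYSTEM vs. /CLUSTER distinction (the device name and labels are made up for illustration):

$ ! second parameter = volume label, third parameter = logical name
$ MOUNT/SYSTEM  $2$DKA300: DATA01 DATA01   ! all processes on this node only
$ MOUNT/CLUSTER $2$DKA300: DATA01 DATA01   ! all nodes in the cluster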

In your case, $2$DKA100 is mounted /SYSTEM on node penwb4 - obviously, since it is that node's system disk. Specifying MOUNT/CLUSTER of that disk when the system is up and running (typically later in the startup sequence, e.g. via SYSTARTUP_VMS.COM) causes the disk to be mounted on every node - including penwb3. But since the command is given on penwb4, you'll see 'remote mount' on penwb3, and 'mounted' on penwb4.
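
If that MOUNT/CLUSTER is placed in the startup procedure, a fragment might look like this (putting it in SYSTARTUP_VMS.COM is just the typical spot, as noted above):

$ ! executed on penwb4; every other node, including penwb3,
$ ! then shows the disk as a remote mount
$ MOUNT/CLUSTER/NOASSIST $2$DKA100: ALPHASYSB ALPHASYSB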

Willem Grooters
OpenVMS Developer & System Manager
The Brit
Honored Contributor

Re: remote mount

Adarsh,
System disks do not normally mount cluster-wide at boot time. For example, I run a mixed-architecture cluster (Itanium and Alpha), but I keep the cluster-common files (Queue Manager, SysUAF, RightsList, etc.) on the Itanium system disk.

In order to do this, I have to mount the Itanium System Disk as part of the Alpha startup. I do this in SyLogicals.com, which is where I also define the logical locations for the shared files.
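
A hypothetical fragment of such a SyLogicals.com (the IA64SYS label, device name, and file locations are assumptions for illustration; adjust them to your own layout):

$ ! mount the other node's system disk so the shared files are reachable
$ MOUNT/SYSTEM/NOASSIST $2$DGA10: IA64SYS IA64SYS
$ ! point the cluster-common files at that disk
$ DEFINE/SYSTEM/EXEC SYSUAF     IA64SYS:[VMS$COMMON.SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST IA64SYS:[VMS$COMMON.SYSEXE]RIGHTSLIST.DAT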

I do not, however, mount the Alpha system disk on the Itanium machines (I have no reason to), so on the Alphas I see both system disks mounted, whereas on the Itaniums the Alpha system disk shows up as "Remote Mount".

Note: if you want to mount a disk as part of the startup, it is only necessary to do a
"Mount/System" on the systems which need it. If you are doing it after the cluster has booted and you are mounting on multiple systems, then "Mount/Cluster" is appropriate.

I am assuming that there are valid reasons why you are using multiple system disks within the cluster (and there are several valid reasons). From your post it is not possible to tell whether you are running "mixed architecture", internal disks, or some other configuration. I even had one cluster where one system disk was production and one was development.

However, if you don't have a good reason for running separate system disks, you might want to investigate that further.

Hope this helps.

Dave.