Operating System - OpenVMS
08-10-2009 07:25 PM
remote mount
I have 2 machines in a cluster, one known as penwb3 and the other as penwb4. When I do a SHOW DEV D on penwb3, I see that the system disk of penwb4 ($2$DKA100) is shown as "remote mount" on penwb3.
When I log into penwb4 and do a SHOW DEV D, I can see that the system disk of penwb4 ($2$DKA100) is mounted normally.
From what I have learned from one of our old staff, they normally type this command and it works:
mount/cluster $2$DKA100: ALPHASYSB ALPHASYSB
ALPHASYSB is the volume name of the system disk of penwb4 ($2$DKA100:).
Can someone explain why this happens, and how the volume name ALPHASYSB is given to the disk? Is it given during loading of VMS?
3 REPLIES
08-10-2009 08:26 PM
Re: remote mount
> [...] when i do a show dev d on penwb3,
> i see [...]
You know, that copy+paste stuff would let us
see what you see, so we wouldn't need to rely
on your (incomplete) description.
> can someone explain why this happens [...]
Why _what_ happens? Why the MOUNT command
works? Why someone would use this MOUNT
command?
HELP MOUNT
> [...] how is the volume name of ALPHASYSB
> is given to the disk [...]
HELP INITIALIZE
HELP SET VOLUME /LABEL
> [...] is it given during loading of vms ?
What does "during loading of vms" mean?
Installation? System start-up? INITIALIZE
sets the volume label when the disk is
initialized, which may happen when VMS is
installed. SET VOLUME /LABEL can change it
at a later time.
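To make the HELP pointers above concrete, here is a hedged DCL sketch of the two ways a volume gets its label, using the device and label names from this thread (the /OVERRIDE mount step is one common way to get the disk mounted privately first; adapt to your own configuration):

```
$ ! Write the label when the disk is initialized (this ERASES the disk):
$ INITIALIZE $2$DKA100: ALPHASYSB
$
$ ! Or change the label later. Mount the disk privately first,
$ ! ignoring whatever label is currently on it:
$ MOUNT/OVERRIDE=IDENTIFICATION $2$DKA100:
$ SET VOLUME/LABEL=ALPHASYSB $2$DKA100:
$ DISMOUNT $2$DKA100:
```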
08-11-2009 01:53 AM
Re: remote mount
INIT will ask for a name (volume label) and write it to the volume. You MUST specify a valid name.
A volume label can be changed using SET VOLUME, once the disk is mounted privately.
ANY device should have its own label.
MOUNT's second parameter is the volume name of the device you want to mount, and it MUST match the volume label unless specified otherwise (/OVERRIDE=ID), but that can only be used when the device is mounted privately.
The third parameter usually equals the second; it is the logical name by which the device can be accessed.
A drive mounted /SYSTEM can be accessed by any process on that system. In a cluster, /SYSTEM keeps the drive private to that node, though available to every process on it.
To share a disk with all users on any system in the cluster, specify /CLUSTER.
In your case, $2$DKA100 is mounted /SYSTEM on node penwb4 - obviously, since it is that node's system disk. Specifying MOUNT/CLUSTER of that disk when the system is up and running (typically late in the startup sequence, e.g. via SYSTARTUP_VMS.COM) causes the disk to be mounted on every node - including penwb3. But since the command is given on penwb4, you'll see 'remote mount' on penwb3, and 'mounted' on penwb4.
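As a hedged sketch of the scenario just described (device, label, and node names taken from this thread; the status strings are those the poster reported seeing in SHOW DEVICES output):

```
$ ! On penwb4, whose system disk $2$DKA100 is already mounted /SYSTEM
$ ! at boot, extend the mount to every node in the cluster:
$ MOUNT/CLUSTER $2$DKA100: ALPHASYSB ALPHASYSB
$
$ ! Afterwards, check the device on each node:
$ SHOW DEVICES D
$ ! On penwb4 the disk shows as "Mounted" (the mount is local),
$ ! while on penwb3 it shows as "Remote Mount".
```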
Willem Grooters
OpenVMS Developer & System Manager
08-11-2009 03:45 AM
Re: remote mount
Adarsh,
System disks do not normally mount cluster-wide at boot time. For example, I run a mixed-architecture cluster (Itanium and Alpha); however, I keep the cluster common files on the Itanium system disk (queue manager, SYSUAF, RIGHTSLIST, etc.).
In order to do this, I have to mount the Itanium system disk as part of the Alpha startup. I do this in SYLOGICALS.COM, which is also where I define the logical names for the shared files.
I do not, however, mount the Alpha system disk on the Itanium machines (I have no reason to), so on the Alphas I see both system disks mounted, whereas on the Itaniums the Alpha system disk shows up as "Remote Mount".
Note: if you want to mount a disk as part of the startup, it is only necessary to do a MOUNT/SYSTEM on the systems that need it. If you are doing it after the cluster has booted and you are mounting on multiple systems, MOUNT/CLUSTER is appropriate.
I am assuming that there are valid reasons why you are using multiple system disks within the cluster (and there are several valid reasons). From your post it is not possible to tell whether you are running mixed architecture, internal disks, or some other configuration. I even had one cluster where one system disk was production and one was development.
However, if you don't have a good reason for running separate system disks, you might want to investigate that further.
Hope this helps.
Dave.
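A hedged sketch of what such a SYLOGICALS.COM fragment might look like on the Alpha nodes in the setup Dave describes. The device name ($1$DGA0:), the volume label (IA64SYS), and the file locations are all hypothetical placeholders, not taken from the thread; only the general pattern (mount the other node's system disk, then define /SYSTEM/EXEC logicals for the shared files) is what the post describes:

```
$ ! Hypothetical fragment of SYS$MANAGER:SYLOGICALS.COM on an Alpha node:
$ ! mount the Itanium system disk so the cluster common files are reachable,
$ ! then point the well-known logical names at them.
$ MOUNT/SYSTEM/NOASSIST $1$DGA0: IA64SYS IA64SYS
$ DEFINE/SYSTEM/EXEC SYSUAF     IA64SYS:[VMS$COMMON.SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST IA64SYS:[VMS$COMMON.SYSEXE]RIGHTSLIST.DAT
```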
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP