Operating System - OpenVMS
03-24-2006 08:19 AM
avoiding ghost disk devices across cluster
Large VMS 7.3-2 cluster. Most systems have their own locally attached system disk, so I have MSCP_SERVE_ALL set so I can access them across the cluster (you can't use SET DEVICE/SERVED with system disks).
In this environment, how can I prevent a temporary disk device on one system from being seen by the rest of the cluster?
I know I can stop the CONFIGURE process while I'm using the device, but when I restart CONFIGURE after the device is no longer online, it still shows up throughout the rest of the cluster.
I have tried SET DEVICE /NOAVAILABLE before restarting CONFIGURE, but that doesn't help. For some reason VMS has never supported SET DEVICE /NOSERVED, and there isn't any way I know of to "delete" an unused device from the VMS device tables.
Why I want this: we have an EVA SAN and I use snapclones for online backups. On occasion I need to temporarily present a set of snapclones to one VMS node (not cluster-wide). I like to present them using device names that follow logically (i.e., if the source volume is DGA101, the first backup is presented as DGA201, the next as DGA301). But this means that over the course of a few months without a reboot I might create dozens of DGA units that are no longer online. It's OK that they still show up on that one node, but I really don't want dozens of ghost devices showing up on every node of the cluster.
Or can anyone tell me if there is any downside to not running CONFIGURE at all after boot completes? I noticed that even though CONFIGURE isn't running on node A, if node B reboots it still sees node A's system disk.
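For context, MSCP_SERVE_ALL is a bit-mask SYSGEN parameter set in MODPARAMS.DAT. A sketch of the commonly documented bit values (verify against the cluster manual for your VMS version):

```
! SYS$SYSTEM:MODPARAMS.DAT -- MSCP_SERVE_ALL is a bit mask:
!   1 = serve all available disks
!   2 = serve only locally connected disks
!   4 = serve the system disk
!   8 = serve only disks whose allocation class matches this
!       node's ALLOCLASS
MSCP_SERVE_ALL = 4      ! e.g. serve just the system disk
```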
2 REPLIES
03-24-2006 10:18 AM
Re: avoiding ghost disk devices across cluster
I have MSCP_SERVE_ALL = 4 ("serve the system disk"), and then I have an explicit "SET DEVICE /SERVED device" command in the procedure where the other (normal) disks get mounted. Thus, all the normal disks get served, but exotic ones do not.
I did this to avoid having a CD-R/RW drive get served, as I usually leave it powered off, and endless errors get logged if a served disk goes away.
I also have a SYS$MANAGER:SYCONFIG.COM, so that I could configure an old Yamaha CD-R/RW drive manually, to keep it from appearing (spuriously) at multiple LUNs. Since the last round of hardware upgrades, I see now that this is probably no longer needed, but while looking around for the info relevant to this reply, I was reminded that when I first activated SYS$MANAGER:SYCONFIG.COM, my CONFIGURE process vanished, due to quirks in SYS$STARTUP:VMS$DEVICE_STARTUP.COM. Before I figured out (that is, "was told") what the problem was, I got severely confused by disks not being served as expected, depending on the boot order of the systems in the cluster.
So, I'd advise against whacking the CONFIGURE process, unless you're sufficiently smarter than I was.
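A minimal DCL sketch of that scheme (device and volume names here are illustrative, not from the original post):

```
$! Fragment of a disk-mount procedure, assuming MSCP_SERVE_ALL = 4
$! so that only the system disk is served automatically.  Each
$! "normal" disk is served explicitly just before it is mounted;
$! temporary or exotic devices are simply never named here, so
$! they stay local to this node.
$ SET DEVICE/SERVED $1$DGA101:
$ MOUNT/SYSTEM $1$DGA101: USERVOL USERVOL
```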
03-25-2006 02:21 AM
Re: avoiding ghost disk devices across cluster
I wonder if MSCP_SERVE_ALL = 9 would do what you want?
From the cluster manual:
"All disks except those with allocation classes that differ from the system's allocation class (set by the ALLOCLASS parameter) are served."
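If you try that suggestion, the change would go through MODPARAMS.DAT and AUTOGEN in the usual way (a sketch; verify the bit value against the cluster manual for your VMS version):

```
$! Append to SYS$SYSTEM:MODPARAMS.DAT:
$!   MSCP_SERVE_ALL = 9   ! serve all disks, but only those whose
$!                        ! allocation class matches this node's
$!                        ! ALLOCLASS
$! then regenerate parameters and reboot:
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK
```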
____________________
Purely Personal Opinion