Dynamic Choice of VMS System to Boot As...
09-16-2005 04:37 AM
I am interested in whether it is possible to build a common cluster system disk (on a SAN) between a pair of AlphaServers, with NODEA and NODEB set up as the two node names.
If NODEA has a hardware failure, I want to be able to reboot NODEB and have it come up as if it were NODEA.
I am building a new pair of ES40s to replace the previous nodes and have the chance to use a proper common system disk, so I only have to manage one setup, etc. But I don't know how to tell NODEB that it should be treated as "SYS0" instead of "SYS1" dynamically at the boot prompt...
NOTE: The original NODEA would not be allowed to return unless it is manually made to become NODEB.
Believe me, I know this is not the way OpenVMS disaster-tolerant clustering was designed to work! This models a failover method in use today where each node has its own discrete internal boot disk(s); in the event of a hardware failure, the system disks of NODEA are physically swapped with the disks of NODEB and the machine is rebooted. The application only runs on NODEA.
It's sad, but NODEB is really just a hot spare for NODEA. I use it for maintenance, backups, admin, (SETI), etc. all the time while the application runs on NODEA. It has a lot of spare CPU cycles.
Rick
09-16-2005 05:06 AM
Re: Dynamic Choice of VMS System to Boot As...
Ooooh! This is looking up. I will be able to test this soon when the boxes get here. I am looking forward to it!
Now I just have to learn how to boot from a SAN disk. :) I have some notes on WWIDMgr that I see talk about it; I need to read them now. :) We have had a SAN here for years, but the boxes always booted from their internal SCSI bus. I wanted to go with SAN boot if I could, but I had drawn a blank on how to keep the ability to fake the same system booting no matter what the hardware. I was afraid I was going to have to use up 4 slots on my SAN to have two shadow pairs for separate system disks...
Thanks Uwe!
I wish I could buy you a cold one!!! But, do have a couple for me! Some day I will get to attend a meeting with some of you guys!
Rick
09-16-2005 05:16 AM
Re: Dynamic Choice of VMS System to Boot As...
To boot from root SYS1:
>>> SET BOOT_OSFLAGS 1,0
To boot from root SYS0:
>>> SET BOOT_OSFLAGS 0,0
If you do this, be very careful to define procedures that avoid starting both systems from the same root at the same time. Switching nodes this way is a very simple method of dropping in backup hardware, especially with the same model of Alpha.
You can use F$GETENV to see which root you booted from:
$ this_root = f$getenv("boot_osflags")
$ show sym this_root
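Building on that (a hypothetical sketch, not from this thread; START_APPLICATION.COM is an invented name), a shared SYS$MANAGER:SYSTARTUP_VMS.COM could branch on the boot root so that whichever box comes up from root 0 takes the NODEA role:
$! Sketch: start the application only on the system booted from root 0
$ this_root = f$getenv("boot_osflags")        ! e.g. "0,0" or "1,0"
$ if f$element(0, ",", this_root) .eqs. "0"
$ then
$     write sys$output "Booted from root 0 - starting application..."
$     @SYS$MANAGER:START_APPLICATION.COM      ! hypothetical startup procedure
$ else
$     write sys$output "Booted from root ''this_root' - hot-spare role"
$ endif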
How does the application communicate with your users? You can use TCP/IP aliasing to provide a service address that migrates from node to node.
DECnet and LAT also have options for presenting a cluster address. This allows both nodes to be running, and speeds the application's transition from node to node.
Having both nodes configured to support your application makes remote management easier and reduces application transition time.
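As one concrete illustration of the DECnet option (written from memory, so treat it as a sketch and verify against the DECnet documentation; the alias name CLSTR and address 1.100 are invented), a Phase IV cluster alias is defined through NCP on each member:
$ MCR NCP
NCP> DEFINE NODE 1.100 NAME CLSTR        ! hypothetical alias address/name
NCP> DEFINE EXECUTOR ALIAS NODE CLSTR
NCP> DEFINE EXECUTOR ALIAS INCOMING ENABLED
With the alias in place, users connect to CLSTR and reach whichever member is up, rather than a specific box.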
Andy
09-16-2005 05:19 AM
Re: Dynamic Choice of VMS System to Boot As...
You can start all over from scratch:
>>> wwidmgr -clear all
Look for the devices:
>>> wwidmgr -show wwid
>>> wwidmgr -show wwid -full
...
Configure the device paths:
>>> wwidmgr -quickset -udid 599
where 599 is the unit identifier. The trick is to INITIALIZE the system at this stage so that you can later boot from it.
>>> initialize
...
You should now see the configured paths:
>>> show device
...
dga599.1001.0.2.1 $1$DGA599 HP HSV100 3014
dgb599.1002.0.2.3 $1$DGA599 HP HSV100 3014
...
>>> set bootdef_dev dga599.1001.0.2.1,dgb599.1002.0.2.3
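As a sanity check before booting (standard SRM console commands), you can confirm the stored values:
>>> show bootdef_dev
>>> show boot_osflags
A plain >>> boot then uses both stored values, so the same command brings the box up as whichever root BOOT_OSFLAGS points at.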
09-16-2005 05:27 AM
Re: Dynamic Choice of VMS System to Boot As...
This old app is a VT-based one. It does not really support cluster rollover; it has stuff that is tied to node names in the queue manager and the startup scripts. It is old, end-of-life from the vendor, etc.
The trick used here looks like it will work just great for me. Sort of remote-control disk swapping from my kitchen table at the console. :)
Uwe: Thanks for the quick wwidmgr stuff! I just wish I HAD an HSV like the one you use in your example! Those look sweet. I've still got two pairs of HSG80s and SAN switch pairs (and they are still good too!).
09-16-2005 05:34 AM
Re: Dynamic Choice of VMS System to Boot As...
>>>show device dg
dga101.1001.0.9.2 $1$DGA101 COMPAQ MSA1000 VOLUME 4.32
dga102.1001.0.9.2 $1$DGA102 COMPAQ MSA1000 VOLUME 4.32
dgb101.1002.0.10.2 $1$DGA101 COMPAQ MSA1000 VOLUME 4.32
dgb102.1002.0.10.2 $1$DGA102 COMPAQ MSA1000 VOLUME 4.32
>>>set bootdef_dev dga101.1001.0.9.2,dgb101.1002.0.10.2
09-16-2005 05:41 AM
Re: Dynamic Choice of VMS System to Boot As...
I just meant it would be nice to have one of the newer HSV arrays instead of the older, slower HSG boxes. :) Not that there is anything wrong with them! They beat an HSZ20 and RA230+ I have elsewhere or at home...
09-16-2005 07:25 AM
Re: Dynamic Choice of VMS System to Boot As...
It really _IS_ that simple, especially if your node hardware is identical.
The node identity is NOT the hardware, but the software that it is booted from.
(And yes, the hardware and the software must be totally interchangeable to do it without adjustments.)
But if the systems:
- are the same hardware type
- have the same number & type of CPUs
- have the same amount of memory
- have the same type of network cards
- have NO directly attached terminals or other peripherals
then it is just a matter of swapping the boot roots, and the systems ARE swapped, as sketched below!
Including (DECnet & IP) network addresses, and EVERYthing (except the serial numbers in SHOW CPU).
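To make that concrete (a sketch assuming the SYS0/SYS1 roots from earlier in the thread), the whole identity swap happens at the consoles. On the surviving box (formerly NODEB), to come up as NODEA:
>>> set boot_osflags 0,0
>>> boot
And if the repaired box ever returns, point it at the other root first so it comes back as NODEB:
>>> set boot_osflags 1,0
>>> boot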
Been there, done that.
Proost.
Have one on me.
jpe
09-16-2005 08:30 AM
Re: Dynamic Choice of VMS System to Boot As...
I do this now with the independent disks and my feet (true sneakernet!). But being able to connect to the console from home (when does unexpected downtime ever happen 9-5, M-F?!?), change the boot command, and get it going will give up to an hour faster response compared to getting there by car and so forth.
Thanks All!
09-16-2005 11:29 PM
Re: Dynamic Choice of VMS System to Boot As...
I have done this on several occasions at different sites. It is one of the underappreciated possibilities of OpenVMS clustering, and a very powerful tool.
Cluster nodes are more than boxes; they are virtualizations of roles in an overall configuration.
In many installations, this is presumed to be exactly equal to "which box is which node". Say (boot root on the common system disk in parentheses):
Production Primary Node ALPHA:    ES40 (0)
Production Secondary Node BETA:   ES45 (1)
Development Node CHARLE:          DS25 (2)
And that works for many sites. However, it is more powerful to assign cluster names by function, not by hardware box. Using the same notation as above:
Production Primary Node ALPHA:    ES40 (0),  ES45 (1),  DS25 (2)
Production Secondary Node BETA:   ES40 (10), ES45 (11), DS25 (12)
Development Node CHARLE:          ES40 (20), ES45 (21), DS25 (22)
I alluded to such a configuration in my article in Volume 3 of the OpenVMS Technical Journal (a reprint of which can be found at my www site at http://www.rlgsc.com/publications/vmstechjournal/inheritance.html or on the HP OpenVMS www site).
The strongest advantage of this approach is that it allows you to treat your hardware as a pool of systems which run, in effect, virtual nodes. If a situation requires you to alter which physical box is used for each of the roles in your configuration, you are a pre-configured reboot away from being finished.
Each of the boot roots can have its own set of parameters and settings, so manual configuration changes in an emergency are eliminated.
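To illustrate what "its own set of parameters" means on disk (standard common-system-disk layout; the root numbers follow the example above), each root is a separate [SYSn] directory tree on the shared disk, holding that node's specific files such as MODPARAMS.DAT:
$! Each root has its own tree on the common system disk:
$ DIRECTORY SYS$SYSDEVICE:[000000]SYS*.DIR
$! SYS$SPECIFIC points at the root this system booted from:
$ TYPE SYS$SPECIFIC:[SYSEXE]MODPARAMS.DAT
Because SYS$SPECIFIC resolves per root, each "virtual node" keeps its own SYSGEN parameters regardless of which physical box booted it.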
It is, indeed, a spectacularly powerful capability.
- Bob Gezelter, http://www.rlgsc.com