New 2 node Cluster
04-23-2006 12:15 PM
connected through Fibre Channel 2/8 SANswitch ELs. Both systems have 2 Gb HBAs. There are Gigabit Ethernet cards in each system for cluster communication; no Memory Channel. I have looked everywhere for step-by-step instructions on how to do this, but can't seem to find anything. I am planning on going to VMS 7.3-2. Any help would be appreciated.
04-23-2006 12:38 PM
Solution
Actually, you bring up one of the systems, and then create the system root for the other one on the same system disk. Then you are ready to boot the other node off the alternate system root.
Note: in a two-node cluster, you will either need to define one machine as a "Must Have," or you will need to use a quorum disk.
To be fully supported at this point, you probably want to go to 8.2 (8.3 is already in Field Test).
If the nodes cannot see one another, you may be dealing with a network problem. For starters, try connecting the two systems together with a reversal cable (no switches or other components to create complexity).
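The root-creation step described here is driven by the CLUSTER_CONFIG procedure. A minimal sketch of the session follows; the node name and root number in the comments are illustrative assumptions, not values from this thread, and the exact prompts vary by OpenVMS version:

```
$ ! On the running node, add a root for the second node to the
$ ! shared system disk. The procedure prompts for the new node's
$ ! name, SCS node ID, root number (e.g. SYS1), and interconnect.
$ @SYS$MANAGER:CLUSTER_CONFIG
$ ! ... choose the "ADD a node" option and answer the prompts ...
```

After the procedure completes, the second node boots from the new root on the same system disk.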
- Bob Gezelter, http://www.rlgsc.com
04-23-2006 08:40 PM
Re: New 2 node Cluster
Be sure to read the cluster manual first.
Then install OpenVMS on one of the systems and run the CLUSTER_CONFIG procedure afterwards.
It will convert the current node to a cluster system. After that, you use the same procedure to create an additional root for the 2nd system.
Be sure to set the boot values on the 2nd system so it boots from the 2nd root.
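On an AlphaServer's SRM console, setting those boot values might look like the sketch below. The device name is a made-up example (a Fibre Channel system disk will have a site-specific `dga` path); `1,0` in BOOT_OSFLAGS means root SYS1, boot flags 0:

```
>>> SET BOOTDEF_DEV dga100.1001.0.1.0
>>> SET BOOT_OSFLAGS 1,0
>>> BOOT
```

The first node, booting from root SYS0, would keep BOOT_OSFLAGS at `0,0`.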
regards Kalle
04-23-2006 09:18 PM
Re: New 2 node Cluster
Before you start building the cluster, you must plan and prepare everything.
In your case, first read the manuals, then prepare the SAN disks, test the connections to the SAN, and check the network infrastructure.
Gigabit Ethernet is OK for the cluster interconnect.
I would use a quorum disk.
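A quorum-disk configuration is typically expressed in each node's MODPARAMS.DAT and applied with AUTOGEN. A sketch, in which the disk name is an illustrative assumption (any shared FC disk both nodes can see):

```
! Additions to SYS$SYSTEM:MODPARAMS.DAT on each node.
! $1$DGA10 is an example name, not from this thread.
DISK_QUORUM = "$1$DGA10"   ! shared disk acting as quorum disk
QDSKVOTES = 1              ! votes contributed by the quorum disk
VOTES = 1                  ! this node's votes
EXPECTED_VOTES = 3         ! node + node + quorum disk
```

Followed on each node by something like `$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK` to regenerate parameters and reboot.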
04-24-2006 05:01 AM
Re: New 2 node Cluster
The answer depends on your business requirements. You could consolidate these 4 systems to use 1 system disk, you could configure each of the ES-45s to boot from one of the current system disks, or you could create a new system disk for each system. Do you really need or require three system disks?
Since you have a current cluster, you need to have the cluster number and cluster password. These are stored in SYS$SYSTEM:CLUSTER_AUTHORIZE.DAT, and you can copy this file from your existing cluster to the new system disks. I don't believe you can recover the cluster password/number combination if you don't have this file.
You'll also need to consider quorum with the change in nodes. Your business requirements will drive this. A typical configuration is to give each node 1 vote.
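The arithmetic behind that one-vote-per-node advice: quorum is computed as (EXPECTED_VOTES + 2) / 2 with integer division. A quick check at the DCL prompt:

```
$ ! Two nodes, 1 vote each: EXPECTED_VOTES = 2
$ WRITE SYS$OUTPUT (2 + 2) / 2   ! quorum = 2; either node down hangs the cluster
$ ! Add a quorum disk worth 1 vote: EXPECTED_VOTES = 3
$ WRITE SYS$OUTPUT (3 + 2) / 2   ! quorum = 2; cluster survives one node failure
```

This is why a two-node cluster wants either a quorum disk or an unequal vote split.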
Availability Manager is good to have set up: http://h71000.www7.hp.com/openvms/products/availman/ I have it running at most sites, and at all clusters.
The cluster documentation might be slightly overwhelming, but it is complete.
"There are nine and sixty ways of constructing tribal lays, And every single one of them right"
Andy
04-24-2006 05:10 AM
Re: New 2 node Cluster
Then you may just boot one of the ES45s from an old system disk and create the additional root for the 2nd ES45 using CLUSTER_CONFIG.COM.
regards Kalle
04-24-2006 05:24 AM
Re: New 2 node Cluster
The easiest path would be to pick one of the ES-40 system disks and add two additional roots. Boot the ES-45s from the new roots. This gives you all four nodes running: three from system disk "A" and the remaining ES-40 from system disk "B." Test, tune, and burn in your new systems. Reconfigure quorum, then turn off the ES-40s.
If your environment doesn't allow this, then install OpenVMS on one ES-45 and use SYS$MANAGER:CLUSTER_CONFIG to set up the cluster and the additional boot root. Again, your business requirements will drive the technical choices.
The same thoughts about quorum apply. You'll probably want to reconfigure before turning off the ES-40s.
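Lowering quorum on the running cluster after the ES-40s are gone can be done from DCL; the value below assumes the two remaining ES-45s carry 1 vote each:

```
$ ! After the ES-40s are shut down, tell the cluster its new vote total
$ SET CLUSTER/EXPECTED_VOTES=2
```

Without this, the cluster keeps the higher quorum computed from the old EXPECTED_VOTES and can hang when a node is removed.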
OpenVMS 7.3-2 will remain in extended support. While there are good arguments for staying current with the operating system, as the final version 7 release it will stay supported for an extended period of time. See http://h71000.www7.hp.com/openvms/roadmap/openvms_roadmaps.htm
Andy
04-24-2006 08:10 AM
Re: New 2 node Cluster
Being a big supporter of cluster continuity, I can do nothing else but support Andy's last entry.
That way you have your current config continue running while validating the new systems.
Then __JUST__ migrate the activity to the other node(s), still using the same disks.
_IF_ you are also changing storage, then moving THAT is a separate activity.
In the end there are _more_ activities, but each separate one is of _LESSER_ complexity.
Summary: smaller, more spread-out risks.
And I consider that the _BIG_ win in this scenario.
hth.
Proost.
Have one on me (maybe in May in Nashua?)