
New 2 node Cluster

 
SOLVED
LM_2
Frequent Advisor

New 2 node Cluster

I currently have two ES40's clustered together, and each has its own system disk. I am now setting up an ES45 cluster and would like the two ES45's to share the same system disk. Does anyone know the proper procedure for this? I looked in the installation books and can't find it - they're really generic. Do I need to install VMS on each ES45, then bring one of the systems up, run CLUSTER_CONFIG, create a new system disk, and then join the two? I have tried this several times and I can't get the two nodes to see each other. The systems are connected through FibreChannel 2/8 SANswitch EL's, and both have 2 Gb HBA's. There are Gigabit Ethernet cards in each system for cluster communication - no Memory Channel. I have looked everywhere for step-by-step instructions on how to do this but can't seem to find anything. I am planning on going to VMS 7.3-2. Any help would be appreciated.
10 REPLIES
Robert Gezelter
Honored Contributor
Solution

Re: New 2 node Cluster

LM,

Actually, you bring up one of the systems, and then create the system root for the other one on the same system disk. Then you are ready to boot the other node off of the alternate system root.

Note, in a two node cluster, you will either need to define one machine as a "Must Have", or you will need to use a quorum disk.
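
As a rough sketch, a quorum disk ends up expressed in each root's MODPARAMS.DAT along these lines (CLUSTER_CONFIG will ask you about it; the device name here is only an example):

    ! SYS$SPECIFIC:[SYSEXE]MODPARAMS.DAT on each node's root
    VOTES = 1                  ! each node contributes one vote
    EXPECTED_VOTES = 3         ! two node votes + one quorum disk vote
    DISK_QUORUM = "$1$DGA10"   ! quorum disk - example device name only
    QDSKVOTES = 1              ! votes held by the quorum disk

    $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK  ! then apply with AUTOGEN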

To be fully supported at this point, you probably want to go to 8.2 (8.3 is already in Field Test).

If the nodes cannot see one another, you may be dealing with a network problem. For starters, try connecting the two systems together with a crossover cable (no switches or other components to add complexity).
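
Once both nodes are at least trying to form the cluster, you can look at what the LAN cluster port sees, something along these lines (from memory, so check the SCACP help for the exact verbs):

    $ SHOW CLUSTER                 ! members as seen by the connection manager
    $ MC SCACP
    SCACP> SHOW LAN_DEVICE         ! LAN devices enabled for SCS traffic
    SCACP> SHOW CHANNEL            ! channels formed to the other node
    SCACP> EXIT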

- Bob Gezelter, http://www.rlgsc.com
Karl Rohwedder
Honored Contributor

Re: New 2 node Cluster

LM,

be sure to read the cluster manual first.

Then install OpenVMS on one of the systems and run the CLUSTER_CONFIG procedure afterwards.
It will convert the current node to a cluster system. After that you use the same procedure to create an additional root for the 2nd system.
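
In outline it looks something like this (the exact prompts differ between versions, so treat it as a sketch):

    $ @SYS$MANAGER:CLUSTER_CONFIG
    $ ! 1st run, on the freshly installed node: take the option that
    $ !   makes this node a cluster member (it prompts for the cluster
    $ !   group number and password)
    $ @SYS$MANAGER:CLUSTER_CONFIG
    $ ! 2nd run: take the ADD option to create the root (SYS1, say)
    $ !   for the other ES45 - it prompts for node name, SCSSYSTEMID
    $ !   and related parameters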

Be sure to set the boot values on the 2nd system to boot from the 2nd root.
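
On the 2nd system's console that means something like the following, assuming the new root is SYS1 and the shared FC system disk is DGA100 (both names are only examples; your device path will differ):

    >>> SET BOOTDEF_DEV dga100.1001.0.1.0    ! path to the shared system disk (example)
    >>> SET BOOT_OSFLAGS 1,0                 ! root number 1 = SYS1, flags 0
    >>> BOOT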

regards Kalle
Karl Rohwedder
Honored Contributor

Re: New 2 node Cluster

Just to second Robert's advice: if your applications allow, I would go for V8.2 as well.

regards Kalle
Vladimir Fabecic
Honored Contributor

Re: New 2 node Cluster

Just one thing to add.
Before you start building the cluster, you must plan and prepare everything.
In your case, first read the manuals, then prepare the SAN disks, test the connections to the SAN, and check the network infrastructure.
Gigabit Ethernet is fine for the cluster interconnect.
I would use a quorum disk.
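
A quick sanity check that both boxes really see the same SAN units before you start (device names are examples):

    >>> wwidmgr -show wwid          ! at the console (may need SET MODE DIAG first)
    $ SHOW DEVICE FG/FULL           ! FC adapters (FGA0, FGB0, ...) once VMS is up
    $ SHOW DEVICE $1$DGA            ! SAN disks - both nodes should list the same units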
In vino veritas, in VMS cluster
Andy Bustamante
Honored Contributor

Re: New 2 node Cluster


The answer depends on your business requirements. You could consolidate these 4 systems onto 1 system disk, you could configure each of the ES-45's to boot from one of the current system disks, or you could create a new system disk for each system. Do you really need three system disks?

Since you have a current cluster, you need the cluster number and cluster password; these are stored in SYS$SYSTEM:CLUSTER_AUTHORIZE.DAT, and you can copy this file from your existing cluster to the new system disks. I don't believe you can recover the cluster password/number combination if you don't have this file.
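
If you do build a new system disk, carrying the file over is a plain copy, something like (disk names are examples):

    $ COPY OLD_SYS:[VMS$COMMON.SYSEXE]CLUSTER_AUTHORIZE.DAT -
           NEW_SYS:[VMS$COMMON.SYSEXE]CLUSTER_AUTHORIZE.DAT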

You'll also need to consider quorum with the change in nodes. Your business requirements will drive this. A typical configuration is to configure each node with 1 vote.

Availability Manager is good to have set up: http://h71000.www7.hp.com/openvms/products/availman/ I have it running at most sites, and at all clusters.

The cluster documentation might be slightly overwhelming, but it is complete.

"There are nine and sixty ways of constructing tribal lays, And every single one of them right"


Andy
If you don't have time to do it right, when will you have time to do it over? Reach me at first_name + "." + last_name at sysmanager net
LM_2
Frequent Advisor

Re: New 2 node Cluster

Andy - I will not have four systems clustered together - the two ES45's will replace the ES40's.
Karl Rohwedder
Honored Contributor

Re: New 2 node Cluster

Ahhh,

then you may just boot one of the ES45's from an old system disk and create the additional root for the 2nd ES45 using CLUSTER_CONFIG.COM.

regards Kalle
Andy Bustamante
Honored Contributor

Re: New 2 node Cluster


The easiest path would be to pick one of the ES-40 system disks and add two additional roots. Boot the ES-45's from the new roots. This gives you all four nodes running: three from system disk "A" and the remaining ES-40 from system disk "B." Test, tune, and burn in your new systems. Reconfigure quorum, then turn off the ES-40s.
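
When the burn-in is done, something like this on one of the ES-45s (just a sketch):

    $ SHOW CLUSTER                  ! confirm all four members are present first
    $ ! ...shut the ES-40s down cleanly, then on a surviving node:
    $ SET CLUSTER/EXPECTED_VOTES    ! let VMS recompute quorum from the remaining votes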

If your environment doesn't allow this, then install OpenVMS on one ES-45 and use SYS$MANAGER:CLUSTER_CONFIG to set up the cluster and the additional boot root. Again, your business requirements will drive the technical choices.

The same thoughts about quorum apply. You'll probably want to reconfigure before you retire the ES-40s.

OpenVMS 7.3-2 will remain in extended support. While there are good arguments for staying current with the operating system, this release, as the final version 7, will stay supported for an extended period of time. See http://h71000.www7.hp.com/openvms/roadmap/openvms_roadmaps.htm


Andy
If you don't have time to do it right, when will you have time to do it over? Reach me at first_name + "." + last_name at sysmanager net
Jan van den Ende
Honored Contributor

Re: New 2 node Cluster

LM,

being a big supporter of cluster continuity, I can do nothing but second Andy's last entry.
That way you keep your current config running while validating the new systems.
Then __JUST__ migrate the activity to the other node(s), still using the same disks.
_IF_ you also are changing storage, then moving THAT is now a separate activity.
In the end, _more_ activities, but each separate one, of _LESSER_ complexity.
Summary: less, and more spread, risks.
And I consider that the _BIG_ win in this scenario.

hth.

Proost.

Have one on me (maybe in May in Nashua?)
Don't rust yours pelled jacker to fine doll missed aches.