
VMS cluster

 
SOLVED
Frequent Advisor

VMS cluster

Is it possible to add a node to an existing cluster over the LAN without using MOP or a network boot?

If so, please suggest how.

Note : No fibre or shared SCSI is available

Thanks
13 REPLIES
Honored Contributor

Re: VMS cluster

I would say yes, if you add a node that has its own system disk.

But I guess you want a homogeneous cluster...
Honored Contributor

Re: VMS cluster

Why not ?

$ @sys$manager:cluster_config_lan

on the new node, but you'll need to know some data (cluster ID and password).
Or set the according SYSGEN parameters by hand and copy SYS$SYSTEM:CLUSTER_AUTHORIZE.DAT to the system directory of the node to add.

Reboot and you're (almost) done.
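For the manual route, a sketch of the SYSGEN parameters involved. The values below are assumptions for a simple two-node LAN cluster; set them via MODPARAMS.DAT and AUTOGEN rather than poking SYSGEN directly:

```
$ ! Sketch only - parameter values are assumptions for a two-node cluster.
$ ! Append to SYS$SYSTEM:MODPARAMS.DAT:
$ !     VAXCLUSTER = 2        ! always join a cluster at boot
$ !     NISCS_LOAD_PEA0 = 1   ! load PEDRIVER for SCS over the LAN
$ !     EXPECTED_VOTES = 2    ! total votes in the complete cluster
$ !     VOTES = 1             ! this node's votes
$ ! Then regenerate parameters and reboot:
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK
```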

Be aware that some files should be shared by all nodes, like SYSUAF and RIGHTSLIST. Also, the license database should be shared by all nodes.
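One common way to share these files is with system logical names, typically defined in SYS$MANAGER:SYLOGICALS.COM on each node. A sketch, where the shared volume and directory names are assumptions:

```
$ ! Assumed shared volume DSA1:[VMS$COMMON] - adjust to your configuration
$ DEFINE/SYSTEM/EXEC SYSUAF     DSA1:[VMS$COMMON]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST DSA1:[VMS$COMMON]RIGHTSLIST.DAT
```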

Willem Grooters
OpenVMS Developer & System Manager
Frequent Advisor

Re: VMS cluster

Thanks for the response.

The servers each have their own system disk.

Could you please suggest how I should proceed with the addition?

I have used CLUSTER_CONFIG to add the node and answered No where shared SCSI or Fibre Channel was asked about.

It should not be a satellite node, I guess.

Thanks
Honored Contributor

Re: VMS cluster

Fox,

>>>
The servers each have their own system disk.
<<<
>>>
It should not be a satellite node,I guess.
<<<

Correct guess.
Satellites --share-- the system disk by booting over the LAN, which you specified you do NOT want.

You need either the cluster ID & password (if you have some way of knowing them), or, before joining the cluster, you need to network-copy SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT from the existing cluster to the node-to-be-added.

It is wise to also enter the licenses for the new node into the cluster, and then copy the LMF database.
Just before you boot the new node into the cluster, add a DEFINE/SYSTEM/EXEC LMF$LICENSE to point to the common database on the cluster common disk.
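A sketch of that logical name definition; the device and path here are assumptions:

```
$ ! Point LMF at the cluster-common license database (path assumed)
$ DEFINE/SYSTEM/EXEC LMF$LICENSE DSA1:[VMS$COMMON]LMF$LICENSE.LDB
```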

Success.

Proost. (Cheers.)

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Honored Contributor

Re: VMS cluster

(@Jan,
There is no such disk: NO fibre, No shared SCSI (and no SAN, I guess))

So either copy the files (where you will find some trouble keeping these files - and others - synchronized) or keep them on one system and refer to them from the other (which renders the whole cluster inaccessible if that node fails).

You'll encounter more challenges. To name a few:

Disks to be available to all nodes need to be MSCP-served. This is one of those SYSGEN parameters set by the procedure (or manually). Refer to the documentation for details. To be able to access these disks on the other nodes, they must be mounted /CLUSTER. Shadowing might be possible but imposes an even larger load on your network.
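A sketch of the pieces involved; the parameter values, device name, and label are assumptions:

```
$ ! In SYS$SYSTEM:MODPARAMS.DAT on the serving node (then run AUTOGEN):
$ !     MSCP_LOAD = 1         ! load the MSCP server
$ !     MSCP_SERVE_ALL = 2    ! serve locally attached (non-system) disks
$ ! On each node, mount the served disk cluster-wide:
$ MOUNT/CLUSTER $1$DKA200: DATADISK
```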

If your LAN is heavily used, be aware that the SCS protocol implies a heartbeat, and that the absence of a reply to it will cause the node to 'disappear' from the cluster. Be sure to set your votes correctly, to prevent a split cluster with all the hazards that implies.

Nodes should have their SCS traffic in the same LAN segment - SCS is NOT IP! Be aware that more and more network equipment is IP-only, and you will no doubt run into trouble if you're running SCS on such a network.

Be aware SCS traffic is not secure.

The nice way is to separate SCS from 'normal' LAN traffic: have the SCS protocol run over a network of its own - doubled, if high availability is a must (which I doubt in this configuration) - and use SCACP to restrict SCS to that network only.
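A sketch of inspecting and steering SCS LAN usage with SCACP; the device name and priority value are assumptions for illustration:

```
$ RUN SYS$SYSTEM:SCACP
SCACP> SHOW LAN_DEVICE                   ! list LAN devices carrying SCS
SCACP> SET LAN_DEVICE EWB0/PRIORITY=10   ! prefer the dedicated SCS LAN
SCACP> EXIT
```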





Willem Grooters
OpenVMS Developer & System Manager
Frequent Advisor

Re: VMS cluster

Now the scenario has changed a bit:

Let's just say I have two standalone VMS boxes and I want to join the two boxes into a cluster over the LAN.

I ran @CLUSTER_CONFIG_LAN on one node, "Green", and changed the SYSGEN parameter VAXCLUSTER to 2.

I have copied the file SYS$SYSTEM:CLUSTER_AUTHORIZE.DAT to the other node, "Blue".

When Green was booted up, it was a member of the cluster.

What next ?

Thanks
Honored Contributor

Re: VMS cluster

FOX2,

In the last posting, it was indicated that running CLUSTER_CONFIG_LAN on "GREEN", changing the VAXCLUSTER parameter was all that was necessary for the machine to "join the cluster". It was then asked "I have copied CLUSTER_AUTHORIZE to 'BLUE'. What next?"

For the purpose of being a "member", the same steps are needed on "BLUE" as on "GREEN".

What several other posters have alluded to is that a multi-system disk cluster is a bit more complex to run than a single system disk cluster. One has to establish either:
- commonly accessible files for the authorization files (e.g., SYSUAF, RIGHTSLIST, etc.); or
- means for ensuring that the files stay synchronized (remember, actual system and
file protections are based upon UICs, not user names).

This is not a trivial concern. One must ensure that the UAFs of systems joining the cluster do not conflict in their UIC assignments with those already in use on the cluster. The same with RIGHTSLIST identifiers.
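One quick way to compare UIC assignments before merging is to produce an account listing on each system with AUTHORIZE (a sketch; run on each node and compare the resulting listings):

```
$ ! List all accounts with their UICs; writes SYSUAF.LIS in the
$ ! current directory for side-by-side comparison between systems
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> LIST */BRIEF
UAF> EXIT
```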

Personally, I do this BEFORE I join the node to the cluster. It is a far safer alternative.

One common solution is to establish a small shared volume (MSCP-served if nothing else) and place the UAF and related files (generally including the Queue Manager files) on the shared volume.

A thorough reading of the OpenVMS Guide to Clustering (available from the OpenVMS www site) is highly recommended.

- Bob Gezelter, http://www.rlgsc.com
Valued Contributor

Re: VMS cluster

This might not be possible if the distance between the nodes is too great, but if you happen to have a multi-channel Ethernet card with a free port, you could run a CAT5 crossover cable between the nodes and use it as a dedicated cluster interconnect.
Sr. Systems Janitor
Honored Contributor

Re: VMS cluster

Assuming OpenVMS V7.2 or later, first read, and then synchronize or share, all of the files referenced in the file SYLOGICALS.TEMPLATE as a start.
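The template boils down to DEFINE/SYSTEM statements for the cluster-common files. A sketch of a few of them, beyond SYSUAF and RIGHTSLIST; the device and directory on the right are assumptions, and the template itself lists the full set:

```
$ ! Sketch of some cluster-common logicals along the lines of
$ ! SYLOGICALS.TEMPLATE (device/directory names assumed)
$ DEFINE/SYSTEM/EXEC NETPROXY        CLU$COMMON:[SYSEXE]NETPROXY.DAT
$ DEFINE/SYSTEM/EXEC VMSMAIL_PROFILE CLU$COMMON:[SYSEXE]VMSMAIL_PROFILE.DATA
$ DEFINE/SYSTEM/EXEC QMAN$MASTER     CLU$COMMON:[SYSEXE]
```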

And do read the manuals. You're working in the deep end of the proverbial pool here; mistakes made with the settings and configurations of cluster members can and have led to massively corrupted disks.