
Re: VMS cluster

 
SOLVED
Willem Grooters
Honored Contributor

Re: VMS cluster

Follow the recommendations made by others: read, and _fully_understand_, the documentation.

If you are running a cluster you'll need to take precautions against problems such as a surviving node hanging when another node stops, or the cluster partitioning into separate clusters. How to prevent this from happening is described in the documentation.
Running a 2-node cluster has some problems of its own, but all of them are covered in the documentation.
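The classic precaution for a two-node cluster is a quorum disk, so that neither node can continue on its own when it merely loses sight of the other. A sketch of the relevant MODPARAMS.DAT entries (device name and values are illustrative, not taken from this thread; check the cluster documentation for your configuration):

```
! MODPARAMS.DAT fragment - illustrative values only
VOTES = 1                  ! each node contributes one vote
EXPECTED_VOTES = 3         ! 2 nodes + 1 quorum disk vote
DISK_QUORUM = "$1$DGA0"    ! hypothetical quorum disk device
QDSKVOTES = 1              ! vote contributed by the quorum disk
```

With these values neither node alone (1 vote) has quorum (2 of 3), but a node plus the quorum disk does, which prevents a partitioned pair from both continuing.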

Now your current scheme:

You now have a "cluster" of 1 node: GREEN. To add BLUE, check the MODPARAMS.DAT file on GREEN to see what CLUSTER_CONFIG has added there. Copy this data to MODPARAMS.DAT on BLUE, change what's required (I don't think there is anything to change, but I cannot check on my nodes), run AUTOGEN to create the new configuration, copy GREEN's CLUSTER_AUTHORIZE.DAT to BLUE, and reboot.
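A sketch of those steps in DCL (node names are from this thread; the file locations are the standard ones, but verify them on your system):

```
$ ! On BLUE, after merging GREEN's cluster entries into
$ ! SYS$SYSTEM:MODPARAMS.DAT, copy the cluster authorization file:
$ COPY GREEN::SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT -
       SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT
$ ! Then regenerate the system parameters and reboot:
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT
```

CLUSTER_AUTHORIZE.DAT holds the cluster group number and password, which is why every member must carry an identical copy.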

Do the same with RED and YELLOW, or whatever names you gave to the other systems you want to add.

That _should_ be it.

If you boot GREEN by itself, it will take some time before it decides to form a cluster. If you then boot BLUE and everything is properly set up, formation of a cluster with GREEN will be signaled almost immediately.
Any other system will have the same behaviour.

Willem Grooters
OpenVMS Developer & System Manager
Mulder_1
Frequent Advisor

Re: VMS cluster

Thanks.

What I did was take an image backup of GREEN and restore it on BLUE.

I shut down GREEN and changed the IP address, DECnet address, etc.


Booted up BLUE and then GREEN.

While booting, each waits to form/join the cluster, but both come up as two separate clusters.

Note: the networks are connected by an unmanaged switch for testing.

Could this be the reason ?
Please suggest

Thanks


Joseph Huber_1
Honored Contributor

Re: VMS cluster

No. If the switch works, then it is not the reason for the cluster separation.
Test it by accessing one node from the other (SET HOST for DECnet, SET HOST/LAT for LAT, whatever else is set up for non-routed networking).
If nothing works, then no direct Ethernet path exists, and the cluster problem is just a consequence.
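For example, from GREEN (node names as used in this thread; use whichever transports are actually configured on your systems):

```
$ ! Check the direct path to BLUE:
$ SET HOST BLUE          ! DECnet
$ SET HOST/LAT BLUE      ! LAT, if a LAT service is configured
```

If neither connects, the problem is the network path itself, not the cluster configuration.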

Otherwise, if the direct network connection is OK: did you change the SCSSYSTEMID and SCSNODE parameters in MODPARAMS.DAT on the cloned node, and run AUTOGEN afterwards?
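A cloned system disk carries GREEN's identity, so BLUE needs its own entries before AUTOGEN runs. A sketch, with illustrative values (SCSSYSTEMID is conventionally DECnet area * 1024 + node number):

```
! MODPARAMS.DAT on BLUE - values illustrative
SCSNODE = "BLUE"
SCSSYSTEMID = 1026        ! e.g. DECnet address 1.2: 1*1024 + 2
```

The values currently in effect can be checked with SYSGEN:

```
$ MCR SYSGEN
SYSGEN> SHOW SCSNODE
SYSGEN> SHOW SCSSYSTEMID
```

If both nodes are running with the same SCSNODE/SCSSYSTEMID pair, they cannot join one cluster.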

And finally: since you initialized BLUE's system disk, I would go a safer way:
First add BLUE to GREEN as a cluster member. MOP-boot BLUE from GREEN, so you know the cluster is correctly established and tested.
Finally make an image backup of GREEN's system disk to BLUE's disk, then boot BLUE from this cloned disk (don't forget to use the correct root in the SRM boot flags!).

This way you can always boot one system from the other's disk in case of disk errors.
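On an Alpha SRM console, the system root is the first field of the boot flags. Booting BLUE from root SYS1 of the cloned disk might look like this (the device name DKA100 is illustrative):

```
>>> BOOT -FLAGS 1,0 DKA100
```

Here "1" selects root [SYS1...] on the disk, so two nodes can boot from different roots of the same (or a cloned) system disk.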
http://www.mpp.mpg.de/~huber
Hoff
Honored Contributor
Solution

Re: VMS cluster

The use of an unmanaged switch is not relevant here.

What you did here, in the very simplest terms, was expose your disk data to very severe corruption.

This was mentioned earlier, and I'll mention it again:

mess up a cluster configuration, mess up your disk data.

You can pay in time spent reading the manuals and learning, pay in time spent learning through failure and potentially unsnarling and restoring disks, pay for formal classroom training, or pay to enlist more experienced help to set this cluster up for you.

Your data, your choice, of course.