Community Home > Servers and Operating Systems > Operating Systems > Operating System - OpenVMS > Re: VMS cluster
10-13-2008 11:50 PM
Is it possible to form a VMS cluster over the LAN only? If so, please suggest how.
Note: no Fibre Channel or shared SCSI is available.
Thanks
10-14-2008 12:00 AM
Re: VMS cluster
But I guess you want a homogeneous cluster...
10-14-2008 12:13 AM
Re: VMS cluster
$ @sys$manager:cluster_config_lan
on the new node, but you'll need to know some data (the cluster ID and password).
Or set the corresponding SYSGEN parameters by hand and copy SYS$SYSTEM:CLUSTER_AUTHORIZE.DAT to the system directory of the node to add.
Reboot and you're (almost) done.
Be aware that some files should be shared by all nodes, like SYSUAF and RIGHTSLIST. Also, the license database should be shared by all nodes.
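A minimal sketch of that manual route (exact parameter values depend on your configuration, and the node name GREEN is only an example):

$ ! On the node to be added, set the cluster-related SYSGEN parameters:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET VAXCLUSTER 2          ! always form or join a cluster
SYSGEN> SET NISCS_LOAD_PEA0 1     ! load the LAN cluster port driver
SYSGEN> WRITE CURRENT
SYSGEN> EXIT
$ ! Copy the cluster authorization file from an existing member
$ ! (here via DECnet) into the new node's system directory:
$ COPY GREEN::SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT SYS$COMMON:[SYSEXE]
$ ! Then reboot to join the cluster.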
OpenVMS Developer & System Manager
10-14-2008 12:15 AM
Re: VMS cluster
The servers each have their own system disk.
Could you please suggest how I should proceed with the addition?
I have used CLUSTER_CONFIG to add the node and answered No where shared SCSI or Fibre Channel was asked about.
It should not be a satellite node, I guess.
Thanks
10-14-2008 01:02 AM
Re: VMS cluster
>>>
The servers each have their own system disk.
<<<
>>>
It should not be a satellite node,I guess.
<<<
Correct guess.
Satellites --share-- the system disk by booting over the LAN, which you specified you do NOT want.
You need either the cluster ID & password (if you have some way of knowing them), OR, before joining the cluster, you need to (network-)copy SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT from the existing cluster to the node-to-be-added.
It is wise to also enter the licenses for the new node into the cluster, and then copy the LMF database.
Just before you boot the new node into the cluster, add a DEFINE /SYSTEM /EXEC LMF$LICENSE to point to the COMMON database on the cluster common disk.
Success.
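For example (CLU$COMMON below is a placeholder for wherever your cluster-common copy of the LMF database actually lives; adjust the path to your layout):

$ ! Typically placed in SYS$MANAGER:SYLOGICALS.COM so it is set at boot:
$ DEFINE /SYSTEM /EXEC LMF$LICENSE CLU$COMMON:[SYSEXE]LMF$LICENSE.LDB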
Proost.
Have one on me.
jpe
10-14-2008 04:17 AM
Re: VMS cluster
There is no such disk: no Fibre Channel, no shared SCSI (and no SAN, I guess).
So either copy the files (where you will find some trouble keeping these files - and others - synchronized), or keep them on one system and refer to them from the other (which renders the whole cluster inaccessible if that node fails).
You'll encounter more challenges. To name a few:
Disks to be available to all nodes need to be MSCP-served. This is one of those SYSGEN parameters set by the procedure (or manually). Refer to the documentation for details. To be able to access these disks on the other nodes, they must be mounted /CLUSTER. Shadowing might be possible but imposes an even larger load on your network.
If your LAN is heavily used, be aware that the SCS protocol implies a heartbeat, and that the absence of a reply to it will cause the node to 'disappear' from the cluster. Be sure to set your votes correctly, to prevent a split cluster with all the hazards it implies.
Nodes should have their SCS traffic in the same LAN segment - SCS is NOT IP! Be aware that more and more network equipment is IP-only, and you will no doubt run into trouble if you're running SCS on such a network.
Be aware SCS traffic is not secure.
The nice way is to separate SCS from 'normal' LAN traffic: have the SCS protocol run over a network of its own (doubled, if high availability is a must, which I doubt in this configuration), and use SCACP to restrict SCS to that network only.
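Roughly, the MSCP serving and cluster-wide mounting mentioned above look like this (the device name and volume label are examples only):

MSCP_LOAD = 1         ! in MODPARAMS.DAT on the serving node: load the MSCP server
MSCP_SERVE_ALL = 2    ! serve locally attached disks to the cluster
$ ! After running AUTOGEN and rebooting, mount the disk cluster-wide:
$ MOUNT /CLUSTER $1$DKA100: DATADISK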
OpenVMS Developer & System Manager
10-15-2008 12:28 AM
Re: VMS cluster
Let's just say I have 2 standalone VMS boxes and I want to join the 2 boxes into a cluster over the LAN.
I ran @CLUSTER_CONFIG_LAN on one node, "GREEN", and changed the SYSGEN parameter VAXCLUSTER to 2.
I have copied the file SYS$SYSTEM:CLUSTER_AUTHORIZE.DAT to the other node, "BLUE".
When GREEN was booted up, it was a member of the cluster.
What next ?
Thanks
10-15-2008 01:07 AM
Re: VMS cluster
In the last posting, it was indicated that running CLUSTER_CONFIG_LAN on "GREEN", changing the VAXCLUSTER parameter was all that was necessary for the machine to "join the cluster". It was then asked "I have copied CLUSTER_AUTHORIZE to 'BLUE'. What next?"
For the purpose of being a "member", the same steps are needed on "BLUE" as on "GREEN".
What several other posters have alluded to is that a multi-system disk cluster is a bit more complex to run than a single system disk cluster. One has to establish either:
- commonly accessible files for the authorization files (e.g., SYSUAF, RIGHTSLIST, etc.); or
- means for ensuring that the files stay synchronized (remember, actual system and file protections are based upon UICs, not user names).
This is not a trivial concern. One must ensure that the UAFs of systems joining the cluster do not conflict in their UIC assignments with those already in use on the cluster. The same with RIGHTSLIST identifiers.
Personally, I do this BEFORE I join the node to the cluster. It is a far safer alternative.
One common solution is to establish a small shared volume (MSCP shared if nothing else) and place the UAF and related files (generally including the Queue Manager files) on the shared volume.
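Pointing all nodes at those shared copies is usually done with system logical names, e.g. in SYS$MANAGER:SYLOGICALS.COM (the CLU$COMMON device and directory below are placeholders for your shared volume):

$ DEFINE /SYSTEM /EXEC SYSUAF       CLU$COMMON:[SYSEXE]SYSUAF.DAT
$ DEFINE /SYSTEM /EXEC RIGHTSLIST   CLU$COMMON:[SYSEXE]RIGHTSLIST.DAT
$ DEFINE /SYSTEM /EXEC QMAN$MASTER  CLU$COMMON:[SYSEXE]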
A thorough reading of the OpenVMS Guide to Clustering (available from the OpenVMS web site) is highly recommended.
- Bob Gezelter, http://www.rlgsc.com
10-15-2008 09:35 AM
Re: VMS cluster
And do read the manuals. You're working in the deep end of the proverbial pool here, and mistakes made with settings and configurations of cluster members can and have led to massively corrupted disks.
10-16-2008 02:23 AM
Re: VMS cluster
If you are running a cluster you'll need to take precautions against problems like a hang of the surviving node if one node stops, and against split clusters... How to prevent this from happening is described in the documentation.
Running a 2-node cluster has some problems in itself, but all is covered in the documentation.
Now your current scheme:
You now have a "cluster" of 1 node: GREEN. To add BLUE, check the MODPARAMS.DAT file on GREEN to see what CLUSTER_CONFIG has added there. Copy this data to MODPARAMS.DAT on BLUE, change what's required (I don't think there is anything to change, but I cannot check on my nodes), run AUTOGEN to create the new configuration, copy GREEN's CLUSTER_AUTHORIZE.DAT to BLUE and reboot.
Do the same with RED and YELLOW, or whatever names you gave to the other systems you want to add.
That _should_ be it.
If you boot GREEN by itself, it will take some time before it decides to form a cluster. If you boot BLUE and everything is properly set up, forming a cluster with GREEN will be signalled almost immediately.
Any other system will have the same behaviour.
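As a sketch, the cluster-related MODPARAMS.DAT entries on BLUE might look like this (example values for a simple two-node cluster; check GREEN's copy and the documentation before trusting them):

! SYS$SYSTEM:MODPARAMS.DAT additions on BLUE:
VAXCLUSTER = 2
NISCS_LOAD_PEA0 = 1
EXPECTED_VOTES = 2
VOTES = 1

Then run @SYS$UPDATE:AUTOGEN GETDATA REBOOT to generate the new parameters and reboot.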
OpenVMS Developer & System Manager
10-16-2008 05:33 AM
Re: VMS cluster
What I did was take an image backup of GREEN and restore it on BLUE.
I shut down GREEN and changed the IP, DECnet address, etc.
I booted up BLUE and then GREEN.
While booting up, each waits to form/join the cluster, but both booted up as two separate clusters.
Note : the networks are connected by an unmanaged switch for testing.
Could this be the reason ?
Please suggest
Thanks
10-16-2008 07:02 AM
Re: VMS cluster
Test it by accessing one of the nodes from the other (SET HOST for DECnet, SET HOST/LAT for LAT, whatever else is set up for non-routed networking).
If nothing works, then no direct Ethernet path exists, and the cluster problem is just a consequence.
Otherwise, if the direct network connection is OK: did you change the SCSSYSTEMID and SCSNODE parameters in MODPARAMS, followed by an AUTOGEN?
And finally: since you initialized BLUE's system disk, I would go a safer way:
First add BLUE to GREEN as a cluster member. MOP-boot BLUE from GREEN, so you know the cluster is correctly established and tested.
Finally make an image backup of GREEN's system disk to BLUE, then boot BLUE from this cloned disk (don't forget to use the correct root in the SRM boot flags!).
This way you can always boot one system from the other's disk in case of disk errors.
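To see what each node currently believes about its own identity and the cluster, something like:

$ MCR SYSGEN SHOW SCSNODE       ! the node's SCS name
$ MCR SYSGEN SHOW SCSSYSTEMID   ! must be unique per node
$ SHOW CLUSTER                  ! lists the members this node can see

If GREEN and BLUE each show only themselves in SHOW CLUSTER, they never heard each other's SCS traffic.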
10-16-2008 08:31 AM
What you did here, in the very simplest terms, was expose your disk data to very severe corruption.
This was mentioned earlier, and I'll mention it again:
mess up a cluster configuration, mess up your disk data.
You can pay in time spent reading the manuals and the associated time spent learning, pay in terms of time spent learning through failure and potentially spent unsnarling and restoring disks, pay for formal classroom training, or pay in terms of enlisting more experienced help to set this cluster up for you.
Your data, your choice, of course.