Operating System - OpenVMS
Re: VMS cluster
10-16-2008 02:23 AM
Re: VMS cluster
Follow the recommendations made by others: read, and _fully understand_, the documentation.
If you are running a cluster you'll need to take precautions against problems such as a hang of the surviving node when another node stops, or a partitioned ("split") cluster. How to prevent this from happening is described in the documentation.
Running a two-node cluster has some problems of its own, but all of this is covered in the documentation.
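As one illustration of the kind of precaution the documentation describes: a two-node cluster is commonly given a quorum disk, so the surviving node keeps quorum when the other stops. A MODPARAMS.DAT sketch (the device name and vote counts below are hypothetical examples, not taken from this thread):

```
! MODPARAMS.DAT fragment -- illustrative values only
! Two-node cluster with a quorum disk: each node has 1 vote,
! the quorum disk contributes 1 more, so one node plus the
! quorum disk (2 of 3 expected votes) keeps the cluster alive.
VOTES = 1
EXPECTED_VOTES = 3
DISK_QUORUM = "$1$DGA100"   ! hypothetical quorum-disk device name
QDSKVOTES = 1
```

Run AUTOGEN after editing MODPARAMS.DAT so the values take effect; the documentation covers how to choose these values for your configuration.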
Now, your current scheme:
You now have a "cluster" of one node: GREEN. To add BLUE, check the MODPARAMS.DAT file on GREEN to see what CLUSTER_CONFIG has added there. Copy this data to MODPARAMS.DAT on BLUE, change what's required (I don't think there is anything to change, but I cannot check on my nodes), run AUTOGEN to create the new configuration, copy GREEN's CLUSTER_AUTHORIZE.DAT to BLUE, and reboot.
Do the same with RED and YELLOW, or whatever names you gave the other systems you want to add.
That _should_ be it.
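The steps above might look roughly like this in DCL (device names and roots are placeholders; adjust them for your own disks and system roots):

```
$! On GREEN: carry the cluster parameters over to BLUE
$! (BLUE$DKA0 and the [SYS0...] root are example names)
$ COPY SYS$SPECIFIC:[SYSEXE]MODPARAMS.DAT -
       BLUE$DKA0:[SYS0.SYSEXE]MODPARAMS.DAT
$ COPY SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT -
       BLUE$DKA0:[VMS$COMMON.SYSEXE]CLUSTER_AUTHORIZE.DAT
$! On BLUE: regenerate system parameters, then reboot
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT
```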
If you boot GREEN by itself, it will take some time before it decides to form a cluster. If you boot BLUE and everything is properly set up, forming a cluster with GREEN will be signalled almost immediately.
Any other system will show the same behaviour.
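Once the nodes are up, membership can be checked with the SHOW CLUSTER utility, for example:

```
$! One-shot snapshot of cluster members and their state
$ SHOW CLUSTER
$! Or watch membership changes as they happen
$ SHOW CLUSTER/CONTINUOUS
```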
Willem Grooters
OpenVMS Developer & System Manager
10-16-2008 05:33 AM
Re: VMS cluster
Thanks.
What I did was take an image backup of GREEN and restore it onto BLUE.
I shut down GREEN and changed the IP address, DECnet address, etc.
Then I booted up BLUE, and then GREEN.
While booting up, each waits to form/join the cluster, but both come up as two separate clusters.
Note: the networks are connected by an unmanaged switch for testing.
Could this be the reason?
Please suggest.
Thanks
10-16-2008 07:02 AM
Re: VMS cluster
No, if the switch works, then it is not the reason for the cluster separation.
Test it by accessing one of the nodes from the other (SET HOST for DECnet, SET HOST/LAT for LAT, or whatever else is set up for non-routed networking).
If nothing works, then no direct Ethernet path exists, and the cluster problem is just a consequence of that.
Otherwise, if the direct network connection is OK: did you change the SCSSYSTEMID and SCSNODE parameters in MODPARAMS.DAT, followed by a run of AUTOGEN?
And finally: since you initialized BLUE's system disk, I would go a safer way:
First add BLUE to GREEN as a cluster member, and MOP-boot BLUE from GREEN, so you know the cluster is correctly established and tested.
Finally, make an image backup of GREEN's system disk onto BLUE's, then boot BLUE from this cloned disk (don't forget to use the correct root in the SRM boot flags!).
This way you can always boot one system from the other's disk in case of disk errors.
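The connectivity and parameter checks described above might be done along these lines (node names follow the thread's GREEN/BLUE convention; the file path is the usual per-root location, adjust for your system root):

```
$! From GREEN, check the direct LAN path to BLUE
$ SET HOST BLUE           ! DECnet
$ SET HOST/LAT BLUE       ! LAT, if configured
$! Confirm each node has its own SCS identity in MODPARAMS.DAT
$ SEARCH SYS$SPECIFIC:[SYSEXE]MODPARAMS.DAT "SCSNODE","SCSSYSTEMID"
```

If SCSNODE or SCSSYSTEMID are identical on both nodes (as they would be after a straight image restore), the nodes cannot join the same cluster; change them and run AUTOGEN on each node.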
http://www.mpp.mpg.de/~huber
10-16-2008 08:31 AM
Solution
The use of an unmanaged switch is not relevant here.
What you did here, in the very simplest terms, was expose your disk data to very severe corruption.
This was mentioned earlier, and I'll mention it again:
mess up a cluster configuration, mess up your disk data.
You can pay in time spent reading the manuals and the associated time spent learning; pay in time spent learning through failure, and potentially in unsnarling and restoring disks; pay for formal classroom training; or pay to enlist more experienced help to set this cluster up for you.
Your data, your choice, of course.
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP