<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Adding a test-rig node to VMS cluster in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948320#M75602</link>
    <description>I'm planning to add a node to a VMS/Alpha cluster to test/evaluate software, patches, etc., before they get pushed into the production and development environments.&lt;BR /&gt; The cluster is OVMS/Alpha V7.3-2, with two nodes (prod &amp;amp; dev) sharing a common system disk. I use a quorum disk.&lt;BR /&gt; I plan for the third node to be a member of the cluster, with its own V7.3-2 system disk that can be used to test patches, etc.&lt;BR /&gt; I also plan to create an alternate system disk with V8.2 on it for testing and evaluation, so that the third node can be booted from either system disk, sometimes running V7.3-2 and other times running V8.2.&lt;BR /&gt; So the question is: will it freak out the cluster if the test rig joins the cluster occasionally running a different version of VMS than the time before?&lt;BR /&gt; Would it be better to create another node persona (node name and IP) for the V8.2 system, and pretend to be a fourth node when the other disk is booted? Would this create quorum issues?&lt;BR /&gt;&lt;BR /&gt;Thanks for your thoughts!</description>
    <pubDate>Wed, 21 Dec 2005 13:11:59 GMT</pubDate>
    <dc:creator>David Lloyd_4</dc:creator>
    <dc:date>2005-12-21T13:11:59Z</dc:date>
    <item>
      <title>Adding a test-rig node to VMS cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948320#M75602</link>
      <description>I'm planning to add a node to a VMS/Alpha cluster to test/evaluate software, patches, etc., before they get pushed into the production and development environments.&lt;BR /&gt; The cluster is OVMS/Alpha V7.3-2, with two nodes (prod &amp;amp; dev) sharing a common system disk. I use a quorum disk.&lt;BR /&gt; I plan for the third node to be a member of the cluster, with its own V7.3-2 system disk that can be used to test patches, etc.&lt;BR /&gt; I also plan to create an alternate system disk with V8.2 on it for testing and evaluation, so that the third node can be booted from either system disk, sometimes running V7.3-2 and other times running V8.2.&lt;BR /&gt; So the question is: will it freak out the cluster if the test rig joins the cluster occasionally running a different version of VMS than the time before?&lt;BR /&gt; Would it be better to create another node persona (node name and IP) for the V8.2 system, and pretend to be a fourth node when the other disk is booted? Would this create quorum issues?&lt;BR /&gt;&lt;BR /&gt;Thanks for your thoughts!</description>
      <pubDate>Wed, 21 Dec 2005 13:11:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948320#M75602</guid>
      <dc:creator>David Lloyd_4</dc:creator>
      <dc:date>2005-12-21T13:11:59Z</dc:date>
    </item>
    <item>
      <title>Re: Adding a test-rig node to VMS cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948321#M75603</link>
      <description>David,&lt;BR /&gt;&lt;BR /&gt;no need to!&lt;BR /&gt;&lt;BR /&gt;We have essentially the same: a 4-node cluster plus one node that can be configured as needed.&lt;BR /&gt;A: It CAN be added to the cluster (it has its own root on the common system disk)&lt;BR /&gt;- this is used if one node leaves for any reason for some time, but mainly during the relocations we have done: add #5, shut down another, relocate, rejoin; repeat; then #5 leaves again.&lt;BR /&gt;B: It has 2 system disks (2 different VMS versions) with copies of CLUSTER_AUTHORIZE.DAT.&lt;BR /&gt;- boot conversational, set VAXCLUSTER = 2, and it joins nicely.&lt;BR /&gt;- conversational boot &amp;amp; VAXCLUSTER = 0, and it boots standalone (of course, the startup sequence has to distinguish between the cluster and standalone configs).&lt;BR /&gt;&lt;BR /&gt;Make sure, though, that your startup conforms to the rest of the cluster when joining!&lt;BR /&gt;&lt;BR /&gt;DISCLAIMER: use at your own risk. This DOES require that you KNOW what you are doing!&lt;BR /&gt;&lt;BR /&gt;Please keep us informed on your results.&lt;BR /&gt;&lt;BR /&gt;Success.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Wed, 21 Dec 2005 15:29:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948321#M75603</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-12-21T15:29:20Z</dc:date>
    </item>
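    <!-- The conversational-boot switch Jan describes can be sketched as console/SYSBOOT input. VAXCLUSTER is a real SYSGEN parameter (0 = never join a cluster, 2 = always join); the boot device and root below are placeholders for the test node's own configuration. -->

```
! Conversational boot on the test node (Alpha console; DKA100 is a placeholder):
! >>> BOOT -FLAGS 0,1 DKA100

SYSBOOT> SET VAXCLUSTER 2     ! join the cluster on this boot
SYSBOOT> CONTINUE

! ...or, booting the same disk standalone:
SYSBOOT> SET VAXCLUSTER 0     ! boot standalone, outside the cluster
SYSBOOT> CONTINUE
```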
    <item>
      <title>Re: Adding a test-rig node to VMS cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948322#M75604</link>
      <description>There's a doomed/may-work/supported matrix somewhere for different VMS versions in a cluster. I believe that no one is likely to care if the VMS version changes between shutdown and startup for a node, but a totally different identity could avoid some user confusion, as well as playing it safe on the version question.&lt;BR /&gt;&lt;BR /&gt;I normally have the loose, odd-ball systems which may join my cluster set to zero votes, so they can't accidentally cause a quorum loss when they go nuts and/or die.</description>
      <pubDate>Wed, 21 Dec 2005 15:51:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948322#M75604</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2005-12-21T15:51:00Z</dc:date>
    </item>
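    <!-- Steven's zero-votes suggestion corresponds to a short MODPARAMS.DAT fragment in the test node's system root, followed by an AUTOGEN run. A sketch; the EXPECTED_VOTES value of 3 is an assumption based on the original setup (one vote each for prod, dev, and the quorum disk). -->

```
! SYS$SYSTEM:MODPARAMS.DAT on the test node's root (then run AUTOGEN):
VOTES = 0             ! test node contributes no quorum votes
EXPECTED_VOTES = 3    ! assumed: prod + dev + quorum disk; unchanged by this node
```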
    <item>
      <title>Re: Adding a test-rig node to VMS cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948323#M75605</link>
      <description>&amp;gt;will it freak out the cluster if &lt;BR /&gt;&amp;gt;the test-rig joins the cluster &lt;BR /&gt;&amp;gt;occasionally running a different &lt;BR /&gt;&amp;gt;version of VMS than the time before?&lt;BR /&gt;&lt;BR /&gt;No. You can boot whatever version you want (within sanity limits) and bounce back and forth at will. The other nodes won't mind at all.&lt;BR /&gt;&lt;BR /&gt;Just make sure the test node has zero votes and doesn't automatically mount any critical production data (or, more generally, any disks on physically shared storage, like Fibre Channel). This will allow it to come and go at any time without interrupting other nodes, and without being a threat to data integrity.&lt;BR /&gt;&lt;BR /&gt;Be aware that booting from a second system disk is non-trivial. You need to make sure all systems use the same, common cluster environment. It sounds like in your case the test node should hook up to the environment (i.e., define SYSUAF, RIGHTSLIST, etc.) to point to the "main" system disk.&lt;BR /&gt;&lt;BR /&gt;See SYLOGICALS.TEMPLATE for a complete list of files you need to consider. Note that many of these are NOT NEGOTIABLE: unless all nodes agree and use the same physical files, you can get some very odd behaviour.&lt;BR /&gt;</description>
      <pubDate>Wed, 21 Dec 2005 16:49:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948323#M75605</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2005-12-21T16:49:26Z</dc:date>
    </item>
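    <!-- The environment hookup John mentions is normally done in SYS$MANAGER:SYLOGICALS.COM on the test node's own system disk. A minimal sketch, assuming a hypothetical CLU$COMMON logical pointing at the main system disk's common directory; SYLOGICALS.TEMPLATE lists the full set of files to redirect. -->

```
$ ! Fragment for SYS$MANAGER:SYLOGICALS.COM on the test node's system disk.
$ ! CLU$COMMON is a placeholder logical for the main cluster system disk.
$ DEFINE/SYSTEM/EXEC SYSUAF     CLU$COMMON:[SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST CLU$COMMON:[SYSEXE]RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC NETPROXY   CLU$COMMON:[SYSEXE]NETPROXY.DAT
```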
    <item>
      <title>Re: Adding a test-rig node to VMS cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948324#M75606</link>
      <description>Small addendum to the above answer:&lt;BR /&gt;&lt;BR /&gt;The extra system has NO direct connection to any of the disks of the regular cluster config, so there is no way to get to the cluster disks if it is not a cluster member.&lt;BR /&gt;Booting it from the common system disk is done by booting it as a satellite and mounting all disks via MSCP.&lt;BR /&gt;&lt;BR /&gt;A cluster boot from one of the local system disks mounts the cluster-common disk (with the rest of the bootstrap environment and the cluster-common files like SYSUAF etc.) and continues from there with a 'normal' cluster-node startup.&lt;BR /&gt;&lt;BR /&gt;hth.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 22 Dec 2005 06:44:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948324#M75606</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-12-22T06:44:41Z</dc:date>
    </item>
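    <!-- The MSCP serving Jan describes is enabled through SYSGEN parameters on the nodes that physically own the disks, so a satellite or test node can mount them over the cluster interconnect. A sketch of the relevant MODPARAMS.DAT lines; check the serve-mode value against the SYSGEN documentation for your VMS version. -->

```
! MODPARAMS.DAT on the disk-serving nodes (then run AUTOGEN):
MSCP_LOAD = 1          ! load the MSCP server at boot
MSCP_SERVE_ALL = 2     ! serve locally attached disks to other cluster members
```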
    <item>
      <title>Re: Adding a test-rig node to VMS cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948325#M75607</link>
      <description>Success! Finally got the opportunity to do this.&lt;BR /&gt;I made an image copy of the cluster system disk and changed the [sys0] node name using information from the OVMS FAQ. Then, with the hints provided by the forum, I was successful.&lt;BR /&gt;I now have a cluster with three nodes, where one node boots from its own system disk, which is a copy of the original system disk. I can now use this third node to test system patches.&lt;BR /&gt;Thanks for all the help!&lt;BR /&gt;&lt;BR /&gt;--dave</description>
      <pubDate>Mon, 17 Apr 2006 15:50:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/adding-a-test-rig-node-to-vms-cluster/m-p/4948325#M75607</guid>
      <dc:creator>David Lloyd_4</dc:creator>
      <dc:date>2006-04-17T15:50:41Z</dc:date>
    </item>
  </channel>
</rss>

