<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: CLUSTER_CONFIG.COM and removing nodes from cluster in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224784#M97814</link>
    <description>&lt;BR /&gt;I know nothing about this, but I wonder about the wisdom of bringing the machine back into the cluster just so that you can @CLUSTER_CONFIG to remove it from the cluster again.&lt;BR /&gt;&lt;BR /&gt;I think John said that CLUSTER_CONFIG will remove the node's system root. That would be [SYS0] on the soon-to-be standalone node's system disk (unless CLUSTER_CONFIG is smart enough not to delete the last root on a disk). Will having no SYS0 on that disk make it harder to configure the box as standalone?&lt;BR /&gt;&lt;BR /&gt;For now, I'll go with John on this and say "don't bother".&lt;BR /&gt;</description>
    <pubDate>Thu, 11 Feb 2010 23:39:23 GMT</pubDate>
    <dc:creator>RBrown_1</dc:creator>
    <dc:date>2010-02-11T23:39:23Z</dc:date>
    <item>
      <title>CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224776#M97806</link>
      <description>We run a 12-node Alpha VMS cluster.  Each&lt;BR /&gt;machine has its own system disk.  We wish&lt;BR /&gt;to remove two of the machines from the cluster&lt;BR /&gt;permanently.  I have followed the method&lt;BR /&gt;given in the OpenVMS Cluster Systems manual,&lt;BR /&gt;section 8.3.  I did an orderly shutdown of the&lt;BR /&gt;first machine and powered it off.  I then&lt;BR /&gt;ran CLUSTER_CONFIG.COM on one of the active&lt;BR /&gt;nodes and selected the REMOVE option.  The prompt&lt;BR /&gt;asked for the SCS node name, no problem.  Then&lt;BR /&gt;it asked "What is the device name for&lt;BR /&gt;&lt;NODE&gt;'s system root?"  The default value is&lt;BR /&gt;the active system's system disk.  I entered&lt;BR /&gt;$202$DKA0:, which is the system disk of the&lt;BR /&gt;machine that was shut down.  The procedure&lt;BR /&gt;complained that "$202$DKA0: is not mounted."&lt;BR /&gt;What should I have entered?&lt;BR /&gt;&lt;BR /&gt;This is the first time I've ever had to&lt;BR /&gt;remove a machine from a cluster and the&lt;BR /&gt;documentation is a little lacking in&lt;BR /&gt;examples.&lt;BR /&gt;&lt;BR /&gt;Gareth</description>
      <pubDate>Thu, 11 Feb 2010 19:56:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224776#M97806</guid>
      <dc:creator>Gareth Williams_2</dc:creator>
      <dc:date>2010-02-11T19:56:03Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224777#M97807</link>
      <description>Gareth,&lt;BR /&gt;&lt;BR /&gt;simplest option, IF THAT NODE IS STILL AVAILABLE!!!: boot it into the cluster and run CLUSTER_CONFIG again.&lt;BR /&gt;&lt;BR /&gt;If that is no longer possible, say so, and we will take it from there.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 11 Feb 2010 20:32:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224777#M97807</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2010-02-11T20:32:29Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224778#M97808</link>
      <description>The REMOVE option in CLUSTER_CONFIG.COM deletes the node-specific root directory and purges the network node information.&lt;BR /&gt;&lt;BR /&gt;The important bit here is to ensure you maintain quorum.  Revise EXPECTED_VOTES using SYSGEN if appropriate (were these voting nodes?).  Use $ SET CLUSTER/EXPECTED_VOTES or Availability Manager to revise the current value in your running cluster.  A controlled shutdown with the REMOVE_NODE option will also update the running expected votes.&lt;BR /&gt;&lt;BR /&gt;Since this node has its own disk, assuming you don't have shared SCSI, removing the root isn't an issue.&lt;BR /&gt;&lt;BR /&gt;Removing the node information from your DECnet database and hosts table may be nice for clean-up; you can also argue that leaving those entries and the root may make it easier to bring a node in for testing or upgrades.&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Feb 2010 20:41:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224778#M97808</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2010-02-11T20:41:48Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224779#M97809</link>
      <description>Gareth,&lt;BR /&gt;&lt;BR /&gt;if you CANNOT bring back that node, just heed Andy's advice.&lt;BR /&gt;And remember that you can NOT reuse that node's NAME, NOR its SCSSYSTEMID (hence, nor its DECnet address), as long as not EVERY cluster node that knew it has been rebooted.&lt;BR /&gt;&lt;BR /&gt;And do not forget to adjust MODPARAMS.DAT on _EVERY_ remaining node to the newly calculated EXPECTED_VOTES.&lt;BR /&gt;&lt;BR /&gt;Success!&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 11 Feb 2010 20:49:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224779#M97809</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2010-02-11T20:49:39Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224780#M97810</link>
      <description>Gareth,&lt;BR /&gt;&lt;BR /&gt;  As you say, each node has its own system disk, presumably local. Once you've shut the node down, the local system disk is no longer available to other nodes (even assuming it was shared cluster-wide in the first place). &lt;BR /&gt;&lt;BR /&gt;  The reason CLUSTER_CONFIG wants to know the device and root is so it can delete them. The actions of REMOVE are:&lt;BR /&gt;&lt;BR /&gt;"&lt;BR /&gt;            o It deletes the node's root directory tree.&lt;BR /&gt;&lt;BR /&gt;            o It removes the node's network information from the network database.&lt;BR /&gt;&lt;BR /&gt;            o If the node has entries in SYS$DEVICES.DAT, any port allocation class for shared SCSI bus access on the node must be re-assigned.&lt;BR /&gt;"&lt;BR /&gt;&lt;BR /&gt;  I'd guess that in your case none of these are necessary. The roots are gone, so effectively deleted; dead network database entries don't matter much; and with 12 nodes I seriously doubt you're using shared SCSI buses.&lt;BR /&gt;&lt;BR /&gt;  The important part of removing a node permanently is:&lt;BR /&gt;&lt;BR /&gt;"&lt;BR /&gt;    If the node being removed is a voting member, EXPECTED_VOTES in each remaining cluster member's MODPARAMS.DAT must be adjusted. The cluster must then be rebooted.&lt;BR /&gt;&lt;BR /&gt;    For instructions, see the "OpenVMS Cluster Systems" manual.&lt;BR /&gt;"&lt;BR /&gt;&lt;BR /&gt;which you have to do manually anyway!&lt;BR /&gt;&lt;BR /&gt;Unfortunately, OpenVMS has never properly supported multiple-system-disk clusters (which is very strange, because that's really the strength of the system vs competing platforms). Every system manager has to figure it out for themselves (and inevitably gets bits wrong). Utilities like CLUSTER_CONFIG make many assumptions, and don't cover cases like yours.&lt;BR /&gt;&lt;BR /&gt;There's no technical reason OpenVMS engineering couldn't create a decent suite of utilities to manage multiple-system-disk clusters, but I guess it's just another case of "accountants win", despite the efforts of some of us to get the functionality hole filled.&lt;BR /&gt;&lt;BR /&gt;All you really need to do is make sure voting is correctly reconfigured for the final cluster state. It's probably also worth checking your site-specific command procedures for references to the missing nodes.&lt;BR /&gt;&lt;BR /&gt;(If you find any, try to make your procedures independent of node names; see lexical functions like F$GETSYI and F$CSID.)</description>
      <pubDate>Thu, 11 Feb 2010 20:51:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224780#M97810</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2010-02-11T20:51:12Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224781#M97811</link>
      <description>Thanks for the responses.  I forgot to&lt;BR /&gt;mention in my original message that&lt;BR /&gt;neither of the two machines being removed&lt;BR /&gt;is a voting member of the cluster.&lt;BR /&gt;&lt;BR /&gt;I think I can bring the system back up.&lt;BR /&gt;I'm doing this remotely, but there should&lt;BR /&gt;be someone in the office.&lt;BR /&gt;&lt;BR /&gt;So the general consensus appears to be:&lt;BR /&gt;1) bring the machine back up&lt;BR /&gt;2) rerun CLUSTER_CONFIG&lt;BR /&gt;3) enter $202$DKA0: as the system disk name&lt;BR /&gt;4) after CLUSTER_CONFIG completes, use&lt;BR /&gt;   SYSMAN to shut the machine down.&lt;BR /&gt;&lt;BR /&gt;Gareth</description>
      <pubDate>Thu, 11 Feb 2010 21:37:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224781#M97811</guid>
      <dc:creator>Gareth Williams_2</dc:creator>
      <dc:date>2010-02-11T21:37:13Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224782#M97812</link>
      <description>Gareth,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;1) bring the machine back up&lt;BR /&gt;&lt;BR /&gt;  I disagree. Why bother? What's it going to achieve?&lt;BR /&gt;&lt;BR /&gt;  If the nodes don't vote, there's really nothing to configure, other than removing the nodes from the network databases. You can do that yourself, or just leave the dead entries.&lt;BR /&gt;&lt;BR /&gt;  Yes, CLUSTER_CONFIG may be able to see the disk and root, but it can't delete them while the system is up, so that part will fail.&lt;BR /&gt;&lt;BR /&gt;  Eventually you may want to do a cluster reboot (rolling?) to eliminate the node from the memories of other nodes, but again, why not let that happen by natural attrition? It's mostly cosmetic.&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Feb 2010 22:56:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224782#M97812</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2010-02-11T22:56:57Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224783#M97813</link>
      <description>As John says above, your work is done here.  Removing the nodes from the networking database is purely cosmetic and can be done quickly later, if desired.&lt;BR /&gt;&lt;BR /&gt;Since these nodes have no votes, there is no potential issue with quorum.&lt;BR /&gt;&lt;BR /&gt;Leave early and have one at home.&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Feb 2010 23:23:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224783#M97813</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2010-02-11T23:23:19Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224784#M97814</link>
      <description>&lt;BR /&gt;I know nothing about this, but I wonder about the wisdom of bringing the machine back into the cluster just so that you can @CLUSTER_CONFIG to remove it from the cluster again.&lt;BR /&gt;&lt;BR /&gt;I think John said that CLUSTER_CONFIG will remove the node's system root. That would be [SYS0] on the soon-to-be standalone node's system disk (unless CLUSTER_CONFIG is smart enough not to delete the last root on a disk). Will having no SYS0 on that disk make it harder to configure the box as standalone?&lt;BR /&gt;&lt;BR /&gt;For now, I'll go with John on this and say "don't bother".&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Feb 2010 23:39:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224784#M97814</guid>
      <dc:creator>RBrown_1</dc:creator>
      <dc:date>2010-02-11T23:39:23Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224785#M97815</link>
      <description>OK, the consensus seems to be to just leave&lt;BR /&gt;it as is.  I will accept the advice of&lt;BR /&gt;those more knowledgeable than me.&lt;BR /&gt;&lt;BR /&gt;Thanks to all,&lt;BR /&gt;Gareth</description>
      <pubDate>Fri, 12 Feb 2010 01:16:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224785#M97815</guid>
      <dc:creator>Gareth Williams_2</dc:creator>
      <dc:date>2010-02-12T01:16:59Z</dc:date>
    </item>
    <item>
      <title>Re: CLUSTER_CONFIG.COM and removing nodes from cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224786#M97816</link>
      <description>Question answered.</description>
      <pubDate>Fri, 12 Feb 2010 01:19:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-config-com-and-removing-nodes-from-cluster/m-p/5224786#M97816</guid>
      <dc:creator>Gareth Williams_2</dc:creator>
      <dc:date>2010-02-12T01:19:10Z</dc:date>
    </item>
  </channel>
</rss>