<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: OpenVMS system crashes after configuring the cluster service in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898477#M103944</link>
    <description>&lt;P&gt;Dear Dave:&lt;/P&gt;&lt;P&gt;Thanks for your reply.&lt;/P&gt;&lt;P&gt;I have changed the volume labels of the two servers as follows:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;FCNOD1$DKA200 VOLUME LABEL=FCNOD1&lt;/LI&gt;&lt;LI&gt;FCNOD2$DKA200 VOLUME LABEL=FCNOD2&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;However, the problem still occurs.&lt;/P&gt;&lt;P&gt;When I configured the cluster, I set the ALLOCLASS parameter of both nodes to the same value.&lt;/P&gt;&lt;P&gt;Should I set them to different values when both servers are set up as boot servers?&lt;/P&gt;&lt;P&gt;Looking forward to your reply.&lt;/P&gt;&lt;P&gt;Thanks very much!&lt;/P&gt;&lt;P&gt;Br&lt;/P&gt;&lt;P&gt;Tong&lt;/P&gt;</description>
    <pubDate>Wed, 14 Sep 2016 02:08:26 GMT</pubDate>
    <dc:creator>albert000</dc:creator>
    <dc:date>2016-09-14T02:08:26Z</dc:date>
    <item>
      <title>OpenVMS system crashes after configuring the cluster service</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898136#M103940</link>
      <description>&lt;P&gt;Dear all:&lt;/P&gt;&lt;P&gt;We have run into a new problem while configuring the OpenVMS cluster.&lt;/P&gt;&lt;P&gt;The cluster contains two nodes (FCNOD1 and FCNOD2), which use TCP/IP to communicate.&lt;/P&gt;&lt;P&gt;I configured FCNOD1 first, and it then waited for the second node.&lt;/P&gt;&lt;P&gt;Once I had configured FCNOD2, FCNOD1 could find it, but it soon went into a coredump state; the details are as follows:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="FCNOD1.png" style="width: 581px;"&gt;&lt;img src="https://community.hpe.com/t5/image/serverpage/image-id/83823i5AF4F7C367A380D4/image-size/large?v=v2&amp;amp;px=2000" role="button" title="FCNOD1.png" alt="FCNOD1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;FCNOD2 is now in the waiting state; the details are as follows:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="FCNOD2.png" style="width: 578px;"&gt;&lt;img src="https://community.hpe.com/t5/image/serverpage/image-id/83824iC852C74D2E1141C3/image-size/large?v=v2&amp;amp;px=2000" role="button" title="FCNOD2.png" alt="FCNOD2.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I have restarted FCNOD1 several times, but each time it enters the same abnormal state.&lt;/P&gt;&lt;P&gt;Could you please tell me why the first node crashes with a coredump?&lt;/P&gt;</description>
      <pubDate>Tue, 13 Sep 2016 01:33:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898136#M103940</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-13T01:33:55Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS system crashes after configuring the cluster service</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898141#M103941</link>
      <description>&lt;P&gt;&amp;nbsp;&amp;nbsp; Others' opinions may differ, but I find it much easier to deal with&lt;BR /&gt;plain text than with pictures of plain text.&lt;/P&gt;&lt;P&gt;&amp;gt; [...] status = 0072832C&lt;/P&gt;&lt;P&gt;alp $ exit %x0072832C&lt;BR /&gt;%MOUNT-F-DIFVOLMNT, different volume already mounted on this device&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; help /mess DIFVOLMNT&lt;/P&gt;&lt;P&gt;&amp;gt; The cluster contains 2 nodes [...]&lt;/P&gt;&lt;P&gt;&amp;nbsp; Not a very detailed description of anything.&amp;nbsp; Given that error&lt;BR /&gt;status, it might be helpful to know something about the system disk(s)&lt;BR /&gt;of these two systems: what, where, connected how, label(s), and so on.&lt;/P&gt;&lt;P&gt;&amp;gt; [...] I64 [...]&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Some hints as to the server model(s) (beyond "I64") might also be&lt;BR /&gt;interesting.&lt;/P&gt;</description>
      <pubDate>Tue, 13 Sep 2016 04:15:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898141#M103941</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2016-09-13T04:15:41Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS system crashes after configuring the cluster service</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898168#M103942</link>
      <description>&lt;P&gt;Thanks for your reply.&lt;/P&gt;&lt;P&gt;Some more information about the cluster:&lt;/P&gt;&lt;P&gt;FCNOD1: installed in an HP C7000 blade server with two local disks, DKA100 and DKA200. OpenVMS 8.4 is installed on DKA200.&lt;/P&gt;&lt;P&gt;FCNOD2: installed in an HP C7000 blade server with two local disks, DKA100 and DKA200. OpenVMS 8.4 is installed on DKA200.&lt;/P&gt;&lt;P&gt;I connect these two nodes to a SAN storage system and map six LUNs to them; the second LUN is used as the quorum disk on both nodes.&lt;/P&gt;&lt;P&gt;Please let me know if the above information is sufficient.&lt;/P&gt;&lt;P&gt;Looking forward to your reply.&lt;/P&gt;</description>
      <pubDate>Tue, 13 Sep 2016 07:28:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898168#M103942</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-13T07:28:22Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS system crashes after configuring the cluster service</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898448#M103943</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;So you are using a local disk for the system disk on each system? That is not normally how a "regular" VMS cluster is done - there is usually a single common system disk in shared storage (a SAN disk, in your case) that both systems boot off of, albeit each system has its own "root" that it boots into. You may wish to review the "&lt;EM&gt;Guidelines for OpenVMS Cluster Configurations&lt;/EM&gt;" manual.&lt;/P&gt;&lt;P&gt;If you do intend for each system to have its own local system disk, which makes things like applying patches twice as hard, then you will want to rename the volume label on at least one of your system disks: it would seem that your configuration is trying to serve those system disks across the cluster to the other node, and it cannot mount the second one coming up because they have the same label. The easiest way to change the label is probably to boot off the DVD, get to the place to issue DCL commands, and use:&lt;/P&gt;&lt;P&gt;$ SET VOLUME/LABEL=x DKA200:&lt;/P&gt;&lt;P&gt;where x is the new name.&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Dave&lt;/P&gt;</description>
      <pubDate>Tue, 13 Sep 2016 20:52:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898448#M103943</guid>
      <dc:creator>David R. Lennon</dc:creator>
      <dc:date>2016-09-13T20:52:52Z</dc:date>
    </item>
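The relabeling suggested in the reply above can be sketched as a short DCL session. This is a hedged example, not a verbatim procedure from the thread: the label FCNOD1 is illustrative, and it assumes the disk is mounted privately (e.g. from the DVD-booted environment) rather than serving as the live system disk.

```
$ ! Inspect the device and its current volume label
$ SHOW DEVICE/FULL DKA200:
$ ! Mount the volume without having to name its current label
$ MOUNT/OVERRIDE=IDENTIFICATION DKA200:
$ ! Write the new label (FCNOD1 is an example value)
$ SET VOLUME/LABEL=FCNOD1 DKA200:
$ DISMOUNT DKA200:
```

MOUNT/OVERRIDE=IDENTIFICATION is useful here precisely because the point of the exercise is that the current label clashes with another disk's.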
    <item>
      <title>Re: OpenVMS system crashes after configuring the cluster service</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898477#M103944</link>
      <description>&lt;P&gt;Dear Dave:&lt;/P&gt;&lt;P&gt;Thanks for your reply.&lt;/P&gt;&lt;P&gt;I have changed the volume labels of the two servers as follows:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;FCNOD1$DKA200 VOLUME LABEL=FCNOD1&lt;/LI&gt;&lt;LI&gt;FCNOD2$DKA200 VOLUME LABEL=FCNOD2&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;However, the problem still occurs.&lt;/P&gt;&lt;P&gt;When I configured the cluster, I set the ALLOCLASS parameter of both nodes to the same value.&lt;/P&gt;&lt;P&gt;Should I set them to different values when both servers are set up as boot servers?&lt;/P&gt;&lt;P&gt;Looking forward to your reply.&lt;/P&gt;&lt;P&gt;Thanks very much!&lt;/P&gt;&lt;P&gt;Br&lt;/P&gt;&lt;P&gt;Tong&lt;/P&gt;</description>
      <pubDate>Wed, 14 Sep 2016 02:08:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898477#M103944</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-14T02:08:26Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS system crashes after configuring the cluster service</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898490#M103945</link>
      <description>&lt;P&gt;&amp;gt; I have changed the volume label of the two server [...]&lt;/P&gt;&lt;P&gt;Have you shut down both systems since this change? (A cluster member&lt;BR /&gt;can remember many things, even when other cluster members go away and&lt;BR /&gt;then return.)&lt;/P&gt;&lt;P&gt;&amp;gt; Should I set them to different value when both servers are set to boot&lt;BR /&gt;&amp;gt; server?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; I believe that ALLOCLASS is significant when systems use shared&lt;BR /&gt;storage devices.&amp;nbsp; I would not expect the device you boot from to affect&lt;BR /&gt;this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; I have a small collection of Alpha and IA64 systems, each of which&lt;BR /&gt;has its own system (boot) disk.&amp;nbsp; I have no such trouble with my cluster.&lt;BR /&gt;So long as each disk has (and has had since the first cluster member&lt;BR /&gt;booted) a unique label, I would not expect a problem like this.&lt;/P&gt;</description>
      <pubDate>Wed, 14 Sep 2016 03:52:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898490#M103945</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2016-09-14T03:52:27Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS system crashes after configuring the cluster service</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898491#M103946</link>
      <description>&lt;P&gt;We use a different allocation class for each node in the cluster, as well as a different volume label for local disks. Typically we also make all of the disk devices on a SAN use the same allocation class (e.g. $1$DGA100, $1$DGA101, etc.) and another allocation class for tape drives (e.g. $2$MGA2).&lt;/P&gt;&lt;P&gt;If each of the nodes has a DKA200 local SCSI disk, then you will see them as $3$DKA200, $4$DKA200, $5$DKA200, and so on when you do a SHOW DEVICE D.&lt;/P&gt;</description>
      <pubDate>Wed, 14 Sep 2016 03:53:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898491#M103946</guid>
      <dc:creator>Mark Hurcombe</dc:creator>
      <dc:date>2016-09-14T03:53:09Z</dc:date>
    </item>
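The per-node allocation class described above is the ALLOCLASS system parameter. A minimal DCL sketch of setting it with SYSGEN follows; the values 3 and 4 are assumptions for illustration (any distinct nonzero values per node would do), and a reboot is required before the change takes effect.

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT          ! start from the current parameter set
SYSGEN> SET ALLOCLASS 3      ! e.g. 3 on FCNOD1; use a different value, say 4, on FCNOD2
SYSGEN> WRITE CURRENT        ! save; takes effect at the next reboot
SYSGEN> EXIT
```

On a managed system the more durable route is to put ALLOCLASS = n in SYS$SYSTEM:MODPARAMS.DAT and run AUTOGEN, so the value survives later AUTOGEN passes.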
    <item>
      <title>Re: OpenVMS system crashes after configuring the cluster service</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898556#M103947</link>
      <description>&lt;P&gt;Finally I changed the allocation classes to different values, and now it works.&lt;/P&gt;&lt;P&gt;The cluster starts up successfully.&lt;/P&gt;&lt;P&gt;Thanks very much for your help.&lt;/P&gt;&lt;P&gt;BR&lt;/P&gt;&lt;P&gt;TONG&lt;/P&gt;</description>
      <pubDate>Wed, 14 Sep 2016 09:25:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-system-crash-down-after-configure-the-cluster-service/m-p/6898556#M103947</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-14T09:25:29Z</dc:date>
    </item>
  </channel>
</rss>

