<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: cluster node hangs when another node shutdown in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900576#M103979</link>
    <description>Forum thread: a node in a two-node OpenVMS cluster hangs whenever the other node is shut down; the discussion covers quorum, EXPECTED_VOTES, VOTES, and the quorum disk.</description>
    <pubDate>Wed, 21 Sep 2016 06:22:36 GMT</pubDate>
    <dc:creator>Steven Schweda</dc:creator>
    <dc:date>2016-09-21T06:22:36Z</dc:date>
    <item>
      <title>cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6899759#M103972</link>
      <description>Dear: we met another problem when configuring OpenVMS clusters on OpenVMS 8.4 Update 1000, IA64 architecture. I have configured a two-node OpenVMS cluster as follows: HWNOD1 uses a SAN storage disk as its system disk; HWNOD2 uses a local disk as its system disk. $1$DGA1 is the system disk from the SAN storage. $1$DGA3 is the quorum disk from the SAN storage. However, each time I shut down one cluster node, the other node fails to respond to any command. It hangs on any command I enter until the shut-down node starts up again. Could you tell me how to solve this problem? Looking forward to your reply. BR TONG</description>
      <pubDate>Mon, 19 Sep 2016 03:57:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6899759#M103972</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-19T03:57:37Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6899764#M103973</link>
      <description>&lt;P&gt;You'd get much more valuable responses from the forums related to OpenVMS System Management.&amp;nbsp; However... you need to be more complete in your description of your cluster.&amp;nbsp; While you've set up two nodes to use individual system disks, this alone will NOT make your system more redundant or resilient.&amp;nbsp; An OpenVMS cluster is not like (m)any other clustering technologies, and one of the core concepts you need to research and learn is "*QUORUM*": put simply, a voting scheme that determines whether enough votes are present.&amp;nbsp; Votes are assigned to the nodes and, in some cases, to a disk that is A) expected to be present and (generally) B) shared between the two systems in the cluster.&lt;/P&gt;&lt;P&gt;Sharing the output from the SHOW CLUSTER utility doesn't present itself well in these forums.&amp;nbsp; I would recommend, instead, providing the output from the SYSMAN utility (from the command prompt: $ MC SYSMAN, which requires a privileged account):&lt;/P&gt;&lt;P&gt;SYSMAN&amp;gt; PARAM SHOW /CLUSTER&lt;/P&gt;&lt;P&gt;SYSMAN&amp;gt; PARAM SHOW /SCS&lt;/P&gt;&lt;P&gt;It would also be beneficial to look into the documentation with a focus on OpenVMS Cluster configurations and OpenVMS system management.&lt;/P&gt;&lt;P&gt;Please understand that there are features and configuration items that a general forum can't decide FOR you.&amp;nbsp; You and your company or organization need to know how these systems work, what their strengths are, and how their setup and configuration can best support your collective requirements.&amp;nbsp; While "we" could help you with what appear to be simple questions, they're really NOT simple, and the cluster must be properly set up for the configuration you need and the way you expect it to act.&lt;/P&gt;&lt;P&gt;Frankly, I would recommend working with local HP resources if you need more in-depth guidance and/or training.&lt;/P&gt;&lt;P&gt;bob&lt;/P&gt;</description>
      <pubDate>Mon, 19 Sep 2016 04:40:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6899764#M103973</guid>
      <dc:creator>Bob Blunt</dc:creator>
      <dc:date>2016-09-19T04:40:08Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900233#M103974</link>
      <description>&lt;P&gt;Dear Bob:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; The detail information of the cluster is as follows:&lt;/P&gt;&lt;P&gt;SYSMAN&amp;gt; PARAM SHOW/CLUSTER&lt;/P&gt;&lt;P&gt;%SYSMAN-I-USEACTNOD, a USE ACTIVE has been defaulted on node HWNOD1&lt;BR /&gt;Node HWNOD1: Parameters in use: ACTIVE&lt;BR /&gt;Parameter Name Current Default Minimum Maximum Unit Dynamic&lt;/P&gt;&lt;P&gt;-------------- ------- ------- ------- ------- ---- -------&lt;BR /&gt;VAXCLUSTER 2 1 0 2 Coded-value&lt;BR /&gt;EXPECTED_VOTES 2 1 1 127 Votes&lt;BR /&gt;VOTES 1 1 0 127 Votes&lt;BR /&gt;DISK_QUORUM "$1$DGA3 " " " " " "ZZZZ" Ascii&lt;BR /&gt;QDSKVOTES 1 1 0 127 Votes&lt;BR /&gt;QDSKINTERVAL 3 3 1 32767 Seconds&lt;BR /&gt;ALLOCLASS 5 0 0 255 Pure-number&lt;BR /&gt;LOCKDIRWT 1 0 0 255 Pure-number&lt;BR /&gt;CLUSTER_CREDITS 32 32 10 128 Credits&lt;BR /&gt;NISCS_CONV_BOOT 0 0 0 1 Boolean&lt;BR /&gt;NISCS_LOAD_PEA0 1 0 0 1 Boolean&lt;BR /&gt;NISCS_USE_LAN 1 1 0 1 Boolean&lt;BR /&gt;NISCS_USE_UDP 1 0 0 1 Boolean&lt;BR /&gt;MSCP_LOAD 1 0 0 16384 Coded-value&lt;BR /&gt;TMSCP_LOAD 0 0 0 3 Coded-value&lt;BR /&gt;MSCP_SERVE_ALL 1 4 0 -1 Bit-Encoded&lt;BR /&gt;TMSCP_SERVE_ALL 0 0 0 -1 Bit-Encoded&lt;BR /&gt;MSCP_BUFFER 1024 1024 256 -1 Coded-value&lt;BR /&gt;MSCP_CREDITS 32 32 2 1024 Coded-value&lt;BR /&gt;TAPE_ALLOCLASS 0 0 0 255 Pure-number&lt;BR /&gt;NISCS_MAX_PKTSZ 8192 8192 576 9180 Bytes&lt;BR /&gt;CWCREPRC_ENABLE 1 1 0 1 Bitmask D&lt;BR /&gt;RECNXINTERVAL 20 20 1 32767 Seconds D&lt;BR /&gt;NISCS_PORT_SERV 0 0 0 256 Bitmask D&lt;BR /&gt;NISCS_UDP_PORT 0 0 0 65535 Pure-number D&lt;BR /&gt;NISCS_UDP_PKTSZ 8192 8192 576 9000 Bytes&lt;BR /&gt;MSCP_CMD_TMO 0 0 0 2147483647 Seconds D&lt;BR /&gt;LOCKRMWT 5 5 0 10 Pure-number D&lt;BR /&gt;&lt;BR /&gt;SYSMAN&amp;gt;&lt;BR /&gt;SYSMAN&amp;gt; PARAM SHOW/SCS&lt;BR /&gt;Node HWNOD1: Parameters in use: ACTIVE&lt;BR /&gt;Parameter Name Current Default Minimum Maximum Unit Dynamic&lt;BR /&gt;-------------- ------- ------- ------- 
------- ---- -------&lt;BR /&gt;SCSBUFFCNT 512 50 0 32767 Entries&lt;BR /&gt;SCSRESPCNT 1000 1000 0 32767 Entries&lt;BR /&gt;SCSMAXDG 576 576 28 985 Bytes&lt;BR /&gt;SCSMAXMSG 216 216 60 985 Bytes&lt;BR /&gt;SCSSYSTEMID 1025 0 0 -1 Pure-number&lt;BR /&gt;SCSSYSTEMIDH 0 0 0 -1 Pure-number&lt;BR /&gt;SCSNODE "HWNOD1 " " " " " "ZZZZ" Ascii&lt;BR /&gt;PASTDGBUF 16 4 1 16 Buffers&lt;BR /&gt;SMCI_PORTS 1 1 0 -1 Bitmask&lt;BR /&gt;TIMVCFAIL 1600 1600 100 65535 10Ms D&lt;BR /&gt;SCSFLOWCUSH 1 1 0 16 Credits D&lt;BR /&gt;PRCPOLINTERVAL 30 30 1 32767 Seconds D&lt;BR /&gt;PASTIMOUT 5 5 1 99 Seconds D&lt;BR /&gt;PANUMPOLL 16 16 1 223 Ports D&lt;BR /&gt;PAMAXPORT 32 32 0 223 Port-number D&lt;BR /&gt;PAPOLLINTERVAL 5 5 1 32767 Seconds D&lt;BR /&gt;PAPOOLINTERVAL 15 15 1 32767 Seconds D&lt;BR /&gt;PASANITY 1 1 0 1 Boolean D&lt;BR /&gt;PANOPOLL 0 0 0 1 Boolean D&lt;BR /&gt;SMCI_FLAGS 0 0 0 -1 Bitmask D&lt;BR /&gt;&lt;BR /&gt;SYSMAN&amp;gt;&lt;BR /&gt;SYSMAN&amp;gt; EXIT&lt;BR /&gt;$ SH DEV D&lt;BR /&gt;Device Device Error Volume Free Trans Mnt&lt;BR /&gt;Name Status Count Label Blocks Count Cnt&lt;BR /&gt;$1$DGA1: (HWNOD1) Mounted 0 HWNOD1 153010864 359 1&lt;BR /&gt;$1$DGA2: (HWNOD1) Online 0&lt;BR /&gt;$1$DGA3: (HWNOD1) Online 0&lt;BR /&gt;$1$DGA8: (HWNOD1) Online 0&lt;BR /&gt;$1$DGA9: (HWNOD1) Online 0&lt;BR /&gt;$1$DGA23: (HWNOD1) Online 0&lt;BR /&gt;$5$DKA100: (HWNOD1) Online 0&lt;BR /&gt;$5$DKA200: (HWNOD1) Mounted 0 (remote mount) 1&lt;BR /&gt;$5$DNA0: (HWNOD1) Offline 0&lt;BR /&gt;$5$DNA1: (HWNOD1) Online wrtlck 0&lt;/P&gt;&lt;P&gt;$&lt;/P&gt;&lt;P&gt;$&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Looking forward to your reply.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; Thanks very much.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;BR&lt;/P&gt;&lt;P&gt;TONG&lt;/P&gt;</description>
      <pubDate>Tue, 20 Sep 2016 07:39:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900233#M103974</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-20T07:39:04Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900423#M103975</link>
      <description>&lt;P&gt;&amp;gt; SYSMAN&amp;gt; PARAM SHOW/CLUSTER&lt;BR /&gt;&amp;gt; [...]&lt;BR /&gt;&amp;gt; Node HWNOD1: [...]&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Ok.&amp;nbsp; And what do you see on the other node?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Also, which node hangs when you shut down which node?&lt;/P&gt;</description>
      <pubDate>Tue, 20 Sep 2016 16:39:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900423#M103975</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2016-09-20T16:39:54Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900535#M103976</link>
      <description>&lt;P&gt;Dear&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; The other node has exactly the same output as HWNOD1, except that its SCSNODE is HWNOD2, not HWNOD1.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; I have done two tests. When HWNOD1 shuts down, HWNOD2 hangs. When HWNOD2 shuts down, HWNOD1 hangs too.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Now this cluster has EXPECTED_VOTES set to 2, and it has 1 quorum disk.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Is that correct?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; I found the cluster can't add more quorum disks: when I use the command "@sys$manager:cluster_config" to enable another quorum disk, the quorum disk set before disappears.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;BR&lt;/P&gt;&lt;P&gt;TONG&lt;/P&gt;</description>
      <pubDate>Wed, 21 Sep 2016 01:42:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900535#M103976</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-21T01:42:47Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900539#M103977</link>
      <description>&lt;P&gt;&amp;gt; The other node has the exactly same output as node1 except its SCSNODE&lt;BR /&gt;&amp;gt; is HWNOD2 not HWNOD1.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; I'd prefer to see the actual output, and make my own comparison.&lt;/P&gt;&lt;P&gt;&amp;gt; Now this cluster has set the expected_votes to 2, and it has 1 quorum&lt;BR /&gt;&amp;gt; disk.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; If each of the two nodes has one vote, and the quorum disk has one&lt;BR /&gt;vote, then I'd expect EXPECTED_VOTES to be three.&amp;nbsp; The quorum would be&lt;BR /&gt;two, so either node plus the quorum disk would, together, have two&lt;BR /&gt;votes, which would satisfy the quorum requirement.&amp;nbsp; As Mike Kier said in&lt;BR /&gt;your 6898169 thread:&lt;/P&gt;&lt;P&gt;&amp;gt; Your system should have EXPECTED_VOTES = 3 and a QUORUM of 2 with each&lt;BR /&gt;&amp;gt; node having a VOTE of 1, unless there is some compelling reason&lt;BR /&gt;&amp;gt; otherwise.&lt;/P&gt;&lt;P&gt;&amp;gt; I found the cluster can't add more quorum disk, [...]&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Having more than one quorum disk would cause more trouble than it&lt;BR /&gt;would solve.&amp;nbsp; The "OpenVMS Cluster Systems" manual explains quorums and&lt;BR /&gt;"a" (or "the") quorum disk:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Rules: Each OpenVMS Cluster system can include only one quorum&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; disk. 
[...]&lt;/P&gt;&lt;P&gt;Also:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; o&amp;nbsp; To permit recovery from failure conditions, the quorum disk&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; must be mounted by all disk watchers.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; o The OpenVMS Cluster can include only one quorum disk.&lt;/P&gt;&lt;P&gt;Are you mounting the quorum disk on each of the cluster member systems?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Have you looked at the "OpenVMS Cluster Systems" manual?&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;A href="http://h20565.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c04623183" target="_blank"&gt;http://h20565.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c04623183&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 21 Sep 2016 02:21:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900539#M103977</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2016-09-21T02:21:40Z</dc:date>
    </item>
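The vote arithmetic Steven spells out above (and the (EXPECTED_VOTES + 2)/2 formula Tong cites later in the thread) can be sketched in a few lines. This is an illustrative calculation only, not an OpenVMS API; the parameter names are the SYSGEN ones from the thread:

```python
def cluster_quorum(expected_votes: int) -> int:
    # OpenVMS derives the cluster quorum from EXPECTED_VOTES using
    # truncating integer division: (EXPECTED_VOTES + 2) // 2.
    return (expected_votes + 2) // 2

# Recommended two-node setup: VOTES = 1 per node, QDSKVOTES = 1,
# hence EXPECTED_VOTES = 3 and a quorum of 2.
assert cluster_quorum(3) == 2

# Either node (1 vote) together with the quorum disk (1 vote)
# reaches the quorum of 2, so the surviving node keeps running.
assert 1 + 1 >= cluster_quorum(3)
```

Note that EXPECTED_VOTES = 2 also yields a quorum of 2, which is why the original configuration needed the quorum disk's vote to actually count.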
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900548#M103978</link>
      <description>&lt;P&gt;Dear:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; I have checked the cluster documentation released by OpenVMS and HDS, and found that the quorum disk is only used in a two-node cluster, and that only 1 quorum disk is recommended.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; I also found that EXPECTED_VOTES and the quorum have the following relation:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; estimated quorum = (EXPECTED_VOTES + 2)/2&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; I have tried to add a quorum disk, and it fails.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp;&amp;nbsp; Since the quorum disk can't be changed, I tried to change EXPECTED_VOTES to 1. However, when the system rebooted, it output an error, and EXPECTED_VOTES was changed back automatically.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; What can I do in this case?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Looking forward to your reply.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;BR&lt;/P&gt;&lt;P&gt;TONG&lt;/P&gt;</description>
      <pubDate>Wed, 21 Sep 2016 03:51:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900548#M103978</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-21T03:51:18Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900576#M103979</link>
      <description>&lt;P&gt;&amp;gt; I have tried to added quorum disk, and it fails.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Eh?&amp;nbsp; As usual, showing actual commands with their actual output can&lt;BR /&gt;be more helpful than vague descriptions or interpretations.&amp;nbsp; (Do you&lt;BR /&gt;mean that you couldn't add a _second_ quorum disk?&amp;nbsp; That restriction is&lt;BR /&gt;documented.&amp;nbsp; You can't do that.)&lt;/P&gt;&lt;P&gt;&amp;gt; Since quorum disk can't be changed,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Why do you want to change it?&amp;nbsp; What is it now?&amp;nbsp; To what would you&lt;BR /&gt;like to change it?&lt;/P&gt;&lt;P&gt;&amp;gt; I have tried to change expected_votes to 1.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; There's little sense in setting EXPECTED_VOTES to some unrealistic&lt;BR /&gt;value.&lt;/P&gt;&lt;P&gt;&amp;gt; However when the system rebooted, it outputs a error,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Should we guess what that error message was, or are you willing to&lt;BR /&gt;tell us?&lt;BR /&gt;&lt;BR /&gt;&amp;gt; and the expected_votes is changed back automatically.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; That's why there's little sense in setting EXPECTED_VOTES to some&lt;BR /&gt;unrealistic value.&amp;nbsp; The cluster software can (and does) count the VOTES&lt;BR /&gt;of the cluster members when they join the cluster.&amp;nbsp; Trying to fool it&lt;BR /&gt;with an unrealistic EXPECTED_VOTES value is a waste of time and effort.&lt;BR /&gt;Why are you trying to set it to 1 (when it should be 3)?&lt;/P&gt;&lt;P&gt;&amp;gt; What can I do in this case?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; I don't know what "this case" is.&amp;nbsp; As before, I'd like to see what&lt;BR /&gt;the following parameters are for each of the two nodes:&lt;/P&gt;&lt;P&gt;VAXCLUSTER&lt;BR /&gt;EXPECTED_VOTES&amp;nbsp; (And, if it's not 3, why not?)&lt;BR 
/&gt;VOTES&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (And, if it's not 1, why not?)&lt;BR /&gt;DISK_QUORUM&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (And, if it's not the same on both nodes, why not?)&lt;BR /&gt;QDSKVOTES&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (And, if it's not 1, why not?)&lt;/P&gt;&lt;P&gt;&amp;gt; Are you mounting the quorum disk on each of the cluster member&lt;BR /&gt;&amp;gt; systems?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Still wondering.&lt;/P&gt;</description>
      <pubDate>Wed, 21 Sep 2016 06:22:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900576#M103979</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2016-09-21T06:22:36Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900616#M103980</link>
      <description>&lt;P&gt;Dear:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; I used the following commands to change EXPECTED_VOTES to 3:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; $ RUN SYS$SYSTEM:SYSMAN&lt;BR /&gt;SYSMAN&amp;gt; SET ENVIRONMENT/CLUSTER&lt;BR /&gt;SYSMAN&amp;gt; PARAM USE CURRENT&lt;BR /&gt;SYSMAN&amp;gt; PARAM SET EXPECTED_VOTES 3&lt;BR /&gt;SYSMAN&amp;gt; PARAM WRITE CURRENT&lt;BR /&gt;SYSMAN&amp;gt; SET ENVIRONMENT/CLUSTER&lt;BR /&gt;SYSMAN&amp;gt; DO @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS&lt;BR /&gt;SYSMAN&amp;gt; EXIT&lt;/P&gt;&lt;P&gt;The method comes from the following address:&lt;/P&gt;&lt;P&gt;&lt;A href="http://h30266.www3.hp.com/odl/i64os/opsys/vmsos84/4477/4477pro_020.html#post_config" target="_blank"&gt;http://h30266.www3.hp.com/odl/i64os/opsys/vmsos84/4477/4477pro_020.html#post_config&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;Then I restarted the cluster. The node that uses a SAN storage disk as its system disk always hits the following problem:&lt;/P&gt;&lt;P&gt;**** OpenVMS I64 Operating System V8.4&amp;nbsp;&amp;nbsp; -BUGCHECK ****&lt;/P&gt;&lt;P&gt;**Bugcheck code =000001cc: INVEXCEPTN, Exception while above ASTDEL&lt;/P&gt;&lt;P&gt;** Crash CPU:00000000 Primary CPU: 00000000 Node Name:HWNOD1&lt;/P&gt;&lt;P&gt;**Highest CPU number:00000007&lt;/P&gt;&lt;P&gt;**Active CPUs:00000000.000000FF&lt;/P&gt;&lt;P&gt;**Current Process:NULL&lt;/P&gt;&lt;P&gt;**Current PSB ID:00000001&lt;/P&gt;&lt;P&gt;**Image Name:&lt;/P&gt;&lt;P&gt;Is the method I used to change EXPECTED_VOTES right?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;BR&lt;/P&gt;&lt;P&gt;TONG&lt;/P&gt;</description>
      <pubDate>Wed, 21 Sep 2016 09:14:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900616#M103980</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-21T09:14:15Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900716#M103981</link>
      <description>&lt;P&gt;Tong,&lt;/P&gt;&lt;P&gt;you've probably overwritten the desired&amp;nbsp;value of EXPECTED_VOTES by running AUTOGEN again.&lt;/P&gt;&lt;P&gt;The INVEXCEPTN crash may have NOTHING to do with clustering at all. Can you boot HWNOD1 from SAN storage ($1$DGA1) as the only node in the cluster ? Did a QUORUM file get generated on $1$DGA3:[000000]QUORUM.DAT - did you ever MOUNT the quorum disk ?&lt;/P&gt;&lt;P&gt;Check with $ SHOW CLUSTER/CONT, then type ADD CLUSTER. What's shown in the QF_VOTE column ?&lt;/P&gt;&lt;P&gt;Can HWNOD2 DIRECTLY access the quorum disk $1$DGA3: (i.e. does&amp;nbsp;HWNOD2 have a fibre channel connection) ?&lt;/P&gt;&lt;P&gt;Volker.&lt;/P&gt;</description>
      <pubDate>Wed, 21 Sep 2016 12:44:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900716#M103981</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2016-09-21T12:44:53Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900937#M103982</link>
      <description>&lt;P&gt;Dear:&lt;/P&gt;&lt;P&gt;Both HWNOD1 and HWNOD2 can access the quorum disk directly.&lt;/P&gt;&lt;P&gt;When I boot only HWNOD1 from the SAN storage system, it enters the following status:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; %PEA0,cluster communication enabled on IP interface, WE0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; %PEA0,successfully initialized with TCP/IP services&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; %PEA0,setting socket option failed.&lt;/P&gt;&lt;P&gt;It always hangs at this step until I boot the other node, HWNOD2; then it can enter the system.&lt;/P&gt;&lt;P&gt;I use TCP/IP for these two nodes to communicate with each other.&lt;/P&gt;&lt;P&gt;Is it wrong?&lt;/P&gt;&lt;P&gt;Looking forward to your reply.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;BR&lt;/P&gt;&lt;P&gt;TONG&lt;/P&gt;</description>
      <pubDate>Thu, 22 Sep 2016 02:55:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900937#M103982</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-22T02:55:07Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900941#M103983</link>
      <description>&lt;P&gt;&amp;gt; It will always hang on this step until I boot the other node:HWNOD2,&lt;BR /&gt;&amp;gt; then it can enter the system.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; That suggests (to me) that the quorum disk is not doing its job.&lt;BR /&gt;Previous questions about your quorum disk remain unanswered.&lt;/P&gt;&lt;P&gt;&amp;gt; %PEA0,setting socket option failed.&lt;/P&gt;&lt;P&gt;&amp;gt; I use TCP/IP for these two node to communicate with each other.&lt;BR /&gt;&amp;gt;&lt;BR /&gt;&amp;gt; Is it wrong?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; I've never used IP for the cluster interconnect, so I know nothing,&lt;BR /&gt;but...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; I don't like the "setting socket option failed" message, but if the&lt;BR /&gt;cluster works with both nodes up, then the cluster interconnect would&lt;BR /&gt;seem to be working properly.&lt;/P&gt;</description>
      <pubDate>Thu, 22 Sep 2016 03:26:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900941#M103983</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2016-09-22T03:26:41Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900951#M103984</link>
      <description>&lt;P&gt;&amp;gt; &amp;gt; It will always hang on this step until I boot the other node:HWNOD2,&lt;BR /&gt;&amp;gt; &amp;gt; then it can enter the system.&lt;BR /&gt;&amp;gt;&lt;BR /&gt;&amp;gt; That suggests (to me) that the quorum disk is not doing its job.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; I can't remember if I ever used a quorum disk in a cluster, so I know&lt;BR /&gt;nothing, but...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; The documentation suggests that "the quorum disk must be mounted by&lt;BR /&gt;all disk watchers".&amp;nbsp; The system (boot) disk is mounted by the boot&lt;BR /&gt;procedure, but if the quorum disk is mounted by the normal start-up&lt;BR /&gt;scripts (like SYSTARTUP_VMS.COM), then it won't be available until the&lt;BR /&gt;system is (mostly) up (_after_ forming or joining the cluster).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; If that's true, then the quorum disk would be useless in _forming_&lt;BR /&gt;the cluster; its only value would be in maintaining the quorum when one&lt;BR /&gt;of the cluster members _leaves_ the cluster.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; So, the question would be this: After both nodes have been booted&lt;BR /&gt;(and are cluster members, and have mounted the quorum disk with its&lt;BR /&gt;QUORUM.DAT file), if you shut down one of the cluster members, does the&lt;BR /&gt;other cluster member continue to work, or does the cluster lose its&lt;BR /&gt;quorum, and freeze the remaining cluster member?&lt;/P&gt;</description>
      <pubDate>Thu, 22 Sep 2016 04:05:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6900951#M103984</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2016-09-22T04:05:18Z</dc:date>
    </item>
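Steven's closing question reduces to the same vote arithmetic. A sketch, assuming the values discussed in this thread (VOTES = 1 per node, QDSKVOTES = 1); whether the quorum disk's vote actually counts depends, as the posts note, on the disk being mounted and watched:

```python
def remains_quorate(expected_votes: int, remaining_votes: int) -> bool:
    """True if the votes still present meet the cluster quorum."""
    quorum = (expected_votes + 2) // 2  # OpenVMS quorum formula
    return remaining_votes >= quorum

# Correct setup (EXPECTED_VOTES = 3): the surviving node's vote plus the
# quorum disk's vote gives 2, meeting the quorum of 2 -> no freeze.
assert remains_quorate(3, 1 + 1)

# Original misconfiguration (EXPECTED_VOTES = 2, quorum disk vote not
# counted): the survivor has only its own vote, 1 < 2, so it freezes.
assert not remains_quorate(2, 1)
```

This matches the observed behavior in the thread: with the quorum disk's vote unavailable, each node froze when its partner shut down.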
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6901138#M103985</link>
      <description>&lt;P&gt;Dear:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; I re-installed the two nodes with votes=3.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Then I mounted the quorum disk with the command: mount /noassist /cluster devname vol_label. The cluster info is as follows:&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; +-------------------------------------------------------------------------------&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;| CLUSTER&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;+--------+-----------+----------+---------+------------+-------------------+----&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;| CL_EXP | CL_QUORUM | CL_VOTES | QF_VOTE | CL_MEMBERS | FORMED | LA&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;+--------+-----------+----------+---------+------------+-------------------+----&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;| 3&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | 2&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | 3 &amp;nbsp; &amp;nbsp; &amp;nbsp;&amp;nbsp; | YES &amp;nbsp; &amp;nbsp; |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 2 | 22-SEP-2016 11:40 | 22-&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;+--------+-----------+----------+---------+------------+-------------------+----&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;Now when I restart either node, the other node still works well.&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;Thanks for your help.&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;BR&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;TONG&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 22 Sep 2016 12:07:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6901138#M103985</guid>
      <dc:creator>albert000</dc:creator>
      <dc:date>2016-09-22T12:07:41Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6901277#M103986</link>
      <description>&lt;P&gt;&amp;gt; I re-install the 2-nodes with votes=3.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; "with votes=3"?&amp;nbsp; Does that mean one vote (VOTES = 1) for each node,&lt;BR /&gt;plus one vote for the quorum disk (QDSKVOTES = 1), so EXPECTED_VOTES =&lt;BR /&gt;3?&amp;nbsp; If not, then what does it mean?&lt;/P&gt;</description>
      <pubDate>Thu, 22 Sep 2016 16:23:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6901277#M103986</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2016-09-22T16:23:40Z</dc:date>
    </item>
    <item>
      <title>Re: cluster node hangs when another node shutdown</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6901736#M103987</link>
      <description>&lt;P&gt;It should be noted that you don't need to have the QUORUM disk mounted once the quorum file ([000000]QUORUM.DAT) has been successfully created.&amp;nbsp; But to CREATE the quorum file after the initial cluster configuration, the quorum disk MUST be mounted system-wide on at least one of the quorum disk watcher nodes at least once (with the cluster up and running without the quorum disk's votes, i.e.&amp;nbsp;QF_VOTE=NO).&lt;/P&gt;&lt;P&gt;Volker.&lt;/P&gt;</description>
      <pubDate>Sat, 24 Sep 2016 08:16:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-node-hangs-when-another-node-shutdown/m-p/6901736#M103987</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2016-09-24T08:16:52Z</dc:date>
    </item>
  </channel>
</rss>

