<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Cluster heartbeat in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219913#M47053</link>
    <description>Uwe,&lt;BR /&gt;I've forwarded your thanks to Verell Boaen, who leads the team handling PEDRIVER.</description>
    <pubDate>Fri, 19 Mar 2004 12:26:03 GMT</pubDate>
    <dc:creator>Keith Parris</dc:creator>
    <dc:date>2004-03-19T12:26:03Z</dc:date>
    <item>
      <title>Cluster heartbeat</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219908#M47048</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I've created a two-node cluster; each node has 1 FDDI and two 1Gb Ethernet NICs.&lt;BR /&gt;&lt;BR /&gt;The cluster is up and running, and now I am trying to find out which of the NICs will be responsible for the cluster heartbeat if I remove the FDDI ring.&lt;BR /&gt;&lt;BR /&gt;Can anyone help me in this matter?&lt;BR /&gt;&lt;BR /&gt;Regards, Torfi.</description>
      <pubDate>Tue, 16 Mar 2004 07:07:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219908#M47048</guid>
      <dc:creator>Torfi Olafur Sverrisson_1</dc:creator>
      <dc:date>2004-03-16T07:07:47Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster heartbeat</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219909#M47049</link>
      <description>Torfi,&lt;BR /&gt;Could you give a little more detail on the type of your cluster or your cluster setup/configuration?&lt;BR /&gt;&lt;BR /&gt;Since it's a 2-node cluster, I bet you have a quorum disk? Am I right in concluding this?&lt;BR /&gt;&lt;BR /&gt;It should not be an issue for you to remove the FDDI loop; however, I would need more details before I can confirm this :-)&lt;BR /&gt;&lt;BR /&gt;What is your intended use of the FDDI loop?&lt;BR /&gt;&lt;BR /&gt;I am sorry, I am asking more questions rather than replying. I just wanted to get as many details as possible and throw this out to everyone here, so that they can all offer their perspective :-)&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mobeen</description>
      <pubDate>Tue, 16 Mar 2004 07:18:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219909#M47049</guid>
      <dc:creator>Mobeen_1</dc:creator>
      <dc:date>2004-03-16T07:18:28Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster heartbeat</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219910#M47050</link>
      <description>If you want to see whether the LAVC protocol (Ethernet protocol type 60-07) is running on all cards (the FDDI and the 2 NICs), the simplest way is&lt;BR /&gt;$ ANALYZE/SYSTEM&lt;BR /&gt;SDA&gt; SHOW LAN&lt;BR /&gt;and check that you have 60-07 and LAVC listed for each card.&lt;BR /&gt;&lt;BR /&gt;By default, the cluster protocol runs on all interfaces.&lt;BR /&gt;You can stop it on a given interface with the example programs LAVC$START_BUS and LAVC$STOP_BUS in SYS$EXAMPLES:.&lt;BR /&gt;If you have VMS 7.3 or later, you have a better way: play with priorities,&lt;BR /&gt;$ MC SCACP&lt;BR /&gt;SCACP&gt; SET LAN_DEVICE EWA/PRIO=4&lt;BR /&gt;SCACP&gt; SET LAN_DEVICE EWB/PRIO=2&lt;BR /&gt;&lt;BR /&gt;or just stop the protocol on certain cards:&lt;BR /&gt;$ MC SCACP&lt;BR /&gt;SCACP&gt; STOP LAN_DEVICE EWA&lt;BR /&gt;&lt;BR /&gt;
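A quick way to check the result afterwards is something like this (the EWA/EWB device names above and below are just examples from my box, so substitute your own, and the exact display varies by version):&lt;BR /&gt;$ MC SCACP&lt;BR /&gt;SCACP&gt; SHOW LAN_DEVICE  ! each local LAN device, with its state and priority&lt;BR /&gt;SCACP&gt; SHOW CHANNEL  ! the channels formed to each remote node&lt;BR /&gt;SCACP&gt; EXIT</description>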
      <pubDate>Tue, 16 Mar 2004 07:29:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219910#M47050</guid>
      <dc:creator>labadie_1</dc:creator>
      <dc:date>2004-03-16T07:29:08Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster heartbeat</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219911#M47051</link>
      <description>It would be helpful to know what OpenVMS version you are running. There were a lot of changes to PEDRIVER at 7.3 and above. Given you have Gigabit Ethernet, I'll assume you're most likely at 7.3 or above.&lt;BR /&gt;&lt;BR /&gt;(BTW, there is much more complex communication between OpenVMS Cluster nodes than the simple heartbeat one might see in a basic fail-over cluster under Windows or UNIX, and this communication includes such things as distributed lock manager traffic.)&lt;BR /&gt;&lt;BR /&gt;PEDRIVER is the portion of OpenVMS code which handles inter-node communications in a cluster over a LAN.&lt;BR /&gt;&lt;BR /&gt;Based on packet loss history, capacity, and latency, PEDRIVER selects a set of optimal channels to a remote node, called the Equivalent Channel Set (ECS), and transmits round-robin across all of the channels in the ECS. (At the same time, it can receive on all channels, whether they're in the ECS for transmitting or not.) This is documented in Appendix G of the OpenVMS Cluster Systems Manual at &lt;A href="http://h71000.www7.hp.com/doc/731FINAL/4477/4477pro_034.html#congestion_appendix" target="_blank"&gt;http://h71000.www7.hp.com/doc/731FINAL/4477/4477pro_034.html#congestion_appendix&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;To easily see which path(s) PEDRIVER is using to transmit from each node to each of the other nodes at a given point in time, run the SHOW_PATHS_ECS.COM procedure from the [KP_CLUSTERTOOLS] directory on the V6 Freeware CD that shipped with 7.3-2 and is also available on the Web at &lt;A href="http://h71000.www7.hp.com/openvms/freeware/" target="_blank"&gt;http://h71000.www7.hp.com/openvms/freeware/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;
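Once you've copied the procedure somewhere convenient, running it is just (SYS$MANAGER: below is my example placement, not a requirement):&lt;BR /&gt;$ @SYS$MANAGER:SHOW_PATHS_ECS.COM&lt;BR /&gt;On 7.3 and above you can also get a raw per-channel view with SCACP:&lt;BR /&gt;$ MC SCACP&lt;BR /&gt;SCACP&gt; SHOW CHANNEL&lt;BR /&gt;SCACP&gt; EXIT</description>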
      <pubDate>Wed, 17 Mar 2004 12:54:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219911#M47051</guid>
      <dc:creator>Keith Parris</dc:creator>
      <dc:date>2004-03-17T12:54:30Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster heartbeat</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219912#M47052</link>
      <description>Keith,&lt;BR /&gt;the additions to PEDRIVER (it now reports excessive packet loss) have been very useful during my last assignment. We had attempted to migrate from 10 Mbit to 100 Mbit at a customer's site. After seeing that message, I used good old DECnet's DTSEND and 'MCR NCP SHOW KNOWN LINES COUNTERS' to diagnose everything - it turned out to be a flow-control mismatch.&lt;BR /&gt;&lt;BR /&gt;
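For anyone who hits the same symptoms, the counter check is simply this (the 'Send failure' counter name below is quoted from memory and may differ slightly by DECnet version):&lt;BR /&gt;$ MCR NCP&lt;BR /&gt;NCP&gt; SHOW KNOWN LINES COUNTERS&lt;BR /&gt;NCP&gt; EXIT&lt;BR /&gt;Run DTSEND to generate traffic, take the counters before and after, and watch for climbing collision and 'Send failure' counts on the affected line.&lt;BR /&gt;&lt;BR /&gt;Please forward my thanks to whoever has made the improvement!</description>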
      <pubDate>Wed, 17 Mar 2004 13:07:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219912#M47052</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-03-17T13:07:27Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster heartbeat</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219913#M47053</link>
      <description>Uwe,&lt;BR /&gt;I've forwarded your thanks to Verell Boaen, who leads the team handling PEDRIVER.</description>
      <pubDate>Fri, 19 Mar 2004 12:26:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/cluster-heartbeat/m-p/3219913#M47053</guid>
      <dc:creator>Keith Parris</dc:creator>
      <dc:date>2004-03-19T12:26:03Z</dc:date>
    </item>
  </channel>
</rss>