<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Partation for OpenVMS Cluster in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904833#M68793</link>
    <description>Do you mean the cluster hung, or did both nodes continue?&lt;BR /&gt;Is there a quorum disk? How many votes does each node have?</description>
    <pubDate>Wed, 01 Jun 2005 05:19:23 GMT</pubDate>
    <dc:creator>Ian Miller.</dc:creator>
    <dc:date>2005-06-01T05:19:23Z</dc:date>
    <item>
      <title>Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904832#M68792</link>
      <description>At my customer's site, two ES40s and one RA3000 form a SCSI-interconnected cluster; the two members are connected via their NICs. Yesterday someone powered off the Ethernet switch, and I found that a cluster partition occurred: the two ES40s were no longer together, each could see only itself.&lt;BR /&gt;On the Oracle side, one rollback segment was in conflict and we could not use SQL*Plus.&lt;BR /&gt;My question:&lt;BR /&gt;when the cluster partition occurred, should one member have been forced to crash?&lt;BR /&gt;&lt;BR /&gt;Charles</description>
      <pubDate>Wed, 01 Jun 2005 04:21:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904832#M68792</guid>
      <dc:creator>Song_Charles</dc:creator>
      <dc:date>2005-06-01T04:21:38Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904833#M68793</link>
      <description>Do you mean the cluster hung, or did both nodes continue?&lt;BR /&gt;Is there a quorum disk? How many votes does each node have?</description>
      <pubDate>Wed, 01 Jun 2005 05:19:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904833#M68793</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-06-01T05:19:23Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904834#M68794</link>
      <description>Charles,&lt;BR /&gt;&lt;BR /&gt;if you had a correctly configured 2-node cluster running, you cannot get a partitioned cluster just by interrupting the SCA network connection. Each node will time out the other node and remove it from the cluster after RECNXINTERVAL seconds. What happens then depends on your VOTES and EXPECTED_VOTES setup. If both nodes have VOTES=1 and EXPECTED_VOTES=2, then both nodes will hang indefinitely...&lt;BR /&gt;&lt;BR /&gt;The dangerous moment could come if you HALT and BOOT one (or both) of the systems. If both of them have VOTES=1 and EXPECTED_VOTES=1, a partitioned cluster will be created and the data on your shared disks is in real danger of getting corrupted.&lt;BR /&gt;&lt;BR /&gt;Please describe your cluster config and provide the SYSGEN values for:&lt;BR /&gt;&lt;BR /&gt;VOTES, EXPECTED_VOTES, DISK_QUORUM, QDSKVOTES&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Wed, 01 Jun 2005 05:23:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904834#M68794</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-06-01T05:23:16Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904835#M68795</link>
      <description>Charles, the entire concept of votes, quorum and expected votes will prevent this from happening.&lt;BR /&gt;&lt;BR /&gt;If you have a two-node cluster, give each node 1 vote, set expected votes to 3, and give the quorum disk 1 vote.&lt;BR /&gt;&lt;BR /&gt;Then, if a node is gone long enough, it will CLUEXIT.&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Jun 2005 15:24:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904835#M68795</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-06-01T15:24:40Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904836#M68796</link>
      <description>Hi,&lt;BR /&gt;to configure the 2-node cluster I just used the CLUSTER_CONFIG utility and specified:&lt;BR /&gt;- SCSI connection: YES&lt;BR /&gt;- cluster number &amp;amp; password&lt;BR /&gt;- SYS$SYSDEVICE used as the quorum disk&lt;BR /&gt;- SCSI port allocation &amp;amp; cluster allocation&lt;BR /&gt;without:&lt;BR /&gt;boot server and disk server.&lt;BR /&gt;Then AUTOGEN and reboot.&lt;BR /&gt;At another customer I found that one member crashed when the NIC connection broke.&lt;BR /&gt;I will check the cluster parameters.&lt;BR /&gt;&lt;BR /&gt;T H A N K S&lt;BR /&gt;Charles&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Jun 2005 18:00:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904836#M68796</guid>
      <dc:creator>Song_Charles</dc:creator>
      <dc:date>2005-06-01T18:00:05Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904837#M68797</link>
      <description>While you can make the changes directly in SYSGEN&lt;BR /&gt;&lt;BR /&gt;$ mcr sysgen&lt;BR /&gt;sysgen&amp;gt; use current&lt;BR /&gt;sysgen&amp;gt; set votes 1&lt;BR /&gt;sysgen&amp;gt; set expected_votes 3&lt;BR /&gt;sysgen&amp;gt; set qdskvotes 1&lt;BR /&gt;sysgen&amp;gt; write current&lt;BR /&gt;$&lt;BR /&gt;also make sure the values for&lt;BR /&gt;disk_quorum&lt;BR /&gt;qdskvotes&lt;BR /&gt;votes&lt;BR /&gt;&lt;BR /&gt;are set in MODPARAMS.DAT. Otherwise you will lose the values when you run AUTOGEN; upgrades, for example, will automatically execute AUTOGEN.&lt;BR /&gt;&lt;BR /&gt;How do you know you are using the quorum disk?&lt;BR /&gt;&lt;BR /&gt;Issue the command&lt;BR /&gt;&lt;BR /&gt;$ show cluster/continuous&lt;BR /&gt;&lt;BR /&gt;then, while it is displaying the results, type&lt;BR /&gt;add qf_vote&lt;BR /&gt;&lt;BR /&gt;It will add a little box with a yes or no.&lt;BR /&gt;For that matter, so will add cluster, and then it will show the votes as well as the quorum disk vote.&lt;BR /&gt;&lt;BR /&gt;There is no reason you should ever have a partitioned cluster. It will not happen if EXPECTED_VOTES = 3.&lt;BR /&gt;&lt;BR /&gt;Bob</description>
      <pubDate>Wed, 01 Jun 2005 18:32:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904837#M68797</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-06-01T18:32:55Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904838#M68798</link>
      <description>Hi Charles,&lt;BR /&gt;if I understand it right, the source of the problem is that you use the SYSTEM disk as the quorum disk. In that case there is no chance to prevent a cluster partitioning.&lt;BR /&gt;A valid minimum configuration consists of 2 nodes and an extra quorum disk, or 3 nodes with the 3rd one providing the quorum-disk functionality, or 2 nodes without any quorum disk where 1 node is the primary one and is the only one allowed to keep on running in case of connectivity problems.&lt;BR /&gt;Cheers,&lt;BR /&gt;EW&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Jun 2005 19:17:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904838#M68798</guid>
      <dc:creator>Eberhard Wacker</dc:creator>
      <dc:date>2005-06-01T19:17:46Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904839#M68799</link>
      <description>Charles,&lt;BR /&gt;&lt;BR /&gt;there is nothing wrong with using the system disk as the quorum disk in a 2-node cluster. But you cannot specify SYS$SYSDEVICE as the DISK_QUORUM name: you MUST specify a physical disk device name, and you must specify the SAME name on all cluster members (which have direct access to that disk).&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;on other customer, I found the crash one member, while NIC connection broken&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;This sounds like a possible reason for the partitioned cluster. If the cluster parameters are NOT correctly set up on this system, the crashed system can form its own cluster when rebooting after the crash. So it is very likely that the parameters of this system are WRONG.&lt;BR /&gt;&lt;BR /&gt;Please note that it is also strongly advised to configure a separate (point-to-point) additional network link in this kind of configuration, to prevent cluster connectivity problems if the main network is disrupted.&lt;BR /&gt;&lt;BR /&gt;Volker.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 02 Jun 2005 02:18:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904839#M68799</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-06-02T02:18:37Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904840#M68800</link>
      <description>I'm sorry, but we do need to add the reason we don't recommend that the quorum disk be the system disk. During heavy activity on the quorum disk, especially backup, you will not receive the acknowledgement messages in time. Then you will get "lost connection to quorum disk" messages. This is expected behavior.&lt;BR /&gt;&lt;BR /&gt;There is a special problem if the quorum disk is on a SAN in a large cluster: it can respond before messages from the nodes on the network, and it is possible for a small part of the cluster to remain and the rest to CLUEXIT.&lt;BR /&gt;&lt;BR /&gt;See the article&lt;BR /&gt;Overview And Concerns Of Quorum Disks In A Cluster</description>
      <pubDate>Thu, 02 Jun 2005 07:28:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904840#M68800</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-06-02T07:28:44Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904841#M68801</link>
      <description>Hi,&lt;BR /&gt;- The quorum disk must be assigned by its physical name, so I will change SYS$SYSDEVICE to DKB0. But the system disk of each member is a local disk; maybe member 1 has DKB0 while member 2 has DKC0. Could I assign DKB0 and DKC0 to the respective members?&lt;BR /&gt;&lt;BR /&gt;- On each Alpha there are 2 NICs, one for TCP/IP and the other maybe for DECnet. Which NIC should be used for cluster connectivity? And if I had a third NIC, how should it be set up: should an IP or a DECnet address be assigned to it?&lt;BR /&gt;&lt;BR /&gt;T H A N K S   A L L&lt;BR /&gt;Charles</description>
      <pubDate>Thu, 02 Jun 2005 17:12:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904841#M68801</guid>
      <dc:creator>Song_Charles</dc:creator>
      <dc:date>2005-06-02T17:12:20Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904842#M68802</link>
      <description>Charles,&lt;BR /&gt;&lt;BR /&gt;You must use a disk which is connected to both systems as the quorum disk. So if you have two different local disks as system disks, you cannot use them as quorum disks.&lt;BR /&gt;&lt;BR /&gt;The cluster will choose the best NIC for its communication. If one goes bad or slows down, it will change to the second.&lt;BR /&gt;&lt;BR /&gt;Bojan</description>
      <pubDate>Thu, 02 Jun 2005 17:39:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904842#M68802</guid>
      <dc:creator>Bojan Nemec</dc:creator>
      <dc:date>2005-06-02T17:39:36Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904843#M68803</link>
      <description>Charles,&lt;BR /&gt;&lt;BR /&gt;In this situation the only way to avoid a split cluster is to add an extra NIC dedicated to SCS, connecting the nodes to each other without any switch/hub (crossover Ethernet cable). The nodes cannot be too far from each other anyway, because you are using SCSI. The reason is: the nodes always see the disks (via SCSI and the RA3000), but SCS does not run over SCSI, so this cannot be helped by quorum disks or other options.&lt;BR /&gt;If you don't add the extra NIC, the best option is to give 1 vote to one node and 1 to the other and get rid of the quorum disk. In this case the cluster hangs until your network is working properly again (in this case: power on the switch).&lt;BR /&gt;Another option is to (still) get rid of the quorum disk, give one node 2 votes and the other 1 vote. Then, if the network disappears, one node goes on and the other dies.&lt;BR /&gt;&lt;BR /&gt;AvR</description>
      <pubDate>Thu, 02 Jun 2005 22:11:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904843#M68803</guid>
      <dc:creator>Anton van Ruitenbeek</dc:creator>
      <dc:date>2005-06-02T22:11:56Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904844#M68804</link>
      <description>Charles,&lt;BR /&gt;&lt;BR /&gt;as I read this so far&lt;BR /&gt;(ignore if I read wrong!!!)&lt;BR /&gt;you specify SYS$SYSDEVICE as the quorum disk&lt;BR /&gt;_AND_&lt;BR /&gt;you are using different disks as the system disk on different systems.&lt;BR /&gt;&lt;BR /&gt;THIS COMBINATION IS ____DANGEROUS____!!!&lt;BR /&gt;&lt;BR /&gt;During normal operation, the different systems will __NOT__ see each others' quorum disks, but they WILL each see their own.&lt;BR /&gt;THOSE are _NOT_ synchronised!&lt;BR /&gt;But as long as the nodes see each other, the cluster will keep going, and stay synchronised.&lt;BR /&gt;Now, if the nodes lose their connection, _EACH_ node will find that "_THE_" quorum disk is operational while not seeing the other node either.&lt;BR /&gt;So, obviously, the other node is gone, and "I" (from the local node's perspective) can continue.&lt;BR /&gt;But if both nodes can still reach (some of) the disks, that access is allowed but uncoordinated.&lt;BR /&gt;THAT is the classic recipe for corrupted data, and THAT is why __ALL__ nodes __MUST__ specify __THE SAME__ __PHYSICAL__ disk!!&lt;BR /&gt;&lt;BR /&gt;SYS$SYSDEVICE is a REAL danger!!&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jp</description>
      <pubDate>Thu, 02 Jun 2005 22:23:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904844#M68804</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-06-02T22:23:11Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904845#M68805</link>
      <description>Charles, if you have an HP license I can send you some great papers on quorum disks.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;You can write to me privately at robert.comarow@hp.com.&lt;BR /&gt;Please include your access number.&lt;BR /&gt;&lt;BR /&gt;People often think they are using their quorum disk but aren't.&lt;BR /&gt;&lt;BR /&gt;As the previous poster correctly pointed out, it must be the same disk on all members. Access to the quorum disk will prevent partitioned clusters.&lt;BR /&gt;&lt;BR /&gt;We highly recommend not using the system disk, but a less-used disk, to prevent "lost connection to quorum disk" messages.&lt;BR /&gt;&lt;BR /&gt;Adding QF_VOTE to SHOW CLUSTER/CONTINUOUS will prove it. Note, however, that it will not use the quorum disk on the first boot. I'll be glad to send you the white papers.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Proper use will prevent partitioned clusters. Partitioned clusters will corrupt your disks.&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Jun 2005 05:33:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904845#M68805</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-06-03T05:33:21Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904846#M68806</link>
      <description>Hi,&lt;BR /&gt;- If we use the extra NIC with the crossover Ethernet cable for SCS and, for some reason, the cable breaks, which member will crash? I think it would be better to connect the cable to an Ethernet switch or hub.&lt;BR /&gt;- I will change the quorum disk to the shared disk $1$DKA0: on the RA3000, with 1 vote on each member. Is that right?&lt;BR /&gt;&lt;BR /&gt;T H A N K S&lt;BR /&gt;Char</description>
      <pubDate>Sat, 04 Jun 2005 20:10:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904846#M68806</guid>
      <dc:creator>Song_Charles</dc:creator>
      <dc:date>2005-06-04T20:10:33Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904847#M68807</link>
      <description>Charles,&lt;BR /&gt;if your server has two NICs, then you can run VMScluster traffic on both of them.&lt;BR /&gt;&lt;BR /&gt;You give both cluster members and the quorum disk one vote each and set EXPECTED_VOTES=3. Bob Comarow already explained this a few days ago; please scroll up a bit ;-)</description>
      <pubDate>Sun, 05 Jun 2005 04:07:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904847#M68807</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-06-05T04:07:36Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904848#M68808</link>
      <description>The crossover cable is a great strategy. It will help cluster performance. If it breaks, cluster traffic will use the regular network.&lt;BR /&gt;&lt;BR /&gt;Using the utility&lt;BR /&gt;$ mcr scacp&lt;BR /&gt;&lt;BR /&gt;you can set a higher priority for a specific network card.&lt;BR /&gt;&lt;BR /&gt;This was introduced in VMS 7.3.&lt;BR /&gt;&lt;BR /&gt;You should be in a 132-column display. A great place to start is&lt;BR /&gt;$ mcr scacp&lt;BR /&gt;scacp&amp;gt; show channel&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;You can even tell your cluster not to use a port for SCS communication if you have a busy network. That, of course, eliminates failover, so setting a higher priority for preferred paths is usually the best course of action.&lt;BR /&gt;SCACP will show the paths and how well the channels are working. Don't be afraid of some errors; they are normal.&lt;BR /&gt;&lt;BR /&gt;Bob</description>
      <pubDate>Sun, 05 Jun 2005 04:40:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904848#M68808</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-06-05T04:40:50Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904849#M68809</link>
      <description>Charles,&lt;BR /&gt;&lt;BR /&gt;what several previous posts implied for those who already know, but no one said explicitly:&lt;BR /&gt;&lt;BR /&gt;As long as the systems are "somehow" connected, cluster communication WILL continue.&lt;BR /&gt;Cluster communication will continue over ANY available path!&lt;BR /&gt;&lt;BR /&gt;Of course, there may be performance degradation if you lose the high-capacity path and all you are left with is, e.g., a 10 Mb Ethernet, but as long as there IS ANY connection, you WILL continue.&lt;BR /&gt;&lt;BR /&gt;-- There ARE ways to exclude certain pathways, but that has to be done explicitly, and you need very special circumstances for that to be advantageous!&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe&lt;BR /&gt;</description>
      <pubDate>Sun, 05 Jun 2005 10:02:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904849#M68809</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-06-05T10:02:36Z</dc:date>
    </item>
    <item>
      <title>Re: Partation for OpenVMS Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904850#M68810</link>
      <description>Yes,&lt;BR /&gt;&lt;BR /&gt;cluster communication will continue over ANY available path, but performance will degrade while the crossover Ethernet cable is broken.&lt;BR /&gt;&lt;BR /&gt;Many thanks,&lt;BR /&gt;&lt;BR /&gt;Charles</description>
      <pubDate>Mon, 06 Jun 2005 04:16:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/partation-for-openvms-cluster/m-p/4904850#M68810</guid>
      <dc:creator>Song_Charles</dc:creator>
      <dc:date>2005-06-06T04:16:54Z</dc:date>
    </item>
  </channel>
</rss>