<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: rx2620 - network connections for Cluster/SCS and Decnet in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781291#M76306</link>
    <description>Hey guys, THANKS a TON!!!&lt;BR /&gt;&lt;BR /&gt;I've read through the LAN failover and failSAFE IP documentation/links you've given. Thanks.&lt;BR /&gt;&lt;BR /&gt;For reference on cabling, it makes sense to me to hook the 2 nodes directly together with both the copper GB port and 1 fiber port, and then connect the copper 10/100 and the other fiber port on each node to different switches - the 2 fibers on a single switch and the coppers on a different switch.&lt;BR /&gt;&lt;BR /&gt;My emphasis here is on simplicity - I won't be around, there will be others managing this cluster, and I'm not a big net-guy myself.&lt;BR /&gt;&lt;BR /&gt;So, I have 3 things to deal with - FailSAFE IP, LAN failover and DECnet IV. SCS will take care of itself, and I was planning on NOT using SCACP to control it, just to keep things simple and let it choose what it wants. I will have no performance/bandwidth issues with any of this.&lt;BR /&gt;&lt;BR /&gt;I want the simplest, most easily manageable config that will give IP and DECnet failover capability. I don't mind if only SCS runs over the direct node-to-node connection and DECnet/IP both go only through the switches, as long as the configuration is easily dealt with for these folks.&lt;BR /&gt;&lt;BR /&gt;FailSAFE seems more complicated than LAN failover, and I believe I can cover both IP and DECnet with LAN failover. What do you guys prefer for simplicity?&lt;BR /&gt;&lt;BR /&gt;PLUS, if I use FailSAFE then I ALSO have to configure either DECnet or LAN failover to cover DECnet.&lt;BR /&gt;&lt;BR /&gt;I don't need a cluster alias for IP. The application isn't clusterable; they run a primary and a hot-standby node. I only want failover INSIDE each node, not cluster-wide. I'm leaning toward ONLY using LAN failover.&lt;BR /&gt;&lt;BR /&gt;In addition to simplicity, of course I want reliability (which are often related, hahaha).&lt;BR /&gt;&lt;BR /&gt;I also need this pretty quick, and I'm having trouble deciding which route to take. Any preferences/ideas, guys?&lt;BR /&gt;&lt;BR /&gt;Thanks AGAIN!!!&lt;BR /&gt;Tom&lt;BR /&gt;</description>
    <pubDate>Fri, 05 May 2006 15:01:09 GMT</pubDate>
    <dc:creator>Thomas Griesan</dc:creator>
    <dc:date>2006-05-05T15:01:09Z</dc:date>
    <item>
      <title>rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781281#M76296</link>
      <description>Hello people!&lt;BR /&gt;&lt;BR /&gt;We have a 2-node cluster (two Itanium rx2620s, each with a dual-port fiber NIC). These boxes also have 2 copper net connections each. The quorum disk is on the MSA1000.&lt;BR /&gt;&lt;BR /&gt;What I’d like to do is hook these boxes together directly with a crossover cable (i.e. not through a switch) for cluster/SCS and/or DECnet traffic, and also connect to redundant Ethernet switches for IP. I only need DECnet BETWEEN the two nodes (yes, I know that is a bit odd ;-).&lt;BR /&gt;&lt;BR /&gt;1. Is it OK to hook these servers together directly, and what ramifications might there be?&lt;BR /&gt;2. Which ports should I use for the best reliability and performance?&lt;BR /&gt;3. How do I configure DECnet to use the direct connection between the nodes?&lt;BR /&gt;4. Do I need something like FailSAFE to make DECnet fail over if one connection is down?&lt;BR /&gt;&lt;BR /&gt;For example, I could use fiber to hook the boxes together with one or both ports, or I could use copper, or any combination thereof. I have 4 net connections on each box, so there are a number of possibilities.&lt;BR /&gt;&lt;BR /&gt;Could I, or should I, even hook both fiber ports on each machine together and run SCS over one and DECnet over the other? I realize SCS will decide on its own what connection it will use.&lt;BR /&gt;&lt;BR /&gt;I don't have to use all the ports, obviously - just looking for reliability and performance.&lt;BR /&gt;&lt;BR /&gt;Thanks a ton!!!&lt;BR /&gt;Tom&lt;BR /&gt;</description>
      <pubDate>Tue, 02 May 2006 18:12:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781281#M76296</guid>
      <dc:creator>Thomas Griesan</dc:creator>
      <dc:date>2006-05-02T18:12:43Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781282#M76297</link>
      <description>&lt;BR /&gt;A crossover is one of the simplest, most reliable cluster interconnects. Plug it in, make sure the speed/duplex is correct, and be happy.&lt;BR /&gt;&lt;BR /&gt;There's a lot of "it depends" here on the remaining questions. With two dual-ported NICs, I'd look at the bandwidth required. DECnet Phase IV or V? It sounds like you have redundant switching available; I'd want to use both NICs. Does your switching support fiber?&lt;BR /&gt;&lt;BR /&gt;LAN failover is configured with the LANCP utility; see the System Manager's Manual, vol. 2, chapter 10. &lt;A href="http://h71000.www7.hp.com/doc/82FINAL/aa-pv5nj-tk/aa-pv5nj-tk.HTMl" target="_blank"&gt;http://h71000.www7.hp.com/doc/82FINAL/aa-pv5nj-tk/aa-pv5nj-tk.HTMl&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Configure DECnet Phase IV with sys$manager:netconfig.com, and DECnet-Plus (Phase V) with sys$manager:net$configure.com.&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/" target="_blank"&gt;http://h71000.www7.hp.com/doc/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;You could configure 2 LAN failover devices: one node to node, the second to your switching. For reliability, I'd want to allocate one port from each NIC to each failover device.&lt;BR /&gt;&lt;BR /&gt;SCS cluster traffic will just work.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Welcome to the VMS forum.&lt;BR /&gt;&lt;BR /&gt;Andy</description>
      <pubDate>Tue, 02 May 2006 19:20:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781282#M76297</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2006-05-02T19:20:32Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781283#M76298</link>
      <description>Thomas, since 7.3-2, SCS traffic is no longer dedicated to a single network path. The PE driver will use all available network paths, but will prefer the path with the best throughput.&lt;BR /&gt;Have lots of interconnects and you will have plenty of interconnect redundancy. IMO, with the performance of today's network switches you could have one logical gigabit Ethernet for all your cluster and network communications. Physically, it would be configured to avoid single points of failure due to hardware problems.&lt;BR /&gt;The days of coaxial cable providing cluster interconnects are not missed.&lt;BR /&gt;My AUS 2 cents.</description>
      <pubDate>Wed, 03 May 2006 01:16:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781283#M76298</guid>
      <dc:creator>Thomas Ritter</dc:creator>
      <dc:date>2006-05-03T01:16:05Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781284#M76299</link>
      <description>Thomas,&lt;BR /&gt;&lt;BR /&gt;re: LAN Failover&lt;BR /&gt;&lt;BR /&gt;LAN Failover is not supported for point-to-point connections. Refer to 'LAN Failover restrictions' in the manual cited above.&lt;BR /&gt;&lt;BR /&gt;Maybe use LAN failover (one port from each NIC) to connect to your switches.&lt;BR /&gt;&lt;BR /&gt;Use the point-to-point connections for SCS and DECnet. Multicircuit end nodes should work and may even do load sharing.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Wed, 03 May 2006 01:50:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781284#M76299</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-05-03T01:50:03Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781285#M76300</link>
      <description>I don't think you need LAN failover. Connect the copper LAN connections using a crossover. Cluster traffic will use all of them, or you can use SCACP to set a management priority so it prefers certain connections but will use the others if necessary. Configure DECnet to know about the two copper connections. It will either use both or fail over (I forget which; I think the behavior is different with DECnet-Plus and DECnet Phase IV).&lt;BR /&gt;</description>
      <pubDate>Wed, 03 May 2006 03:10:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781285#M76300</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2006-05-03T03:10:38Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781286#M76301</link>
      <description>Thanks everyone!!!&lt;BR /&gt;&lt;BR /&gt;Answers: DECnet Phase IV. No serious bandwidth/traffic requirements. Yes, redundant switching is available. BTW, this is VMS 8.2.&lt;BR /&gt;&lt;BR /&gt;The comments pretty much confirm my hopes/plans.&lt;BR /&gt;&lt;BR /&gt;I'm thinking to hook the fiber NICs directly together with one port and also hook the copper NICs directly together with a copper crossover.&lt;BR /&gt;&lt;BR /&gt;Then put each node's remaining copper port on one switch and each node's remaining fiber port on another switch.&lt;BR /&gt;&lt;BR /&gt;SCS will take care of itself as folks say, so that leaves me with configuring DECnet IV and TCP/IP. As you can tell, I'm not a big network guy. ;-)&lt;BR /&gt;&lt;BR /&gt;I need to go figure out how to check the speed/duplex settings. This is all stock stuff, so I presume everything is set the same. Does it need to be full duplex? DUH. ;-) I read that with gigabit, auto-negotiate on the speed is now recommended.&lt;BR /&gt;&lt;BR /&gt;I would want DECnet to pick one or both of the direct connections first, then the Ethernet/switch route.&lt;BR /&gt;&lt;BR /&gt;I would want IP to pick fiber first, then copper, to/from the switch.&lt;BR /&gt;&lt;BR /&gt;This is really ULTRA redundant and probably overkill, but we might as well use what we have.&lt;BR /&gt;&lt;BR /&gt;Thanks for the welcome, Andy. I was a long-time Deccie: 17 years managing the CSCTLS (Champ/CSC) VMS cluster in the Customer Support Center in Colorado Springs. Then I got laid off - BIG SURPRISE, hahahahaha... I'm on a little one-month contract doing some VMS sys admin, then it's back to layoff limbo. ;-)&lt;BR /&gt;&lt;BR /&gt;Ahhh, the beauty of being a nearly extinct dinosaur. ;-) Gimme a holler if you need a big ole useless lizard. ;-)&lt;BR /&gt;&lt;BR /&gt;VMS, RDB, ACMS, SQL, 3GL languages - you know the drill. griesantomjean@msn.com  (719)632-6565&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 03 May 2006 10:39:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781286#M76301</guid>
      <dc:creator>Thomas Griesan</dc:creator>
      <dc:date>2006-05-03T10:39:20Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781287#M76302</link>
      <description>The RX series will do auto-negotiate. I don't know of a way to lock the speed / duplex at EFI console level. Use LANCP DEFINE DEVICE instead.&lt;BR /&gt;&lt;BR /&gt;Gigabit Ethernet devices are best left at auto-negotiate. Fibre GigE direct to Fibre GigE works fine in auto (or at least it does for the GigE NICs in a pair of DS25s that are directly connected to each other).&lt;BR /&gt;&lt;BR /&gt;LANCP should let you lock the speed &amp;amp; duplex on non-GigE devices. A cross-over cable should be fine, but YMMV.&lt;BR /&gt;&lt;BR /&gt;DECnet-Plus will do a much better job of multiple paths than Phase IV. You'll get load balancing and path failover for the price of the end-node licence. Set Phase V up to use the two paths you want and make sure that they are truly separate, then use Phase IV style addressing on both.&lt;BR /&gt;&lt;BR /&gt;See &lt;A href="http://h71000.www7.hp.com/openvms/journal/v5/index.html#decnet" target="_blank"&gt;http://h71000.www7.hp.com/openvms/journal/v5/index.html#decnet&lt;/A&gt; for some of the DECnet background.&lt;BR /&gt;&lt;BR /&gt;failsafe IP will let an IP address migrate from one NIC to the other. Set the two IP NICs up with their own specific addresses, then use failsafe IP to manage the "service IP addresses" that people will connect to. Don't think in terms of a single IP address any more - think in terms of one (or more) IP addresses per service that your systems are offering.&lt;BR /&gt;&lt;BR /&gt;See &lt;A href="http://h71000.www7.hp.com/openvms/journal/v2/articles/tcpip.pdf" target="_blank"&gt;http://h71000.www7.hp.com/openvms/journal/v2/articles/tcpip.pdf&lt;/A&gt; for more info on IP.&lt;BR /&gt;&lt;BR /&gt;Remember to disable SCS on adapters where it's not needed, such as the main network. Use SCACP to control SCS.&lt;BR /&gt;&lt;BR /&gt;Given the choice I'd use GigE for the dual SCS connection between the machines, plus DECnet and use the others for IP connectivity to the outside world. 
Worth configuring silly IP addresses on the cross-linked pair too, if only so that someone can use PING as a test.&lt;BR /&gt;&lt;BR /&gt;Any other protocols in use (e.g. LAT)? If so, then control which adapters they start up on too.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Cheers, Colin.</description>
      <pubDate>Thu, 04 May 2006 03:40:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781287#M76302</guid>
      <dc:creator>Colin Butcher</dc:creator>
      <dc:date>2006-05-04T03:40:09Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781288#M76303</link>
      <description>OT - Thomas, you may wish to post your CV to&lt;BR /&gt;&lt;A href="http://www.openvms.org/phorum/list.php?3" target="_blank"&gt;http://www.openvms.org/phorum/list.php?3&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 04 May 2006 04:16:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781288#M76303</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2006-05-04T04:16:26Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781289#M76304</link>
      <description>&lt;BR /&gt;Re: Colin&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;The RX series will do auto-negotiate. I don't know of a way to lock the speed / duplex at EFI console level. Use LANCP DEFINE DEVICE instead.&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;Please see the discussion on settings of the network devices in thread&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=675002" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=675002&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Last year, at the Bootcamp, I was told (I think it was Andy, but I'm not sure) that the only way to set the speed and mode of the network devices on an RX-series is using the LANCP utility (at least with VMS).&lt;BR /&gt;&lt;BR /&gt;FWIW,&lt;BR /&gt;Kris (aka Qkcl)</description>
      <pubDate>Thu, 04 May 2006 04:42:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781289#M76304</guid>
      <dc:creator>Kris Clippeleyr</dc:creator>
      <dc:date>2006-05-04T04:42:02Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781290#M76305</link>
      <description>Tom,&lt;BR /&gt;&lt;BR /&gt;To embroider slightly on Colin's comment: "silly" should mean an RFC 1918 (intranet) address block that the site is NOT using. Thus, you will not end up with addressing or routing problems vis-à-vis the intranet or the internet in the future.&lt;BR /&gt;&lt;BR /&gt;I also agree with Andy: there is little that can go wrong with a crossover (reversal) cable. The only failures in a steady state are broken connectors. Vermin nibbling on the cable can also be a problem (up to and including forklifts and backhoes).&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Thu, 04 May 2006 04:55:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781290#M76305</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2006-05-04T04:55:41Z</dc:date>
    </item>
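Bob's rule of thumb - pick the cross-link addresses from an RFC 1918 private block that the site is NOT otherwise using - can be sanity-checked programmatically. A minimal Python sketch (not part of the thread; the sample addresses are made up for illustration):

```python
# Check whether a candidate cross-link address falls inside one of the three
# RFC 1918 private blocks, using the standard-library ipaddress module.
import ipaddress

RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """True if addr lies inside one of the three RFC 1918 private blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

# A cross-link address should be private AND outside the site's own ranges.
print(is_rfc1918("192.168.254.1"))  # True  - fine for a point-to-point link
print(is_rfc1918("8.8.8.8"))        # False - public address, never use this
```

The second condition (outside the site's own ranges) still has to be checked against local knowledge; no library can know which blocks a given site already routes.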
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781291#M76306</link>
      <description>Hey guys, THANKS a TON!!!&lt;BR /&gt;&lt;BR /&gt;I've read through the LAN failover and failSAFE IP documentation/links you've given. Thanks.&lt;BR /&gt;&lt;BR /&gt;For reference on cabling, it makes sense to me to hook the 2 nodes directly together with both the copper GB port and 1 fiber port, and then connect the copper 10/100 and the other fiber port on each node to different switches - the 2 fibers on a single switch and the coppers on a different switch.&lt;BR /&gt;&lt;BR /&gt;My emphasis here is on simplicity - I won't be around, there will be others managing this cluster, and I'm not a big net-guy myself.&lt;BR /&gt;&lt;BR /&gt;So, I have 3 things to deal with - FailSAFE IP, LAN failover and DECnet IV. SCS will take care of itself, and I was planning on NOT using SCACP to control it, just to keep things simple and let it choose what it wants. I will have no performance/bandwidth issues with any of this.&lt;BR /&gt;&lt;BR /&gt;I want the simplest, most easily manageable config that will give IP and DECnet failover capability. I don't mind if only SCS runs over the direct node-to-node connection and DECnet/IP both go only through the switches, as long as the configuration is easily dealt with for these folks.&lt;BR /&gt;&lt;BR /&gt;FailSAFE seems more complicated than LAN failover, and I believe I can cover both IP and DECnet with LAN failover. What do you guys prefer for simplicity?&lt;BR /&gt;&lt;BR /&gt;PLUS, if I use FailSAFE then I ALSO have to configure either DECnet or LAN failover to cover DECnet.&lt;BR /&gt;&lt;BR /&gt;I don't need a cluster alias for IP. The application isn't clusterable; they run a primary and a hot-standby node. I only want failover INSIDE each node, not cluster-wide. I'm leaning toward ONLY using LAN failover.&lt;BR /&gt;&lt;BR /&gt;In addition to simplicity, of course I want reliability (which are often related, hahaha).&lt;BR /&gt;&lt;BR /&gt;I also need this pretty quick, and I'm having trouble deciding which route to take. Any preferences/ideas, guys?&lt;BR /&gt;&lt;BR /&gt;Thanks AGAIN!!!&lt;BR /&gt;Tom&lt;BR /&gt;</description>
      <pubDate>Fri, 05 May 2006 15:01:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781291#M76306</guid>
      <dc:creator>Thomas Griesan</dc:creator>
      <dc:date>2006-05-05T15:01:09Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781292#M76307</link>
      <description>Tom,&lt;BR /&gt;&lt;BR /&gt;LAN failover is easy to set up and will protect all network protocols running over the LLc: device from physical failures of the network link (between the NW interface and the switch port). Only a LINK DOWN will trigger a LAN failover. If the switch were somehow to fail and stop forwarding packets, this would not be detected by LLDRIVER.&lt;BR /&gt;&lt;BR /&gt;FailSAFE IP will monitor the bytes-received counter of the LAN interface and act upon the fact that no more bytes are being received. This may detect other failure scenarios as well, but it only protects the IP stack from LAN interface failure.&lt;BR /&gt;&lt;BR /&gt;I would vote for LAN failover.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Sat, 06 May 2006 03:04:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781292#M76307</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-05-06T03:04:14Z</dc:date>
    </item>
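The distinction Volker draws - LAN failover reacts only to an explicit LINK DOWN event, while FailSAFE IP watches the bytes-received counter and acts when it stops climbing - can be sketched as two toy detector functions. This is illustrative Python, not OpenVMS code, and the scenario values are invented:

```python
# Two failure-detection styles, reduced to their decision rule.

def lan_failover_detects(link_up: bool) -> bool:
    """LAN failover style: fail over only when a link-down event fires."""
    return not link_up

def failsafe_ip_detects(rx_byte_samples: list[int]) -> bool:
    """FailSAFE IP style: fail over when the receive counter stops moving
    between two successive polls."""
    return len(rx_byte_samples) >= 2 and rx_byte_samples[-1] == rx_byte_samples[-2]

# A dead switch that keeps the link light on: link is "up", traffic has stopped.
link_up = True
samples = [1000, 2500, 2500]           # bytes received over three polls

print(lan_failover_detects(link_up))   # False - LINK DOWN never fired
print(failsafe_ip_detects(samples))    # True  - counter stalled, so fail over
```

The toy scenario shows why the counter-based check catches a failure class the link-state check misses, at the cost of only covering the IP stack, which is the trade-off behind the vote for LAN failover above.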
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781293#M76308</link>
      <description>I'm going to use LAN failover to keep things simple.&lt;BR /&gt;&lt;BR /&gt;I've read up on it, and also on TCP/IP and DECnet, and I'm still digesting how I will point these two at the logical LLA0 device that LAN failover creates. This is a production system and I can't play around; I'd prefer not to stop the network, but it appears I will have to in order to configure this.&lt;BR /&gt;&lt;BR /&gt;My plan is to remove the IP config settings for my current NIC and then hope that TCPIP$CONFIG.COM will see the new logical LLA0 device and allow me to point the IP address at that.&lt;BR /&gt;&lt;BR /&gt;Thanks again!!&lt;BR /&gt;Tom</description>
      <pubDate>Wed, 10 May 2006 11:52:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781293#M76308</guid>
      <dc:creator>Thomas Griesan</dc:creator>
      <dc:date>2006-05-10T11:52:17Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781294#M76309</link>
      <description>Tom,&lt;BR /&gt;&lt;BR /&gt;to add LAN devices into a LAN failover set, all protocols running on those LAN devices have to be stopped (including SCS). It might be 'tricky' to attempt this on the running system.&lt;BR /&gt;&lt;BR /&gt;You could make all the necessary definitions in the config files on the system disk and activate LAN failover with a single reboot:&lt;BR /&gt;&lt;BR /&gt;$ MC LANCP&lt;BR /&gt;LANCP&amp;gt; DEFINE DEVICE LLA/FAILOVER=(EWA,EWB)/ENABLE&lt;BR /&gt;&lt;BR /&gt;$ TCPIP SHOW CONF INT&lt;BR /&gt;Note settings&lt;BR /&gt;&lt;BR /&gt;$ TCPIP SET CONF NOINT WE0 ! or similar&lt;BR /&gt;$ TCPIP SET CONF INT LE0/... ! same as old int&lt;BR /&gt;&lt;BR /&gt;For DECnet-OSI, EDIT NET$CSMACD_STARTUP.NCL and change to ... COMMUNICATION PORT = LLA&lt;BR /&gt;&lt;BR /&gt;For DECnet Phase IV, it's NCP DEFINE LINE LLA-0 STATE ON, NCP DEF CIRC LLA-0 STATE ON, using the characteristics of the existing line/circuit.&lt;BR /&gt;&lt;BR /&gt;If you are running additional protocols on your LAN devices (SDA&amp;gt; SHOW LAN), you might also switch some of them to use LLA. Most network protocols have mechanisms (e.g. logicals to be set) to force them to use a specific LAN device.&lt;BR /&gt;&lt;BR /&gt;To use the standard config procedures, you have to create the LLA device first in the running system. For DECnet, you have to use the @NET$CONFIG ADVANCED option to be able to specify the LLA device to be used.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Wed, 10 May 2006 12:24:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781294#M76309</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-05-10T12:24:22Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781295#M76310</link>
      <description>Awesome, Volker - THANKS!!!! I spent a lot of time reading on this, trust me. ;-) It's nowhere out there in the docs, forums or on the web.&lt;BR /&gt;&lt;BR /&gt;What you said is basically what I surmised.&lt;BR /&gt;&lt;BR /&gt;I assume you meant:&lt;BR /&gt;&lt;BR /&gt;$ TCPIP SET CONF INT LLA0/... ! same as old int&lt;BR /&gt;&lt;BR /&gt;instead of ... "LE0" ...&lt;BR /&gt;&lt;BR /&gt;BTW, from ANA/SYS I have ARP running, which I've not heard of (that I recall), and both IP and IPV6:&lt;BR /&gt;&lt;BR /&gt;EIA5     868BD340  Eth   08-00           IP      0015 STRTN,UNIQ,STRTD&lt;BR /&gt;EIA6     868BDD40  Eth   08-06           ARP     0015 STRTN,UNIQ,STRTD&lt;BR /&gt;EIA7     868BE800  Eth   86-DD           IPV6    0015 STRTN,UNIQ,STRTD&lt;BR /&gt;&lt;BR /&gt;My hunch is DECnet will be easy on the running system, but for IP I'll have to SET HOST from the other node, config IP and restart LANACP. If it doesn't work, I'll have to use the console and reboot.&lt;BR /&gt;&lt;BR /&gt;Shoot, if I hadn't closed the thread I could give you more points... sorry!!&lt;BR /&gt;&lt;BR /&gt;Thanks TONS again!!!&lt;BR /&gt;Tom&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 10 May 2006 12:48:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781295#M76310</guid>
      <dc:creator>Thomas Griesan</dc:creator>
      <dc:date>2006-05-10T12:48:14Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781296#M76311</link>
      <description>Tom,&lt;BR /&gt;&lt;BR /&gt;"LE0" is the correct device-name syntax for TCPIP - I checked my example config from our rx2600 when doing the E8.2 field test!&lt;BR /&gt;&lt;BR /&gt;ARP is the TCP/IP Address Resolution Protocol and will not need any extra handling; same for IPV6.&lt;BR /&gt;&lt;BR /&gt;I still think it would be hard to activate the LLA device in the running system without a reboot. But you could start with just one LAN interface (without ANY protocol active and connected to your switch) and put it in a LAN failover set as a single LAN device (this works). But as soon as you changed DECnet to use that device, you might have a duplicate MAC address problem...&lt;BR /&gt;&lt;BR /&gt;I vote for preparing everything in the config files and a quick reboot.&lt;BR /&gt;&lt;BR /&gt;Volker.&lt;BR /&gt;&lt;BR /&gt;PS: You should be able to re-open a thread, if you want to ;-)</description>
      <pubDate>Wed, 10 May 2006 13:08:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781296#M76311</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-05-10T13:08:43Z</dc:date>
    </item>
    <item>
      <title>Re: rx2620 - network connections for Cluster/SCS and Decnet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781297#M76312</link>
      <description>Tom,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt; each with a dual port fiber NIC). These boxes also have 2 copper net connections each&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;So, _IF_ it would be inconvenient to shut down/reboot your cluster, then you _DO_ have a configuration that _WILL_ be able to stay up.&lt;BR /&gt;It depends on the relative importance of staying up vs. the extra effort, but if you (temporarily) route all network traffic through the copper, you can reconfigure the fiber NICs and then re-route over the fail-safe pseudo device.&lt;BR /&gt;No worry about SCS - it will use ANY available connection.&lt;BR /&gt;&lt;BR /&gt;The net effect will be some performance degradation and a temporary loss of redundancy.&lt;BR /&gt;&lt;BR /&gt;It can be done; we did it.&lt;BR /&gt;&lt;BR /&gt;hth&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me (maybe at the Bootcamp in Nashua?)&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Wed, 10 May 2006 13:53:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rx2620-network-connections-for-cluster-scs-and-decnet/m-p/3781297#M76312</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-05-10T13:53:28Z</dc:date>
    </item>
  </channel>
</rss>

