<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>TCPIP setup for high-availability Itanium cluster in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-setup-for-high-availability-itanium-cluster/m-p/3668488#M72497</link>
    <description>Hi all,&lt;BR /&gt;Greetings from sunny New Zealand.&lt;BR /&gt;I'm looking for suggestions and input for a multi-site cluster configuration. There are three nodes: two RX2620s, each with four (yes, four - we didn't realize when ordering that they came with two on-board!) Gigabit Ethernet ports, and an AlphaServer DS10. The DS10 is there as a vote provider/tie-breaker only; all three servers are in different buildings. There are two networks between the servers: a public one (.10 subnet) and a private one. I will be using volume shadowing between the two RX2620s, so I ideally want as much cluster and volume-shadowing traffic as possible to go over the private network, but failing over to the public network if necessary. Multi-site clustering is required as this is a real 24x7 operation, and I can't let anything as minor as a plane crash, meteorite strike or civil unrest stop production.&lt;BR /&gt;I've been looking at load brokering, failSAFE IP etc., but just wondered if anyone has set up a similar environment, and how they set up the IP addresses (the number of addresses is not an issue, by the way - I have my own subnet to play with), cluster aliases, public and private addresses etc.&lt;BR /&gt;VMS and TCP/IP versions are all the latest; applications will include Rdb and MessageQ.&lt;BR /&gt;[BTW I've already read Matt Muggeridge's excellent paper on high-availability TCPIP.]&lt;BR /&gt;Thanks in advance.&lt;BR /&gt;Paul Jerrom.</description>
    <pubDate>Thu, 10 Nov 2005 03:03:22 GMT</pubDate>
    <dc:creator>Paul Jerrom</dc:creator>
    <dc:date>2005-11-10T03:03:22Z</dc:date>
    <item>
      <title>TCPIP setup for high-availability Itanium cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-setup-for-high-availability-itanium-cluster/m-p/3668488#M72497</link>
      <description>Hi all,&lt;BR /&gt;Greetings from sunny New Zealand.&lt;BR /&gt;I'm looking for suggestions and input for a multi-site cluster configuration. There are three nodes: two RX2620s, each with four (yes, four - we didn't realize when ordering that they came with two on-board!) Gigabit Ethernet ports, and an AlphaServer DS10. The DS10 is there as a vote provider/tie-breaker only; all three servers are in different buildings. There are two networks between the servers: a public one (.10 subnet) and a private one. I will be using volume shadowing between the two RX2620s, so I ideally want as much cluster and volume-shadowing traffic as possible to go over the private network, but failing over to the public network if necessary. Multi-site clustering is required as this is a real 24x7 operation, and I can't let anything as minor as a plane crash, meteorite strike or civil unrest stop production.&lt;BR /&gt;I've been looking at load brokering, failSAFE IP etc., but just wondered if anyone has set up a similar environment, and how they set up the IP addresses (the number of addresses is not an issue, by the way - I have my own subnet to play with), cluster aliases, public and private addresses etc.&lt;BR /&gt;VMS and TCP/IP versions are all the latest; applications will include Rdb and MessageQ.&lt;BR /&gt;[BTW I've already read Matt Muggeridge's excellent paper on high-availability TCPIP.]&lt;BR /&gt;Thanks in advance.&lt;BR /&gt;Paul Jerrom.</description>
      <pubDate>Thu, 10 Nov 2005 03:03:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-setup-for-high-availability-itanium-cluster/m-p/3668488#M72497</guid>
      <dc:creator>Paul Jerrom</dc:creator>
      <dc:date>2005-11-10T03:03:22Z</dc:date>
    </item>
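    <!--
    The failSAFE IP arrangement Paul is asking about can be sketched roughly as
    follows. This is an illustrative DCL fragment, not a tested configuration:
    the interface names (IE0/IE1) and addresses are placeholders, and the exact
    qualifiers should be checked against the TCP/IP Services management
    documentation and the TCPIP$CONFIG dialogue on the actual system.

    ```
    $! Sketch only: IE0/IE1 and 10.0.10.11 are assumed names/addresses.
    $! Configure the same service address on two interfaces so failSAFE IP
    $! can migrate it when a NIC or path fails.
    $ TCPIP SET CONFIGURATION INTERFACE IE0 /HOST=10.0.10.11 -
            /NETWORK_MASK=255.255.255.0
    $ TCPIP SET CONFIGURATION INTERFACE IE1 /HOST=10.0.10.11 -
            /NETWORK_MASK=255.255.255.0
    $! failSAFE IP is enabled via TCPIP$CONFIG (optional components) and
    $! started by its startup procedure:
    $ @SYS$STARTUP:TCPIP$FAILSAFE_STARTUP.COM
    ```

    With a service address per application plus a fixed per-node management
    address (as Andy suggests below), clients keep working through an
    interface failure while administrators can still reach a specific node.
    -->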
    <item>
      <title>Re: TCPIP setup for high-availability Itanium cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-setup-for-high-availability-itanium-cluster/m-p/3668489#M72498</link>
      <description>Paul,&lt;BR /&gt;&lt;BR /&gt;forget about IP clustering, as that is just a crude Unixian failover with ALL cluster connections going to exactly ONE node, and failover only upon node failure.&lt;BR /&gt;Use DNS round-robin or (preferably) metric server + load broker.&lt;BR /&gt;&lt;BR /&gt;Have each pair of nodes connected to the others over at least two GEOGRAPHICALLY INDEPENDENT network lines.&lt;BR /&gt;&lt;BR /&gt;For 24 x 7, or even 24 x 365.25, operation you probably also need some way of doing rolling upgrades of your applications.&lt;BR /&gt;&lt;BR /&gt;We implement this by having a separate service name for each application, which is divided by round-robin over every node (normally 4 for us) that offers that service.&lt;BR /&gt;Planned upgrades are done by taking one node for that service out of the round-robin.&lt;BR /&gt;User sessions are limited to 10 hours, so after that the node is free of that app.&lt;BR /&gt;On THAT node we perform the upgrade and the verification. If all is OK, we move the app over to the new-node version. Depending on whether running both versions simultaneously is acceptable, we do or do not kill the user sessions on the old version. After all old-version sessions are gone, we restore the round-robin.&lt;BR /&gt;This allows for NO application interruption (where simultaneous multi-version running is allowed) or minimal interruption (break running sessions &amp;amp; restart where it is not).&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 10 Nov 2005 07:34:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-setup-for-high-availability-itanium-cluster/m-p/3668489#M72498</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-11-10T07:34:35Z</dc:date>
    </item>
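    <!--
    The metric server + load broker approach Jan recommends is driven by a
    BIND-style load broker configuration file. The fragment below is a sketch
    from the TCP/IP Services load broker documentation, with assumed names and
    addresses throughout (clu.example.com, the 10.0.10.x hosts); field names
    and defaults should be verified against the configuration template shipped
    with TCP/IP Services before use.

    ```
    ! Sketch: a load-broker cluster entry. The metric server on each member
    ! reports load; the broker updates DNS so new connections favour the
    ! least-loaded node, rather than failing over only on node death.
    cluster "clu.example.com"
    {
        dns-ttl 45;                          ! short TTL so changes take effect
        dns-refresh 30;
        masters { 10.0.10.1; };              ! authoritative DNS server
        polling-interval 9;
        max-members 2;
        members { 10.0.10.11; 10.0.10.12; }; ! the two RX2620s
    }
    ```

    A separate cluster entry per service name, as in Jan's scheme, lets one
    node be withdrawn from a single service's round-robin for a rolling
    upgrade without disturbing the other services.
    -->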
    <item>
      <title>Re: TCPIP setup for high-availability Itanium cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/tcpip-setup-for-high-availability-itanium-cluster/m-p/3668490#M72499</link>
      <description>&lt;BR /&gt;Hello from San Diego.&lt;BR /&gt;&lt;BR /&gt;You can use SCACP to set the priority for cluster traffic.  You don't need to configure anything; out of the box this just works for a network-interconnect cluster.  &lt;BR /&gt;&lt;BR /&gt;For the two networks, you should make sure that there is redundant physical networking equipment supporting connectivity.  Two VLANs on the same switch are not redundant.  &lt;BR /&gt;&lt;BR /&gt;I would consider using LAN failover on your public network; you can combine LAN failover and failSAFE IP.   I'd want to have a "service" address or addresses and a dedicated management address for each system.&lt;BR /&gt;&lt;BR /&gt;As Jan says, pass on the cluster alias.&lt;BR /&gt;&lt;BR /&gt;What sort of storage are you planning?  &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Andy</description>
      <pubDate>Thu, 10 Nov 2005 11:26:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/tcpip-setup-for-high-availability-itanium-cluster/m-p/3668490#M72499</guid>
      <dc:creator>Andy Bustamante</dc:creator>
      <dc:date>2005-11-10T11:26:25Z</dc:date>
    </item>
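    <!--
    Andy's two suggestions, steering SCS (cluster) traffic with SCACP and
    binding NICs into a LAN failover set with LANCP, might look roughly like
    this. The device names (EIA0, EIB0, LLA0) are placeholders and the exact
    qualifier spellings should be confirmed with each utility's built-in HELP
    before applying anything:

    ```
    $! Sketch: prefer the private LAN for cluster traffic by raising its
    $! management priority relative to the public LAN.
    $ MCR SCACP
    SCACP> SET LAN_DEVICE EIB0 /PRIORITY=2   ! assumed private-LAN device
    SCACP> SHOW LAN_DEVICE                   ! verify current priorities
    SCACP> EXIT
    $! Sketch: LAN failover joins two physical NICs into one logical device;
    $! IP (and failSAFE IP) is then configured on top of the LL device.
    $ MCR LANCP
    LANCP> DEFINE DEVICE LLA0 /FAILOVER_SET=(EIA0,EIB0) /ENABLE
    LANCP> EXIT
    ```

    With the priority set, cluster and shadowing traffic stays on the private
    network while both paths are up, and PEDRIVER falls back to the public
    network if the private path fails, which is the behaviour Paul asked for.
    -->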
  </channel>
</rss>

