<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: VMS Failover in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529058#M70640</link>
    <description>Kelly,&lt;BR /&gt;I think this should help you. Years back I was confronted with a similar issue, and this is the solution we adopted:&lt;BR /&gt;&lt;BR /&gt;1. A very critical application required&lt;BR /&gt;   it to be tied to an IP address&lt;BR /&gt;   rather than a host name.&lt;BR /&gt;&lt;BR /&gt;2. I had 2 standalone servers, one live&lt;BR /&gt;   and the other backup.&lt;BR /&gt;&lt;BR /&gt;3. To fail over, or switch, the application&lt;BR /&gt;   from Server A to Server B, what we did&lt;BR /&gt;   was define a secondary IP address.&lt;BR /&gt;&lt;BR /&gt;To give you a little more detail: each of my servers, Server A and Server B, had its own IP address. The application that needed to be tied to an IP address pointed to a secondary IP address on Server A (so Server A had 2 IPs). When failover of the application from Server A to Server B was required, we would shut down the application on Server A, remove the secondary IP from Server A, add the same IP to Server B as a secondary, and start the application on Server B.&lt;BR /&gt;&lt;BR /&gt;Let me know if any clarifications are needed&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;Mobeen</description>
    <pubDate>Thu, 21 Apr 2005 06:11:08 GMT</pubDate>
    <dc:creator>Mobeen_1</dc:creator>
    <dc:date>2005-04-21T06:11:08Z</dc:date>
    <item>
      <title>VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529053#M70635</link>
      <description>I have two non-clustered nodes with a TCP cluster alias address that I would like to fail over between.  I am forced to have users connect to an address, so I must have an address, not a hostname, bounce between two systems.  How can I do this?  I see the address defined on both systems when I issue a show interface /cluster, and when I issue an arp -a command I see the MAC of one of the servers, but it does not fail over when the host tied to that MAC address dies unless I change the interface with an ifconfig command.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Kelly</description>
      <pubDate>Thu, 21 Apr 2005 01:35:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529053#M70635</guid>
      <dc:creator>Kelly Phillipps</dc:creator>
      <dc:date>2005-04-21T01:35:10Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529054#M70636</link>
      <description>Hi Kelly,&lt;BR /&gt;&lt;BR /&gt;I am afraid that you are expecting a little bit too much of OpenVMS.&lt;BR /&gt;&lt;BR /&gt;You cannot expect an IP cluster alias to work between non-clustered systems.&lt;BR /&gt;&lt;BR /&gt;That said, I think you mean to say that your systems ARE clustered. But we would need more information. Which versions of OpenVMS and TCP/IP are you using? Can you show how you configured the cluster alias?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Bart Zorn&lt;BR /&gt;</description>
      <pubDate>Thu, 21 Apr 2005 01:54:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529054#M70636</guid>
      <dc:creator>Bart Zorn_1</dc:creator>
      <dc:date>2005-04-21T01:54:26Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529055#M70637</link>
      <description>Kelly,&lt;BR /&gt;&lt;BR /&gt;as Bart said, the TCPIP cluster alias is based on the OpenVMS cluster functionality.&lt;BR /&gt;&lt;BR /&gt;You may be able to implement your own 'poor man's alias' by writing a background batch job which PINGs the other node and, if the PING fails, activates the 'cluster' alias on the local node. But this will not be as robust as a real cluster alias.&lt;BR /&gt;&lt;BR /&gt;TCPIP V5.4 introduced failSAFE IP, which replaces the traditional TCPIP cluster alias, but to allow failover between 2 nodes, those nodes also need to be clustered.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 21 Apr 2005 02:04:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529055#M70637</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-04-21T02:04:03Z</dc:date>
    </item>
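Volker's 'poor man's alias' amounts to a small polling loop: ping the peer, and activate the alias locally when the peer stops answering. A minimal sketch of that decision logic, written in Python rather than DCL for brevity; the peer name, service address, and ping flags are illustrative placeholders, and a real job would invoke the TCPIP ifconfig commands where the strings are returned.

```python
import subprocess

ALIAS_IP = "192.0.2.10"        # hypothetical shared service address
PEER = "nodeb.example.com"     # hypothetical peer node

def peer_alive(host, ping=None):
    """Return True if the peer answers a single ping.
    `ping` may be injected for testing; by default this shells out
    (flags shown are the common Linux ping options, adjust per OS)."""
    if ping is None:
        ping = lambda h: subprocess.call(
            ["ping", "-c", "1", "-W", "2", h],
            stdout=subprocess.DEVNULL) == 0
    return ping(host)

def decide(peer_up, alias_held_locally):
    """Decide what the watchdog should do on this polling pass."""
    if not peer_up and not alias_held_locally:
        return "takeover"   # peer gone: activate the alias here
    if peer_up and alias_held_locally:
        return "release"    # peer back: optionally hand the alias back
    return "noop"
```

As Volker notes, this is not as robust as a real cluster alias: a broken network path between the two nodes makes both of them "take over" at once.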
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529056#M70638</link>
      <description>A TCP cluster alias is only feasible in a cluster, as stated by others. However, I don't see much real use for it at all, since TCPIP (neither the current V4 nor the next generation, V6) is capable of handling the cluster concept.&lt;BR /&gt;You will need to rely on external (= non-VMS-like) solutions, involving the METRIC service and an (external) DNS server, plus a name for the combination of those unrelated machines.&lt;BR /&gt;The DNS server will need to allow periodic updates to translate the cluster NAME to a specific machine, by its IP address, based on the METRIC outcome, by time, or whatever scheme you wish to use.&lt;BR /&gt;The advantage is that the machines do not need to be clustered - it's the "simple", *x-like solution. I don't say "bad" - it seems to work in that environment, but it's definitely NOT the VMS way of doing things, where synchronisation isn't even an issue.&lt;BR /&gt;&lt;BR /&gt;If TCPIP had the cluster awareness of DECnet, we wouldn't require this. One of those missed opportunities :-(&lt;BR /&gt;&lt;BR /&gt;Willem</description>
      <pubDate>Thu, 21 Apr 2005 02:41:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529056#M70638</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2005-04-21T02:41:43Z</dc:date>
    </item>
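Willem's DNS-based scheme can be sketched as two steps: pick the best reachable machine from the metric results, then push a dynamic update that remaps the shared name to that machine's address. The sketch below is a hedged illustration; the host names and addresses are placeholders, and the output lines merely imitate the nsupdate batch format rather than any METRIC/load-broker protocol.

```python
def pick_target(metrics):
    """metrics: {hostname: load_metric, or None if unreachable}.
    Return the reachable host with the lowest metric, or None."""
    alive = {h: m for h, m in metrics.items() if m is not None}
    if not alive:
        return None
    return min(alive, key=alive.get)

def dns_update(cluster_name, host_addrs, metrics, ttl=60):
    """Produce nsupdate-style lines remapping cluster_name to the
    currently best machine. Empty list means nobody is reachable."""
    target = pick_target(metrics)
    if target is None:
        return []
    return [
        f"update delete {cluster_name} A",
        f"update add {cluster_name} {ttl} A {host_addrs[target]}",
        "send",
    ]
```

The short TTL matters: clients cache the name-to-address mapping, so failover latency is bounded below by however long resolvers keep the stale record.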
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529057#M70639</link>
      <description>You need a third system to monitor availability (as is done for the METRIC / load broker machinery) to make this work properly. I guess you could have the standby node hold a connection to the live node and, when the connection breaks, modify the config with an ifconfig command.</description>
      <pubDate>Thu, 21 Apr 2005 03:31:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529057#M70639</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-04-21T03:31:15Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529058#M70640</link>
      <description>Kelly,&lt;BR /&gt;I think this should help you. Years back I was confronted with a similar issue, and this is the solution we adopted:&lt;BR /&gt;&lt;BR /&gt;1. A very critical application required&lt;BR /&gt;   it to be tied to an IP address&lt;BR /&gt;   rather than a host name.&lt;BR /&gt;&lt;BR /&gt;2. I had 2 standalone servers, one live&lt;BR /&gt;   and the other backup.&lt;BR /&gt;&lt;BR /&gt;3. To fail over, or switch, the application&lt;BR /&gt;   from Server A to Server B, what we did&lt;BR /&gt;   was define a secondary IP address.&lt;BR /&gt;&lt;BR /&gt;To give you a little more detail: each of my servers, Server A and Server B, had its own IP address. The application that needed to be tied to an IP address pointed to a secondary IP address on Server A (so Server A had 2 IPs). When failover of the application from Server A to Server B was required, we would shut down the application on Server A, remove the secondary IP from Server A, add the same IP to Server B as a secondary, and start the application on Server B.&lt;BR /&gt;&lt;BR /&gt;Let me know if any clarifications are needed&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;Mobeen</description>
      <pubDate>Thu, 21 Apr 2005 06:11:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529058#M70640</guid>
      <dc:creator>Mobeen_1</dc:creator>
      <dc:date>2005-04-21T06:11:08Z</dc:date>
    </item>
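Mobeen's procedure is a strict sequence: stop the application, release the secondary address on the old server, claim it on the new one, then restart. A minimal sketch of that ordering; the `ifconfig alias` / `-alias` syntax, interface name, and host names are illustrative assumptions, not exact TCPIP Services commands.

```python
def failover_plan(old_host, new_host, ip, iface="we0"):
    """Return the ordered (host, command) pairs to move a secondary
    ("alias") IP address, and the service bound to it, from old_host
    to new_host. Order matters: the address must be released before
    it is claimed, or both servers briefly answer ARP for the same IP."""
    return [
        (old_host, "stop_application"),                 # quiesce first
        (old_host, f"ifconfig {iface} -alias {ip}"),    # release the IP
        (new_host, f"ifconfig {iface} alias {ip}"),     # claim it
        (new_host, "start_application"),                # serve again
    ]
```

Clients may still hold a stale ARP entry for a short time after the move; a gratuitous ARP from the new holder (which failSAFE IP does automatically) shortens that window.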
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529059#M70641</link>
      <description>What a great response!  I appreciate the information.  &lt;BR /&gt;&lt;BR /&gt;I took the cluster apart recently and now have two totally independent nodes.  I have written a C program that assesses the application health on these two nodes and initiates a failover.  I am using decent to do some of the inter-node communications (using TCP to see if TCP is ok did not make as much sense).  I was looking for tricks to play so I could get a cluster alias working, and hoped that a TCP service would alias without a VMS cluster.  For those wondering, I un-clustered because, while a VMS cluster is more reliable than a standalone system, sometimes there are issues that cannot be totally eliminated and which affect the whole cluster.  Most applications cannot be completely distributed, so clustering is best, but mine can be completely redundant once I get a redirector working to route requests to the systems that work.&lt;BR /&gt;Thanks,&lt;BR /&gt;Kelly&lt;BR /&gt;</description>
      <pubDate>Thu, 21 Apr 2005 14:05:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529059#M70641</guid>
      <dc:creator>Kelly Phillipps</dc:creator>
      <dc:date>2005-04-21T14:05:44Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529060#M70642</link>
      <description>Hmmm.  Decnet is "decent".  I knew that all along!&lt;BR /&gt;&lt;BR /&gt;Seriously, I've been involved with building redundant VMS systems that aren't clustered -- to avoid the cluster transition time.  The time can be made smaller, but not zero, and there are some applications that can't take that.  Process automation, for one.&lt;BR /&gt;&lt;BR /&gt;It's a lot harder than you think.&lt;BR /&gt;&lt;BR /&gt;What happens if DECnet connectivity between the nodes fails?  Who "wins" the race condition, or do both become "primary"?  And how do you force one node to become the primary so you can do maintenance on the other?&lt;BR /&gt;&lt;BR /&gt;I've seen this handled with a special Q-bus arbitration card.  I've also seen a pair of programmable controllers connected via serial ports.&lt;BR /&gt;&lt;BR /&gt;In all cases, it seems to need a third system to be the "tiebreaker".  Of course, if that node fails, then what?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 22 Apr 2005 08:40:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529060#M70642</guid>
      <dc:creator>Stanley F Quayle</dc:creator>
      <dc:date>2005-04-22T08:40:32Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529061#M70643</link>
      <description>&amp;gt; I've been involved with doing redundant VMS systems that aren't clustered -- to avoid the cluster transition time. The time can be made smaller, but not zero. There are some applications that can't take that. Process automation, for one. &amp;lt;&lt;BR /&gt;&lt;BR /&gt;I've started a discussion of this issue in another thread, entitled "Real-time process control in an OpenVMS Cluster environment".</description>
      <pubDate>Fri, 22 Apr 2005 09:00:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529061#M70643</guid>
      <dc:creator>Keith Parris</dc:creator>
      <dc:date>2005-04-22T09:00:32Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529062#M70644</link>
      <description>"In all cases, it seems to need a third system to be the "tiebreaker". Of course, if that node fails, then what?"&lt;BR /&gt;&lt;BR /&gt;You have to have the third node (which does not have to be a VMS system - it can be other hardware like Stanley said). If that node fails then the application stays how it is and you lose the ability to automagically switch to standby.</description>
      <pubDate>Fri, 22 Apr 2005 12:06:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529062#M70644</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-04-22T12:06:28Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529063#M70645</link>
      <description>The application is serving a TCP request, and either system can respond, which makes this an easier task than some of the challenges others have listed here.  The watchdog program I have created runs on both hosts and watches both hosts, so there are four examinations being done.  As long as any one node is providing a good response, the system does not take action; it only notifies.  If the system that was responding goes bad, then both systems will attempt to remove the alias from the formerly good system and define it on the fail-to system.  Then the "bad" system will be reset (even rebooted if called for).  Since TCP may be the problem, I thought it would not be good to use TCP to monitor TCP.  In my tests there seems to be little ill effect (at least the customers don't see it) when both systems have the alias defined; it seems to go to the last one that grabbed it.  I think I am going to add a heartbeat on the serial interface and call it good at this point.  The next generation is a third system (as suggested in this thread) that will be a forwarder which examines the responses and sends requests to both boxes, or omits a system if it fails to return a proper response.  I thank all for the great responses.  I am still pondering them.&lt;BR /&gt;&lt;BR /&gt;Kelly</description>
      <pubDate>Tue, 26 Apr 2005 23:22:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529063#M70645</guid>
      <dc:creator>Kelly Phillipps</dc:creator>
      <dc:date>2005-04-26T23:22:59Z</dc:date>
    </item>
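Kelly's rules, as described, can be condensed into a single decision per polling pass: with two nodes each checking both nodes, a node counts as healthy if any observer still sees it serving; notify while the alias holder is healthy, fail over to a healthy standby when it is not, and raise an alarm when nobody is. A minimal sketch of that logic, with all names hypothetical:

```python
def watchdog_action(views, alias_holder):
    """views: {observer: {target: bool}} - each node's opinion of
    whether each node's service is responding (four checks total
    for two nodes). Returns (action, node) for this polling pass."""
    nodes = sorted(views)
    # A node counts as healthy if ANY observer sees it serving.
    healthy = {t for t in nodes
               if any(views[o].get(t, False) for o in views)}
    if alias_holder in healthy:
        return ("notify", alias_holder)   # keep serving, just report
    standby = next((n for n in nodes
                    if n != alias_holder and n in healthy), None)
    if standby is not None:
        return ("failover", standby)      # move alias, reset bad node
    return ("alarm", None)                # nobody healthy: page a human
```

The "any observer" rule is deliberately optimistic, matching the post's behavior of tolerating both nodes briefly holding the alias; a stricter quorum rule would need the third tiebreaker system discussed earlier in the thread.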
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529064#M70646</link>
      <description>The article Keith is talking about (Google found it, the HP search didn't ...)&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=861945" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=861945&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 27 Apr 2005 01:04:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529064#M70646</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-27T01:04:51Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Failover</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529065#M70647</link>
      <description>How about assigning some points to the responses?&lt;BR /&gt;&lt;BR /&gt;Pointer to help on points:&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/helptips.do?#33" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/helptips.do?#33&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks in advance.&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Aug 2005 07:19:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-failover/m-p/3529065#M70647</guid>
      <dc:creator>Stanley F Quayle</dc:creator>
      <dc:date>2005-08-30T07:19:40Z</dc:date>
    </item>
  </channel>
</rss>

