<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Redundant cluster interconnect in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754693#M75173</link>
    <description>I have previously moved from MC to FDDI.&lt;BR /&gt;No problem. Take out the MC-related system parameters. Consider using larger packet sizes. See the documentation:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/732FINAL/6318/6318PRO.HTML" target="_blank"&gt;http://h71000.www7.hp.com/doc/732FINAL/6318/6318PRO.HTML&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/731FINAL/4477/4477PRO.HTML" target="_blank"&gt;http://h71000.www7.hp.com/doc/731FINAL/4477/4477PRO.HTML&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Mon, 20 Mar 2006 09:20:53 GMT</pubDate>
    <dc:creator>Ian Miller.</dc:creator>
    <dc:date>2006-03-20T09:20:53Z</dc:date>
    <item>
      <title>Redundant cluster interconnect</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754690#M75170</link>
      <description>Hello&lt;BR /&gt;We have a two-node Alpha cluster with MC and Ethernet as the cluster interconnect. One cluster node will have to be moved to another building (not far away), so MC will be replaced with Gigabit Ethernet.&lt;BR /&gt;This cluster must have more than one cluster interconnect component, so my first idea is to use two Gigabit Ethernet NICs for the cluster interconnect.&lt;BR /&gt;Can someone give me something like a "best practice" for such a solution and share his (her) experience? OS version 7.3-2. Can be upgraded to 8.2.&lt;BR /&gt;Thanks</description>
      <pubDate>Mon, 20 Mar 2006 07:58:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754690#M75170</guid>
      <dc:creator>Vladimir Fabecic</dc:creator>
      <dc:date>2006-03-20T07:58:15Z</dc:date>
    </item>
    <item>
      <title>Re: Redundant cluster interconnect</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754691#M75171</link>
      <description>I have a lot of 2nd-hand experience. The simple solution is two network paths. Often people have a dedicated path and use scacp to raise its priority, while the second, lower-priority path is the corporate backbone.&lt;BR /&gt;&lt;BR /&gt;Remember that SCS communication can't be routed, but it can be bridged.&lt;BR /&gt;&lt;BR /&gt;Check the appendix on wide area networks in the Guidelines for OpenVMS Cluster Management. It will provide all the required specs. There is an entire chapter on it.&lt;BR /&gt;&lt;BR /&gt;Often you will need to provide network specs to your network managers, and they are all in there.&lt;BR /&gt;&lt;BR /&gt;Good luck.</description>
      <pubDate>Mon, 20 Mar 2006 08:03:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754691#M75171</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2006-03-20T08:03:06Z</dc:date>
    </item>
    <item>
      <title>Re: Redundant cluster interconnect</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754692#M75172</link>
      <description>Last year we moved two large production clusters (7.3-2) from MC to GigE. Changeover went very smoothly, and we have had no operational problems since then.&lt;BR /&gt;&lt;BR /&gt;Duncan&lt;BR /&gt;</description>
      <pubDate>Mon, 20 Mar 2006 08:44:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754692#M75172</guid>
      <dc:creator>Duncan Morris</dc:creator>
      <dc:date>2006-03-20T08:44:41Z</dc:date>
    </item>
    <item>
      <title>Re: Redundant cluster interconnect</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754693#M75173</link>
      <description>I have previously moved from MC to FDDI.&lt;BR /&gt;No problem. Take out the MC-related system parameters. Consider using larger packet sizes. See the documentation:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/732FINAL/6318/6318PRO.HTML" target="_blank"&gt;http://h71000.www7.hp.com/doc/732FINAL/6318/6318PRO.HTML&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/731FINAL/4477/4477PRO.HTML" target="_blank"&gt;http://h71000.www7.hp.com/doc/731FINAL/4477/4477PRO.HTML&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 20 Mar 2006 09:20:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754693#M75173</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2006-03-20T09:20:53Z</dc:date>
    </item>
    <item>
      <title>Re: Redundant cluster interconnect</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754694#M75174</link>
      <description>Used MC, FDDI, FastE. VMS 7.2, 7.3-1 &amp;amp; -2. No issues. Just watch that there are no segments in your network that run at low speeds (we once found a segment at 10Mb half duplex). Whilst not a problem in itself, it will slow your shadow set copy operations.&lt;BR /&gt;&lt;BR /&gt;Kind regards&lt;BR /&gt;John.</description>
      <pubDate>Mon, 20 Mar 2006 09:33:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754694#M75174</guid>
      <dc:creator>John Abbott_2</dc:creator>
      <dc:date>2006-03-20T09:33:11Z</dc:date>
    </item>
    <item>
      <title>Re: Redundant cluster interconnect</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754695#M75175</link>
      <description>If you're relying on a 3rd party to provide the building-to-building network link, then apart from the spec. and guarantees of failover, it's worth asking where the failover path routes, for resilience and latency knowledge. I once heard of a backup link that lengthened the route by such a distance that it gave a noticeable performance drop. (Noticeable as it also served IP telnet traffic too.)&lt;BR /&gt;&lt;BR /&gt;J.</description>
      <pubDate>Mon, 20 Mar 2006 09:43:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/redundant-cluster-interconnect/m-p/3754695#M75175</guid>
      <dc:creator>John Abbott_2</dc:creator>
      <dc:date>2006-03-20T09:43:13Z</dc:date>
    </item>
  </channel>
</rss>