<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Replacing Memory Channel with Gigabit Ethernet in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631093#M7881</link>
    <description>An interesting factor.  While 100 Mb cards were recommended to be hard-coded to fast full duplex, because Digital/Compaq didn't conform to standards, on Gigabit Ethernet you should use autonegotiate.&lt;BR /&gt;&lt;BR /&gt;If you use a Cisco switch you can do a "show tech" and it will show the settings on all the ports without any loss of security.&lt;BR /&gt;&lt;BR /&gt;Bob</description>
    <pubDate>Thu, 22 Sep 2005 03:18:17 GMT</pubDate>
    <dc:creator>comarow</dc:creator>
    <dc:date>2005-09-22T03:18:17Z</dc:date>
    <item>
      <title>Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631085#M7873</link>
      <description>We have an OpenVMS 7.3-2 cluster with 2 ES40s, 1 ES45 and a DS10, connected with Memory Channel and Gigabit Ethernet.&lt;BR /&gt;We will add an ES47 and replace Memory Channel with additional Gigabit NICs on all nodes.&lt;BR /&gt;We will also move the 2 ES40s to a remote location a few hundred meters away. &lt;BR /&gt;We are afraid that replacing MC with GigE will reduce performance and slow down production.&lt;BR /&gt;Any advice on what we have to take care of (like changing any parameters) is appreciated.&lt;BR /&gt;Does it make sense to keep MC only between the main production systems (ES47, ES45, DS10)?</description>
      <pubDate>Wed, 21 Sep 2005 05:37:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631085#M7873</guid>
      <dc:creator>Andrej Jerina</dc:creator>
      <dc:date>2005-09-21T05:37:12Z</dc:date>
    </item>
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631086#M7874</link>
      <description>If you introduce Gigabit Ethernet between all nodes then you should consider the use of jumbo frames, especially if you use host-based shadowing. The CPU usage of the Ethernet driver is higher than that of the Memory Channel driver, and the latency is higher. &lt;BR /&gt;&lt;BR /&gt;Keeping the MC between some nodes may help, depending on the way the workload is distributed.</description>
      <pubDate>Wed, 21 Sep 2005 05:58:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631086#M7874</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-09-21T05:58:33Z</dc:date>
    </item>
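    <!--
      A minimal DCL sketch of enabling jumbo frames for cluster (SCS) traffic,
      assuming OpenVMS Alpha 7.3-1 or later. LAN_FLAGS bit 6 and NISCS_MAX_PKTSZ
      are real SYSGEN parameters, but the right values depend on your adapters
      and switches; treat this as an illustration, not a recipe, and prefer
      MODPARAMS.DAT plus AUTOGEN for permanent changes.

      $ RUN SYS$SYSTEM:SYSGEN
      SYSGEN> USE CURRENT
      SYSGEN> SET LAN_FLAGS 64          ! bit 6 (%X40): enable jumbo frames on GbE
      SYSGEN> SET NISCS_MAX_PKTSZ 8192  ! let PEdriver use larger SCS packets
      SYSGEN> WRITE CURRENT             ! takes effect at the next reboot
      SYSGEN> EXIT
    -->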
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631087#M7875</link>
      <description>A measurement of VMS distributed lock request latency was done, and the result showed 120 microseconds for Memory Channel and 200 microseconds for Gigabit Ethernet.&lt;BR /&gt;&lt;BR /&gt;This means remote locking is slower with GbE compared with MC. The impact of this on your cluster depends on the way locks are used.&lt;BR /&gt;&lt;BR /&gt;See&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/freeware/freeware60/kp_clustertools/" target="_blank"&gt;http://h71000.www7.hp.com/freeware/freeware60/kp_clustertools/&lt;/A&gt;&lt;BR /&gt;and&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/freeware/freeware60/kp_locktools/" target="_blank"&gt;http://h71000.www7.hp.com/freeware/freeware60/kp_locktools/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;for various tools to monitor your cluster.</description>
      <pubDate>Wed, 21 Sep 2005 06:19:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631087#M7875</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-09-21T06:19:26Z</dc:date>
    </item>
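    <!--
      Alongside the freeware tools linked above, a few standard DCL commands give
      a quick view of distributed locking activity. A minimal sketch (the command
      names are standard OpenVMS; display options and intervals are up to you):

      $ MONITOR DLOCK             ! distributed lock manager request rates
      $ MONITOR CLUSTER           ! cluster-wide CPU, I/O and locking summary
      $ SHOW CLUSTER/CONTINUOUS   ! live view of members, circuits and ports
    -->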
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631088#M7876</link>
      <description>Gigabit Ethernet is the way to go. We run a split cluster of 4 ES45s. The two sites have two fibre cables of 8 and 4 km in length. Locks peak at about 2,500,000 and remastering issues are not a problem. Good workload management will prevent constant dynamic remastering. &lt;BR /&gt;&lt;BR /&gt;IMO, move away from proprietary solutions like Memory Channel. View the interconnects as networks and take advantage of all the good networking devices out there.&lt;BR /&gt;&lt;BR /&gt;Tom</description>
      <pubDate>Wed, 21 Sep 2005 19:27:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631088#M7876</guid>
      <dc:creator>Thomas Ritter</dc:creator>
      <dc:date>2005-09-21T19:27:02Z</dc:date>
    </item>
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631089#M7877</link>
      <description>Memory Channel was designed more around latency issues than bandwidth issues. Do not confuse the two... &lt;BR /&gt;&lt;BR /&gt;q&lt;BR /&gt;</description>
      <pubDate>Wed, 21 Sep 2005 21:05:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631089#M7877</guid>
      <dc:creator>Peter Quodling</dc:creator>
      <dc:date>2005-09-21T21:05:29Z</dc:date>
    </item>
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631090#M7878</link>
      <description>Andrej,&lt;BR /&gt;&lt;BR /&gt; With cluster interconnects, it's usually a case of more is better.&lt;BR /&gt;&lt;BR /&gt; If you don't have a compelling reason for disconnecting the Memory Channel, then leave it alone, at least for the nodes within the distance limits.&lt;BR /&gt;&lt;BR /&gt; If there are any unused network adapters, then just connect them all - for example, to a "private" hub with some or all nodes connected. There is no need to configure them; the cluster software will automatically find the paths and make use of them. Similarly, if you have unused 100Mb NICs, switches and cables are very cheap, and they provide redundant connections between nodes.&lt;BR /&gt;&lt;BR /&gt; At the very least, the fastest path will be used. More recent versions of OpenVMS will load balance across all available interconnects.&lt;BR /&gt;&lt;BR /&gt; For future planning, I agree with Thomas. Go with Gb Ethernet, rather than MC, as a cluster interconnect.</description>
      <pubDate>Wed, 21 Sep 2005 23:44:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631090#M7878</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2005-09-21T23:44:12Z</dc:date>
    </item>
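    <!--
      A quick way to confirm which adapters and paths the cluster software has
      discovered is the SCA control program. A minimal DCL sketch (SCACP and
      these SHOW commands are standard; output formats vary by version):

      $ MCR SCACP
      SCACP> SHOW LAN       ! local LAN devices in use by PEdriver
      SCACP> SHOW CHANNEL   ! one channel per local/remote adapter pair
      SCACP> SHOW VC        ! virtual circuits to each remote node
      SCACP> EXIT
    -->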
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631091#M7879</link>
      <description>We did this a number of months ago, after years of problems with MC.&lt;BR /&gt;&lt;BR /&gt;I'm glad to say everything went without any problems.&lt;BR /&gt;&lt;BR /&gt;As John said, use a private network for the cluster traffic. We used 2 Cisco switches, with links to each node in our 3-node cluster.&lt;BR /&gt;&lt;BR /&gt;Setting DECnet and cluster traffic to use just these paths was a bit fiddly, but worth it in the end.&lt;BR /&gt;&lt;BR /&gt;2 of the nodes are 75% loaded ES40's, and I've not seen any lock manager issues so far.&lt;BR /&gt;&lt;BR /&gt;Rob.</description>
      <pubDate>Thu, 22 Sep 2005 02:56:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631091#M7879</guid>
      <dc:creator>Robert Atkinson</dc:creator>
      <dc:date>2005-09-22T02:56:34Z</dc:date>
    </item>
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631092#M7880</link>
      <description>Keep the GbE used for cluster traffic away from the network people - they often don't understand the availability requirements, or the fact that it's not IP, and can cause trouble.</description>
      <pubDate>Thu, 22 Sep 2005 03:10:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631092#M7880</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-09-22T03:10:30Z</dc:date>
    </item>
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631093#M7881</link>
      <description>An interesting factor.  While 100 Mb cards were recommended to be hard-coded to fast full duplex, because Digital/Compaq didn't conform to standards, on Gigabit Ethernet you should use autonegotiate.&lt;BR /&gt;&lt;BR /&gt;If you use a Cisco switch you can do a "show tech" and it will show the settings on all the ports without any loss of security.&lt;BR /&gt;&lt;BR /&gt;Bob</description>
      <pubDate>Thu, 22 Sep 2005 03:18:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631093#M7881</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-09-22T03:18:17Z</dc:date>
    </item>
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631094#M7882</link>
      <description>I'd suggest dual Gigabit Ethernet with two physically separate private GigE LANs (not VLANs out of a big switch; use one dedicated small switch at each site) for the cluster traffic, then use other LAN interfaces for 'user' traffic (TCP/IP, DECnet, LAT etc.).&lt;BR /&gt;&lt;BR /&gt;How you set it up will depend very much on your workload (eg: locking implications) and other system features you're using (eg: HBVS). Look at the storage subsystem, how that's connected, and what kind of load the application imposes on it. Is it a disc I/O intensive application, or CPU intensive, or LAN I/O intensive, or whatever?&lt;BR /&gt;&lt;BR /&gt;In general, dual GigE seems to work pretty well compared with MC. As Ian mentioned, jumbo frames can help - which is another good reason for making the interconnects private and for clustering use only.</description>
      <pubDate>Thu, 22 Sep 2005 03:57:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631094#M7882</guid>
      <dc:creator>Colin Butcher</dc:creator>
      <dc:date>2005-09-22T03:57:10Z</dc:date>
    </item>
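    <!--
      If you dedicate specific adapters to cluster traffic, PEdriver can be told
      which LAN devices to use. A hedged sketch: START LAN and STOP LAN are real
      SCACP commands, but the device names (EWB, EWC) are hypothetical; substitute
      your own, and test on a non-production node first.

      $ MCR SCACP
      SCACP> START LAN EWB   ! ensure the private GigE adapter carries SCS traffic
      SCACP> STOP LAN EWC    ! exclude the 'user traffic' adapter from cluster use
      SCACP> EXIT
    -->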
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631095#M7883</link>
      <description>I'm getting ready to do the same thing.&lt;BR /&gt;We currently have Memory Channel between 2 buildings 1200' apart.  One node (of a 2-node cluster) is moving 20 miles away.  I've opted for using 2 GigE connections. They will be point-to-point connections.&lt;BR /&gt;&lt;BR /&gt;A suggestion I got from HP was to add the following line in SYSTARTUP_VMS.COM:&lt;BR /&gt;&lt;BR /&gt;$ MCR SCACP SET LAN /PRIORITY=10 EWA&lt;BR /&gt;&lt;BR /&gt;and since I will have two:&lt;BR /&gt;&lt;BR /&gt;$ MCR SCACP SET LAN /PRIORITY=10 EWB&lt;BR /&gt;&lt;BR /&gt;It's not a permanent setting; that's why it needs to execute at startup.&lt;BR /&gt;</description>
      <pubDate>Fri, 23 Sep 2005 07:26:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631095#M7883</guid>
      <dc:creator>Bruce Aschenbrenner</dc:creator>
      <dc:date>2005-09-23T07:26:43Z</dc:date>
    </item>
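    <!--
      A follow-up sketch for checking that the priority settings above took
      effect (SHOW LAN is a standard SCACP command; the exact column heading
      for priority varies between versions):

      $ MCR SCACP
      SCACP> SHOW LAN   ! management priority should now read 10 for EWA and EWB
      SCACP> EXIT
    -->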
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631096#M7884</link>
      <description>re comarow:&lt;BR /&gt;&lt;BR /&gt;&amp;gt;An interesting factor. While 100 Mb &lt;BR /&gt;&amp;gt;cards were recommended to be hard-coded &lt;BR /&gt;&amp;gt;to fast full duplex, because Digital/Compaq&lt;BR /&gt;&amp;gt;didn't conform to standards, on &lt;BR /&gt;&amp;gt;Gigabit Ethernet you should use autonegotiate.&lt;BR /&gt;&lt;BR /&gt;  Not true. There were no standards that Digital/Compaq didn't conform to! I don't know where it came from, but "Alphas don't support autonegotiation" is now, and has always been, a myth.&lt;BR /&gt;&lt;BR /&gt;  The recommendation from HP Customer Support Centres is to use autonegotiate for ALL NICs and switch/hub ports. Hard-setting your cards to 100/Full will NOT work if the switch port is set to auto.&lt;BR /&gt;&lt;BR /&gt;  The rule is that both NIC and switch port MUST be set the same. Either both hard-set to a specific speed and duplex, or both set to autonegotiate. Since most modern switches and hubs will have autonegotiate by default, that's what you should set your OpenVMS systems to.&lt;BR /&gt;</description>
      <pubDate>Sun, 25 Sep 2005 17:25:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631096#M7884</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2005-09-25T17:25:18Z</dc:date>
    </item>
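    <!--
      A sketch of checking (and, if you must, hard-setting) speed and duplex on
      an OpenVMS NIC with LANCP. SHOW DEVICE/CHARACTERISTICS is standard; the
      SET DEVICE qualifiers shown are an assumption in that their availability
      varies by adapter and OpenVMS version, so check HELP LANCP first. EWA0 is
      a hypothetical device name.

      $ MCR LANCP
      LANCP> SHOW DEVICE EWA0/CHARACTERISTICS        ! current speed, duplex, autonegotiation
      LANCP> SET DEVICE EWA0/SPEED=100/FULL_DUPLEX   ! only if the switch port is hard-set too
      LANCP> EXIT
    -->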
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631097#M7885</link>
      <description>Ian Miller wrote:&lt;BR /&gt;&lt;BR /&gt;"keep the GbE used for cluster traffic away from the network people - they often don't understand the availability requirements or the fact that it's not IP and can cause trouble."&lt;BR /&gt;&lt;BR /&gt;This is indeed a big problem. One might even consider using another brand of switches (e.g. Digital Networks when the whole company is using Cisco). That way it may be easier to argue that these are part of the system and not the network.&lt;BR /&gt;&lt;BR /&gt;YMMV,&lt;BR /&gt;&lt;BR /&gt;Bart Zorn&lt;BR /&gt;</description>
      <pubDate>Mon, 26 Sep 2005 02:14:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631097#M7885</guid>
      <dc:creator>Bart Zorn_1</dc:creator>
      <dc:date>2005-09-26T02:14:59Z</dc:date>
    </item>
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631098#M7886</link>
      <description>Re Ian &amp;amp; Bart:&lt;BR /&gt;&lt;BR /&gt;Let me once again bring Tom Speake into the floodlight.&lt;BR /&gt;&lt;BR /&gt;He was once Manager of Disaster Tolerant Computing at Digital.&lt;BR /&gt;&lt;BR /&gt;He did seminars on DT then.&lt;BR /&gt;&lt;BR /&gt;His basic rule then was (and he still holds to it, as per our meeting again at last Bootcamp):&lt;BR /&gt;&lt;BR /&gt;The Cluster Interconnect is __NOT__ a network connection. It is the __SYSTEM BUS__.&lt;BR /&gt;-- even though it might be 800 km long, and use network hardware --&lt;BR /&gt;&lt;BR /&gt;We have fought hard to get it accepted, but have been VERY happy with it on more than one occasion!&lt;BR /&gt;&lt;BR /&gt;Please, anybody, feel free to quote Tom on this; he still feels proud of every implementation that his text has helped to realise!&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Mon, 26 Sep 2005 08:13:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631098#M7886</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-09-26T08:13:12Z</dc:date>
    </item>
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631099#M7887</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;I'd be glad to discuss that with you. That was advice from the NSU group and years of getting clusters going.&lt;BR /&gt;</description>
      <pubDate>Wed, 28 Sep 2005 12:15:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631099#M7887</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-09-28T12:15:28Z</dc:date>
    </item>
    <item>
      <title>Re: Replacing Memory Channel with Gigabit Ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631100#M7888</link>
      <description>FDDI has throughput of up to 200 Mbps and is capable of transmission over 124 miles without losing signal integrity.  The laser optic option, as opposed to the multimode one, would be the preferred optical setup.  If you are running 7.3 or above, multiple protocols are supported across several modes of media transmission - CI, FDDI, CDDI, DSSI, SCSI, MC, FC, GbEther... my 2 cents.  FDDI should be an inexpensive option that you can find on the "used-car" lots.&lt;BR /&gt;&lt;BR /&gt;jzr</description>
      <pubDate>Fri, 28 Oct 2005 15:03:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/replacing-memory-channel-with-gigabit-ethernet/m-p/3631100#M7888</guid>
      <dc:creator>John Robles_1</dc:creator>
      <dc:date>2005-10-28T15:03:57Z</dc:date>
    </item>
  </channel>
</rss>

