Operating System - OpenVMS

Replacing Memory Channel with Gigabit Ethernet

SOLVED
Andrej Jerina
Occasional Contributor

Replacing Memory Channel with Gigabit Ethernet

We have an OpenVMS 7.3-2 cluster with two ES40s, one ES45 and a DS10, connected with Memory Channel and Gigabit Ethernet.
We will add an ES47 and replace Memory Channel with additional Gigabit NICs on all nodes.
We will also move the two ES40s to a remote location a few hundred meters away.
We are afraid that replacing MC with Gigabit will reduce performance and slow down production.
Any advice on what we have to watch out for (such as changing any parameters) is appreciated.
Does it make sense to keep MC only between the main production systems (ES47, ES45, DS10)?
15 REPLIES
Ian Miller.
Honored Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

If you introduce Gigabit Ethernet between all nodes, then you should consider using jumbo frames, especially if you use host-based shadowing. The CPU usage of the Ethernet driver is higher than that of the Memory Channel driver, and the latency is higher.
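
A minimal sketch of enabling jumbo frames for cluster traffic via SYSGEN (assumes Alpha on V7.3-2; the LAN_FLAGS bit assignment and the NISCS_MAX_PKTSZ limit should be verified against the system parameter documentation for your version before use, and the switches in the path must also support jumbo frames):

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SHOW LAN_FLAGS            ! note the existing bits before changing them
SYSGEN> SET LAN_FLAGS 64          ! bit 6 is believed to enable jumbo frames
SYSGEN> SET NISCS_MAX_PKTSZ 8192  ! larger SCS packets help HBVS copy traffic
SYSGEN> WRITE CURRENT             ! takes effect at the next reboot
```

Also add the parameters to MODPARAMS.DAT so AUTOGEN preserves them.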

Keeping the MC between some nodes may help depending on the way the workload is distributed.
____________________
Purely Personal Opinion
Ian Miller.
Honored Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

A measurement of VMS distributed lock request latency showed 120 microseconds for Memory Channel and 200 microseconds for Gigabit Ethernet.

This means remote locking is slower with GbE than with MC. The impact of this on your cluster depends on the way locks are used.

See
http://h71000.www7.hp.com/freeware/freeware60/kp_clustertools/
and
http://h71000.www7.hp.com/freeware/freeware60/kp_locktools/

for various tools to monitor your cluster.
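
Before and after the cutover, the standard MONITOR and SHOW CLUSTER utilities give a first look at distributed-lock and cluster traffic (these are stock DCL commands; the 5-second interval is just illustrative):

```
$ MONITOR DLOCK /INTERVAL=5     ! distributed lock manager rates (local/in/out)
$ MONITOR CLUSTER /INTERVAL=5   ! per-node CPU, I/O and locking summary
$ SHOW CLUSTER /CONTINUOUS      ! watch circuits and connections change state
```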
____________________
Purely Personal Opinion
Thomas Ritter
Respected Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

Gigabit Ethernet is the way to go. We run a split cluster of 4 ES45. The two sites have two fibre cables of 8 and 4 km in length. Locks peak at about 2,500,000 and remastering issues are not a problem. Good workload management will prevent constant dynamic remastering.

IMO, move away from proprietary solutions like Memory Channel. View the interconnects as networks and take advantage of all the good networking devices out there.

Tom
Peter Quodling
Trusted Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

Memory Channel was designed more around latency than bandwidth. Do not confuse the two...

q
Leave the Money on the Fridge.
John Gillings
Honored Contributor
Solution

Re: Replacing Memory Channel with Gigabit Ethernet

Andrej,

With cluster interconnects, it's usually a case of more is better.

If you don't have a compelling reason for disconnecting the memory channel, then leave it alone, at least for the nodes within the distance limits.

If there are more unused network adapters, then just connect them all - for example, a "private" hub with some or all nodes connected. There is no need to configure them; the cluster software will automatically find the paths and make use of them. Similarly, if you have unused 100Mb NICs, switches and cables are very cheap, and they provide redundant connections between nodes.

At the very least, the fastest path will be used, and more recent versions of OpenVMS will load balance across all available interconnects.
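
One way to confirm that the LAN cluster port driver (PEDRIVER) has found and is using all the connected paths is SCACP. A sketch (the exact command names should be checked with HELP inside SCACP on your version):

```
$ MCR SCACP
SCACP> SHOW LAN       ! LAN devices known to the cluster port driver
SCACP> SHOW CHANNEL   ! per-remote-node channels and their current state
SCACP> EXIT
```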

For future planning, I agree with Thomas. Go with Gb ethernet, rather than MC as a cluster interconnect.
A crucible of informative mistakes
Robert Atkinson
Respected Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

We did this a number of months ago, after years of problems with MC.

I'm glad to say everything went without any problems.

As John said, use a private network for the Cluster traffic. We used 2 CISCO switches, with links to each node in our 3-node cluster.

Setting DECnet and cluster traffic to use just these paths was a bit fiddly, but worth it in the end.

2 of the nodes are 75% loaded ES40's, and I've not seen any lock manager issues so far.

Rob.
Ian Miller.
Honored Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

Keep the GbE used for cluster traffic away from the network people - they often don't understand the availability requirements, or the fact that it's not IP, and can cause trouble.
____________________
Purely Personal Opinion
comarow
Trusted Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

An interesting factor: while 100Mb cards were recommended to be hard-coded to fast full duplex, since Digital/Compaq didn't conform to the standards, on Gigabit Ethernet you should use autonegotiate.

If you use a Cisco switch, you can do a "show tech" and it will show the settings on all the ports without any loss of security.

Bob
Colin Butcher
Esteemed Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

I'd suggest dual Gigabit ethernet with two physically separate private GigE LANs (not VLAN's out of a big switch, use one dedicated small switch at each site) for the cluster traffic, then use other LAN interfaces for 'user' traffic (TCPIP, DECnet, LAT etc.).

How you set it up will depend very much on your workload (eg: locking implications) and other system features you're using (eg: HBVS). Look at the storage subsystem, how that's connected and what kind of load the application imposes on the storage subsystem. Is it a disc IO intensive application, or CPU intensive, or LAN IO intensive, or whatever.

In general dual GigE seems to work pretty well compared with MC. As Ian mentioned - jumbo frames can help - which is another good reason for making the interconnects private and for clustering use only.
Entia non sunt multiplicanda praeter necessitatem (Occam's razor).
Bruce Aschenbrenner
Occasional Visitor

Re: Replacing Memory Channel with Gigabit Ethernet

I'm getting ready to do the same thing.
We currently have Memory Channel between two buildings 1200' apart. One node (of a 2-node cluster) is moving 20 miles away. I've opted to use two GigE connections. They will be point-to-point connections.

A suggestion I got from HP was to add the following line in SYSTARTUP_VMS.COM:

$ MCR SCACP SET LAN /PRIORITY=10 EWA

and since I will have two:

$ MCR SCACP SET LAN /PRIORITY=10 EWB

It's not a permanent setting; that's why it needs to execute at startup.
John Gillings
Honored Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

re comarow:

>An interesting factor. While 100 mb
>cards were recommended to hard code
>fast full duplex, as digital/hcompaq
>didn't conform to standards, on
>gigabit ethernet use autonegotiate.

Not true. There were no standards that Digital/Compaq didn't conform to! I don't know where it came from, but "Alphas don't support autonegotiation" is now, and has always been, a myth.

The recommendation from HP Customer Support Centres is to use autonegotiate for ALL NICs and switch/hub ports. Hard setting your cards to 100/Full will NOT work if the switch port is set to auto.

The rule is that both NIC and switch port MUST be set the same. Either both hard set to a specific speed and duplex, or both set to autonegotiate. Since most modern switches and hubs will have autonegotiate by default, that's what you should set your OpenVMS systems to.
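
To see what a NIC has actually negotiated - and to hard-set it only when the switch port is hard-set to match - LANCP can be used. A sketch (EWA0 is an example device name, and the exact qualifier spellings should be checked with HELP in LANCP on your version):

```
$ MCR LANCP
LANCP> SHOW DEVICE EWA0 /CHARACTERISTICS       ! current speed and duplex
LANCP> SET DEVICE EWA0 /SPEED=100 /FULL_DUPLEX ! only if the switch port is fixed too
LANCP> EXIT
```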
A crucible of informative mistakes
Bart Zorn_1
Trusted Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

Ian Miller wrote:

"keep the GBE used for cluster traffic away from the network people - they often don't understand the availability requirements or the fact that its not IP and can cause trouble."

This is indeed a big problem. One might even consider using another brand of switches (e.g. Digital Networks when the whole company is using Cisco). That way it may be easier to argue that the switches are part of the system and not the network.

YMMV,

Bart Zorn
Jan van den Ende
Honored Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

Re Ian & Bart:

Let me once again bring Tom Speake into the floodlight.

He was once Manager of Disaster Tolerant Computing at Digital.

He did seminars on DT then.

His basic rule then was (and he still held to it when we met again at the last Bootcamp):

The Cluster Interconnect is __NOT__ a network connection. It is the __SYSTEM BUS__.
-- even though it might be 800 KM long, and use network hardware --

We have fought hard to get it accepted, but have been VERY happy with it on more than one occasion!

Please, anybody, feel free to quote Tom on this; he is still proud of every implementation his words have helped to realise!

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
comarow
Trusted Contributor

Re: Replacing Memory Channel with Gigabit Ethernet

John,

I'd be glad to discuss that with you. That was advice from the NSU group and years of getting clusters going.
John Robles_1
Occasional Visitor

Re: Replacing Memory Channel with Gigabit Ethernet

FDDI has up to 200 Mbps of throughput (dual ring) and is capable of 124 miles of transmission without losing signal. Single-mode laser optics, as opposed to multi-mode, would be the preferred optic setting. If you are running 7.3 or above, multiple interconnect types are supported - CI, FDDI, CDDI, DSSI, SCSI, MC, FC, GbEther. My 2 cents: FDDI should be an inexpensive option that you can find on the "used-car" lots.

jzr