Operating System - OpenVMS

Re: Building a New GbE Cluster. Need Suggestions!

 
Rick Dyson
Valued Contributor

Building a New GbE Cluster. Need Suggestions!

I am new to GbE. I will be building a v7.3-2 cluster of ES40s that will have dual GbE with a long separation distance. The network interconnects for cluster and network will be out of my hands directly, but I can request configs. :) The storage interconnect is FC between dual EMA12k units. My current cluster is a fully FDDI config that has to be retired.

Within GbE discussions, I have heard talk of 'jumbo frames', splitting the SCS & DECnet traffic onto just one adapter, etc. On the VMS side, is there any configuration to get 'jumbo frames'? Is this similar to (or the same as) the SYSGEN parameter that is set to 4k for optimal FDDI use?

The network equipment will be Cisco. What do I need to tell the network engineer to set up? None of them know VMS or any of its protocols, except to be automatically against them because they are 'different'. :) I will no longer be able to exclusively handle all the network switches.

Is it difficult to set up one GbE NIC for "DEC" traffic and the other for TCP/IP, but allow for automatic failover should one or the other fail? I plan on using the current TCPIP, and I have read about the ability to configure it for IP failover, so I assume that is what I am looking for, right?

I will also have a dual 10/100 Ethernet card that I want to set up on an independent LAN as a SCS failover should all the GbE drop out, just to keep my cluster up until the users can get back.

Any pointers to known-good references or the specific manuals will be greatly appreciated. I am under pressure to get this built and running quickly, and I am not fully up to speed yet. :)

rick
10 REPLIES
Jan van den Ende
Honored Contributor

Re: Building a New GbE Cluster. Need Suggestions!

Rick,

_THE_ main issue in situations where the network people are not experts in VMS is to convince management of this:

"The cluster interconnect is ___NOT___ a network connection, it is the ___SYSTEM BUS___" (C) Tom Speake, former Manager Disaster Tolerant Systems, Digital. (Tnx again, Tom!)

In your case, I would fight with all I have got to retain DSSI as a backup interconnect, and have that under VMS system management control.

Cisco has a habit of setting up such environments with "Spanning Tree", which implies connection failover times of dozens of seconds.
RECNXTINTERVAL had better be longer than that if you have no independent interconnect to fall back on.
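
As a minimal sketch of raising RECNXTINTERVAL via MODPARAMS.DAT and AUTOGEN (the value 60 is purely illustrative; measure your actual switch failover time first):

```
$ ! Add to SYS$SYSTEM:MODPARAMS.DAT on each node (60 is illustrative;
$ ! pick a value longer than the worst-case Spanning Tree convergence
$ ! you actually observe):
$ !     RECNXTINTERVAL = 60
$ ! Then apply it so the change survives future AUTOGEN runs:
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK
```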

Success.

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Rick Dyson
Valued Contributor

Re: Building a New GbE Cluster. Need Suggestions!

Noted! This NE is pretty decent, and I believe he will work with me on treating the GbE connection as critical and dedicated if I can get that set up cleanly. I just want to be able to guide him. In fact, he is the first one I ever heard use the phrase 'jumbo frames'... I just need to make it easy for them to accommodate me in the manner I would prefer. I will be meeting with him tomorrow to sketch out a plan.

I do like the quote, and will try and leverage it!

rick
Ian Miller.
Honored Contributor
Solution

Re: Building a New GbE Cluster. Need Suggestions!

Jumbo frames are a good thing if doing host based shadowing.

See
http://h71000.www7.hp.com/doc/732final/6631/6631pro_006.html#index_x_207
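
The SYSGEN knob involved is NISCS_MAX_PKTSZ; a hedged sketch follows (8192 assumes the whole GbE path, switches included, is jumbo-clean end to end - verify the supported maximum against the manual linked above):

```
$ ! Inspect the current SCS packet size:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW NISCS_MAX_PKTSZ
SYSGEN> EXIT
$ ! To enable large packets, add to SYS$SYSTEM:MODPARAMS.DAT:
$ !     NISCS_MAX_PKTSZ = 8192
$ ! and run AUTOGEN so the change is preserved:
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK
```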
____________________
Purely Personal Opinion
Ian Miller.
Honored Contributor

Re: Building a New GbE Cluster. Need Suggestions!

If you are looking at the new lan failover stuff then
http://h71000.www7.hp.com/doc/82FINAL/aa-pv5nj-tk/00/01/118-con.html

The TCPIP failover is different
http://h71000.www7.hp.com/doc/82FINAL/6524/6524pro_002.html#fs_interf
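
A hedged sketch of the LAN failover side (the device names LLA0, EWA0 and EWB0 are placeholders; check the exact LANCP qualifiers against the first manual above):

```
$ ! Build a LAN failover set LLA0 from two physical adapters,
$ ! both permanently (DEFINE) and on the running system (SET):
$ MC LANCP
LANCP> DEFINE DEVICE LLA0/FAILOVER_SET=(EWA0,EWB0)
LANCP> SET DEVICE LLA0/FAILOVER_SET=(EWA0,EWB0)
LANCP> EXIT
$ ! failSAFE IP, by contrast, is enabled through the TCPIP$CONFIG
$ ! procedure - see the second manual above for the details.
```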
____________________
Purely Personal Opinion
Bojan Nemec
Honored Contributor

Re: Building a New GbE Cluster. Need Suggestions!

Rick,

Don't forget the autonegotiation settings of the GbE adapters. They can produce strange situations. See:

http://h71000.www7.hp.com/doc/82FINAL/6674/6674pro_sm2.html#known_prob_clus_addnodes_h

Bojan
Rick Dyson
Valued Contributor

Re: Building a New GbE Cluster. Need Suggestions!

Full HBVS here between two SANs, but they are interconnected via FC.

This is great stuff! I feel a little guilty taking this easy way out of doing my homework!

rick
Jan van den Ende
Honored Contributor

Re: Building a New GbE Cluster. Need Suggestions!

Easy way out?

Oh no!
It is a very wise move:

You definitely need to set it up the best way possible.
No better way than the combined knowledge of the crowd here!

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Lawrence Czlapinski
Trusted Contributor

Re: Building a New GbE Cluster. Need Suggestions!

Rick: If you try autonegotiation, be sure to check what settings you wind up with on both ends. You may need to manually set the Cisco end, the ES40s, or both to the GbE option. We have had problems with 100 Mb Ethernet where systems didn't autonegotiate correctly with Cisco equipment. Forcing either end to 100 Mb worked. There may be a similar issue with GbE.
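
A quick way to see what actually got negotiated from the VMS side (EWA0 is a placeholder device name):

```
$ ! Show negotiated speed/duplex on a LAN adapter:
$ MC LANCP SHOW DEVICE EWA0/CHARACTERISTICS
$ ! On older 10/100 adapters the duplex can also be forced from the
$ ! console before boot, e.g.:
$ ! >>> SET EWA0_MODE FASTFD
```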
Lawrence
Rick Dyson
Valued Contributor

Re: Building a New GbE Cluster. Need Suggestions!

Thanks! I too have had that problem from time to time on an AS4100, an AS1200 and an ES40, all with various 10/100 NICs. Usually I just coordinate with the network team to statically set both ends to 100FD.

rick
Colin Butcher
Esteemed Contributor

Re: Building a New GbE Cluster. Need Suggestions!

Use two totally private dedicated switches to build the dual GigE cluster interconnection links. Not VLANs within a core switch. Use those two GigE ports in each system as the cluster interconnect only with jumbo frames on. Put one dedicated GigE switch at each site and use dedicated fibre interconnects (physical layout depends on exactly where everything is being located). Personally I've had good experience with the Digital Networks smaller GigE switches and they're well tested for this kind of configuration. Remember that this is the system interconnect and that latency is the killer - you want minimal latency, which means avoid things like encapsulating layer two SCS traffic and shipping it over the corporate WAN as IP traffic.

Run your LAN traffic for user connections over multiple 10/100 ports or over other GigE ports. Depending on load, separate the traffic over the adapters - split out TCPIP, DECnet, LAT etc. Turn SCS off on the adapters you aren't using for cluster comms. Configure TCPIP to use specific adapters (maybe use failSAFE IP, maybe use metric server & load broker, whatever - it depends on your anticipated traffic and workload).
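
Turning SCS off on a given adapter can be done with SCACP; a sketch assuming EWB0 is the adapter reserved for IP traffic (verify the command set against your V7.3-2 documentation):

```
$ MC SCACP
SCACP> SHOW LAN_DEVICE          ! which adapters carry SCS now
SCACP> STOP LAN_DEVICE EWB0     ! stop cluster traffic on the IP adapter
SCACP> EXIT
```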

Configure DECnet similarly (Phase V will do automatic load balancing over all available adapters), but do not enable Phase IV addressing on more than one adapter connected to the same LAN (or VLAN) otherwise you'll end up with a duplicate MAC address.

Configure LAT etc. to use specific adapters.

Consider LAN failover (a VMS thing where all protocols will fail across) as an alternative failover mechanism.

What works best for you will depend on your workload (and locking implications), performance expectations, network latencies, physical layout, availability requirements etc. - don't rush it without a lot of careful planning, as fixing it later will not be easy.

Remember the old DEC slogan - "the network is the system" and make sure that you understand how the machines, network and storage all inter-operate. You can't split them out without understanding the implications that each has for the other.

Cheers, Colin.
Entia non sunt multiplicanda praeter necessitatem (Occam's razor).