
May I use FDDI via a gigabit switch to make a VMS cluster?

 
Moca_Huang
Occasional Contributor

May I use FDDI via a gigabit switch to make a VMS cluster?

I would like to use my FDDI LAN card to build a VMS cluster.
But my inter-switch connection uses Gigabit Ethernet.
Does the SCS protocol support FDDI traffic going through a Gigabit switch and back to an FDDI interface?
Thanks.
6 REPLIES
Åge Rønning
Trusted Contributor

Re: May I use FDDI via a gigabit switch to make a VMS cluster?

I'm not sure I understand your network configuration. FDDI cards cannot be connected directly to a Gb switch. It seems like a better solution to get new Gb NICs for the VMS systems, which would give you a simpler configuration and better performance (if your OS version supports them).

If your FDDI card is connected to an FDDI concentrator or similar and that is part of your Gb backbone, I would expect SCS to work as long as it's a valid/supported network topology and the network equipment is not set to block the SCS protocol.

It might be a good idea to get your local HP support to check your network topology and give their recommendations.
VMS Forever
Lokesh_2
Esteemed Contributor

Re: May I use FDDI via a gigabit switch to make a VMS cluster?

Hi,

I have managed an FDDI-based cluster. The FDDI adapters were connected to an FDDI concentrator.

Best regards,
Lokesh
What would you do with your life if you knew you could not fail?
Martin P.J. Zinser
Honored Contributor

Re: May I use FDDI via a gigabit switch to make a VMS cluster?

Hello,

Both FDDI and Gigabit Ethernet are supported cluster interconnects (in fact, we just moved from FDDI to Gb as the interconnect in our cluster). What you will need to address is the transition from FDDI to Gb (and back) if you do want to use a mixed setup.
Keith Parris
Trusted Contributor

Re: May I use FDDI via a gigabit switch to make a VMS cluster?

This type of configuration can indeed work; it is supported according to the SPD, and there are customer sites with this type of configuration in use.

The Gigabit Ethernet switches must be able to bridge the SCS protocol. A pure IP router will not work -- it must be a bridge or bridge/router.
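
To confirm from the VMS side that SCS virtual circuits are actually forming across the bridged path, something like the following should do (just a sketch; check HELP ADD inside the utility for the exact class names on your version):

$ SHOW CLUSTER/CONTINUOUS
Command> ADD CIRCUITS,CONNECTIONS
Command> EXIT

If circuits to the remote nodes show as open over the LAN adapters, the switches are passing the SCS protocol; if the remote nodes never show up, suspect the bridging.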

If you are using "large" packets on FDDI (i.e. your setting for the SYSGEN parameter NISCS_MAX_PKTSZ is greater than 1498) then more care will be required.
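
For reference, the current value is easy to check with SYSGEN (a quick sketch; the exact output format varies by VMS version):

$ MCR SYSGEN
SYSGEN> SHOW NISCS_MAX_PKTSZ
SYSGEN> EXIT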

In versions of VMS prior to 7.3, VMS depended on the clearing of a bit in the FDDI header when a packet traversed from FDDI to Ethernet to let the cluster software know that packets larger than the standard 1498-byte Ethernet size could not pass through that path (but typically only DIGITAL -- now dnpg.com -- switches cleared that bit as expected). So you could have problems trying to use packet sizes larger than 1498 bytes if the switches don't act as the cluster software expects.

In VMS version 7.3 and above, PEDRIVER actively probes with different-sized packets to ascertain the actual maximum size of packet which gets through, which is a much better method than trying to rely on switch behavior that is not standardized.

Gigabit Ethernet can in theory support Jumbo packets of up to 9 KB in size (twice the maximum size for FDDI), but not all switch hardware actually supports the full maximum packet length the standard allows. For example, some switches only support 2000-byte "Mini-Jumbo" packets. For more info on Cisco switches, see http://www.cisco.com/warp/public/473/148.html . If your Gigabit Ethernet equipment cannot support full-size Jumbo frames, you'll have to settle for lower values of NISCS_MAX_PKTSZ, perhaps even as low as the default setting of 1498.

Having to move from 4,474-byte FDDI packets to 1,498-byte Ethernet packets will result in more interrupt-state overhead on the Primary CPU on systems that made heavy use of large packets. Note that if larger-size packets get dropped, connections may still be able to be formed between cluster nodes initially (as that uses small packets), but problems like closed virtual circuits will occur later when load is applied. Things which can use larger-size packets include MSCP-served disk transfers (and other large block data transfers).
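
If you do end up lowering NISCS_MAX_PKTSZ, the usual way to make the change stick is through MODPARAMS.DAT and AUTOGEN rather than setting it interactively in SYSGEN (a rough sketch; the value shown is just the default mentioned above, and the new setting generally only takes effect after a reboot):

$ ! Add a line like this to SYS$SYSTEM:MODPARAMS.DAT:
$ !     NISCS_MAX_PKTSZ = 1498
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK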

Potential problems to watch out for include excessive latency and excessive packet loss. I'd recommend checking your existing configuration's round-trip lock latency with the LOCKTIME.COM tool from http://encompasserve.org/~parris/kp_locktools_v6.bck so you'll have a baseline to which to compare the performance in the new configuration. I'd also check the ReXmt and ReRcv counters in SDA (or the equivalents in SCACP for 7.3 and above) to check for retransmitted or duplicated packets. See REXMT.COM from [KP_CLUSTERTOOLS] on the V5 Freeware CD for an example of looking at these counters under SDA.
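
As a starting point, something along these lines gets you to those counters (from memory, so treat it as a sketch; SCACP's HELP lists the exact SHOW commands available on your version):

$ ANALYZE/SYSTEM
SDA> SHOW PORTS
SDA> EXIT
$ MCR SCACP
SCACP> SHOW VC
SCACP> EXIT

SDA's SHOW PORTS gives a summary of the cluster ports and virtual circuits; REXMT.COM digs the actual ReXmt/ReRcv counters out of the PEDRIVER data structures, and SCACP (7.3 and above) reports per-VC counters directly.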

I'd also make sure you had the latest version of any patch kits with LAN driver or PEDRIVER content.
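
A quick way to see what is already installed (assuming PCSI-based patch kits; older VMSINSTAL-style kits will not show up here) is:

$ PRODUCT SHOW HISTORY

and then compare that against the latest LAN and PEDRIVER remedial kits available for your VMS release.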
Moca_Huang
Occasional Contributor

Re: May I use FDDI via a gigabit switch to make a VMS cluster?

Does anyone have real-world experience with this kind of configuration?
All of the network devices we use are switches/bridges.
We are trying to build a VMS cluster with the following architecture:
Our VMS machines have FDDI interface cards and connect to different Cisco Catalyst 5500 switches. We connect a Gigabit Ethernet switch between the two switches.
So I must make sure the cluster will work properly.

Thanks for your kind support in this case.
Richard W Hunt
Valued Contributor

Re: May I use FDDI via a gigabit switch to make a VMS cluster?

As far back as 1994 we had an FDDI cluster through HS241J controllers and a Gigaswitch. We also used token ring, not Ethernet, so I cannot speak to the Ethernet issues for SCS.

We had two HSJs with multi-pathed disks ranging from 2 to 9 GB, maybe about 40 such disks after shadowing. Each HSJ had one Alpha 1000 CPU running OpenVMS 6.2-1H2 with 64 MB of RAM, both acting as MSCP servers, plus one local system disk for booting, swap, and paging activities.

We also had 3 x Alpha 2100 (4 CPU per box) with 3 x DSSI bus local disks for boot, page, & swap ops, but all other disks were MSCP served from the Alpha 1000s through FDDI in twin single-attached configuration (as opposed to one dual-attached FDDI, which took up the same number of slots).

The twin FDDI token rings (aligned to be counter-rotating with respect to the token) gave us two things. First, since they were independent paths, there was no single point of failure. Second, dual-attached paths peaked at 160 Mbps but twin single-attached paths peaked at 200 Mbps.

This system ran like a champ except that the extra layer between the disks (behind the Alpha 1000s) and the Alpha 2100s (where the applications ran) was a bit slower than some folks liked. But on the other hand, it was still 5 to 20 times faster than the system that it replaced (on VAXen), so I guess it is all relative.

As to setting it up, we had some trouble until the HS241-J was released because that was the first controller in the HSJ class that supported multi-pathing (another part of the "no single point of failure" issue, which was a major hot button for our upper management team). Once that was set up with the right network description, we didn't have to do anything much with it.

We also had to be careful about rebooting it because at least one of the Alpha 1000s had to come up first to support MSCP for the other disks. We also COULD NOT boot from the HS241J even if we had wanted to. At least, not until a much later release of the firmware in the HSJ, and by then we didn't care at all.
Sr. Systems Janitor