03-03-2004 06:16 AM
VMS Cluster design issues
The older machines are a GS140, a GS60 and a GS60E, and they use an FDDI loop for the internode traffic. They also each have a single Ethernet NIC and a SAN card for storage.
The machines are running TCP/IP 5.3.
We are aware of OpenVMS 7.3-2, but we really do not want to upgrade just yet.
The new ES80 machines have multiple NICs and no FDDI, so I'm seriously considering getting rid of the FDDI loop in favour of a pure Ethernet solution.
This requires a more fault-tolerant Ethernet setup, especially for the three older nodes.
I know from reading various documentation that 7.3-2 and/or TCP/IP 5.4 offer an array of exciting new features for us to use: IP failover, hot-standby Ethernet NICs, perhaps something like Etherchannel, and so on.
So, is TCP/IP 5.4 available for 7.3-1? And which technologies would be best for me to use for the Ethernet configuration?
Any help will be greatly appreciated :)
03-03-2004 10:09 AM
Re: VMS Cluster design issues
TCP/IP 5.4 is supported on 7.3-1 and 7.3-2.
See http://h71000.www7.hp.com/doc/732FINAL/6524/6524pro.HTML
for an official statement. Not all of the failover technologies may be supported on the older version, though.
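To double-check what you are starting from, something like this should do it:
$ WRITE SYS$OUTPUT F$GETSYI("VERSION")   ! OpenVMS version string
$ TCPIP SHOW VERSION                     ! TCP/IP Services version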
On a more general note, we recently replaced FDDI as the interconnect in our cluster with Gigabit Ethernet cards and Cisco switches. No complaints yet.
Greetings, Martin
03-03-2004 02:52 PM
Re: VMS Cluster design issues
If a direct crossover link is not an option, then I would look into picking up a network switch whose ports you can set to 100/full instead of auto-negotiate. Make this a dedicated "hub" that only your VMS systems are plugged into. It can also carry the DECnet traffic between the nodes.
On our two-node clusters, our DBAs set up the crossover cable as the primary Oracle communications route.
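To check (and, where the adapter allows it, force) the speed and duplex from the VMS side as well, something along these lines should work; the exact qualifiers vary by adapter and version, so check LANCP HELP first:
$ MCR LANCP
LANCP> SHOW DEVICE EWA0/CHARACTERISTICS        ! current speed and duplex setting
LANCP> SET DEVICE EWA0/SPEED=100/FULL_DUPLEX   ! force 100/full (qualifiers from memory)
LANCP> EXIT
On a DE500 you can also pin the mode at the SRM console before booting, e.g. >>> SET EWA0_MODE FastFD.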
03-03-2004 08:22 PM
Re: VMS Cluster design issues
Purely Personal Opinion
03-04-2004 04:37 AM
Re: VMS Cluster design issues
FDDI and Gigabit Ethernet (provided your switches really support jumbo frames) have the advantage of handling larger packets than the 1498-byte payload size of Fast Ethernet. Larger packets are useful for block data transfers, which get used for things like MSCP serving (in case you're doing any of that) and lock remastering operations between nodes. For regular lock requests, which are only about 200 bytes in size and which probably represent the bulk of your inter-node SCS traffic, Fast Ethernet packet sizes are more than sufficient.
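As a side note, the largest packet PEDRIVER will use on the LAN is governed by a SYSGEN parameter (NISCS_MAX_PKTSZ, if I remember the name correctly), so it is worth checking that before expecting any benefit from the bigger FDDI or jumbo frames:
$ MCR SYSGEN
SYSGEN> SHOW NISCS_MAX_PKTSZ     ! maximum SCS packet size over the LAN
SYSGEN> EXIT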
You can have a mix of interconnects in use. You can leave the FDDI in place for communications among the existing nodes, use Fast Ethernet over the old nodes' single Ethernet adapter to talk between the old and new nodes, and talk among the new nodes with another Ethernet rail. You can even bridge the old FDDI to the new second Ethernet rail using a VN9000FX if you want redundant paths and don't want to buy more Ethernet adapters for the old nodes (although Fast Ethernet adapters are very inexpensive now).
There are two models of Gigabit Ethernet adapters, the older DEGPA and the newer DEGXA. The older adapter can run out of steam handling small packets like lock requests, so if you have high lock rates, a DE500 or DE602 could actually be faster. (You can measure lock rates with MONITOR DLOCK, MONITOR CLUSTER, SDA> LCK SHOW ACTIVE, and the LOCK_ACTV tool from the V6 Freeware CD directory [KP_LOCKTOOLS].) The newer DEGXA can handle about three times as many lock requests as a Fast Ethernet adapter. But Gigabit Ethernet adapters and switches are more expensive.
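For example, to get a quick look at your lock rates:
$ MONITOR DLOCK/INTERVAL=5       ! distributed lock manager activity, sampled every 5 seconds
$ MONITOR CLUSTER                ! cluster-wide summary including lock rates
$ ANALYZE/SYSTEM
SDA> LCK SHOW ACTIVE             ! lock activity via the SDA LCK extension
SDA> EXIT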
VMS can readily use multiple paths at once, so you could consider using multiple Fast Ethernet rails for now and plan to buy Gigabit Ethernet as the price comes down. But if performance is critical to you today, consider buying Gigabit Ethernet now.
You may also find it useful to compare lock request latencies between the different interconnect types. The LOCKTIME.COM tool from the V6 Freeware CD [KP_CLUSTERTOOLS] can measure this as you turn various links off and on. In general Fast Ethernet should be roughly the same or perhaps a little faster than FDDI (I've measured in the area of 240 microseconds for Fast Ethernet and perhaps 270 for FDDI on a GIGAswitch), and Gigabit Ethernet will be a bit faster (200 microseconds). Links with a cross-over cable and no switch will be faster (e.g. 140 microseconds for Gigabit Ethernet on a cross-over cable). EV7 platforms tend to have significantly lower latencies even with the same LAN adapters, due to their better I/O capabilities.
As you configure the new systems, the tools SHOW_SEGMENTS.COM and SHOW_PATHS_ECS.COM from the V6 Freeware CD directory [KP_CLUSTERTOOLS] can be handy in visualizing and double-checking the network connections between nodes, and seeing which paths VMS prefers between nodes.
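Assuming you have copied those procedures off the Freeware CD into your default directory, running them is simply a matter of (check each procedure's header for any parameters it expects):
$ @SHOW_SEGMENTS      ! which LAN segments each node can see
$ @SHOW_PATHS_ECS     ! which channels PEDRIVER currently prefers (the ECS)
$ @LOCKTIME           ! lock-request latency over the currently enabled paths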
03-07-2004 10:24 PM
Re: VMS Cluster design issues
I would like to add a little extra point of attention.
Way back when Tom Speake was still the Digital man for Disaster Tolerant Computing, he strongly insisted that the cluster interconnect(s) are essentially the SYSTEM BUS of your COMPUTER SYSTEM (considering the cluster to be ONE system), even if the nodes are dozens of kilometers apart. He specifically emphasised this in the (then upcoming, now very common) case where responsibility for "the network" is not in the hands of the VMS system people. Meaning: insist that you need a DEDICATED network connection for cluster communication. They might then get the difference between a network connection and a system bus, even if the two look alike.
When I recall our IP-network disturbances of the last couple of years, I am VERY glad we have our dedicated FDDI. We are currently beginning to implement Gigabit Ethernet. It will probably carry the bulk of the SCS traffic during normal operation, but we will certainly keep the FDDI as fallback. Without it, we would certainly not have a multisite cluster that has been up for 7 years.
So I agree with Keith: consider investing in FDDI cards. (By the way, are you ADDING your new machines, or doing a rolling REPLACEMENT of your old systems? The latter is what we did, and we simply re-used our 'old' FDDI cards.)
Anyway, SUCCESS!!
And once you have it all done, do post a report with your eventual choices and the story of how difficult (or how smooth) it all went.
Jan
03-24-2004 03:07 PM
Re: VMS Cluster design issues
An FDDI with 4 KB packets probably outperforms a 100 Mb NI with 1.4 KB packets, but it would be nice to be able to use both!
Tim
05-27-2005 04:52 AM
Re: VMS Cluster design issues
Now our experience: we upgraded a cluster to 7.3-2, up to and including the LAN V3.0 ECO patch.
The LAN V3.0 ECO is supposed to provide support for failover on the DE500 devices of a DS10. The machine is normally configured to run in the cluster (VAXCLUSTER=2) using the EWA0 and EWB0 devices [with TCP/IP configured on both, using two different IP addresses].
We had already configured LAN failover devices for other cluster members (and at other sites too) without trouble.
After we configured the standard devices EWA0 and EWB0 on the DS10 as a new LAN failover device, the machine booted, but it did not join the existing cluster as expected.
Furthermore, once the failover LLA device had been configured, DECnet and TCP/IP did use it, but SCS apparently refused to work over it!
In the end, after a call to HP, the culprit turned out to be that the Ethernet devices were configured for auto-negotiation instead of 100/full. That is why I am replying in this thread: Mike's advice is worth looking at. The console was serial, and nothing helpful appeared there during the boot sequence; the machine just "formed" its own cluster instead of joining the existing one.
Perhaps an additional trace level would have been helpful?
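For reference, the failover set itself was created with LANCP, roughly like this; I quote the qualifiers from memory of the 7.3-2 documentation, so verify them with LANCP HELP before relying on them. The duplex mismatch, by the way, showed up as errors on the device counters:
$ MCR LANCP
LANCP> DEFINE DEVICE LLA/FAILOVER_SET=(EWA,EWB)   ! permanent definition, applied at boot
LANCP> SET DEVICE LLA/FAILOVER_SET=(EWA,EWB)      ! create the set on the running system
LANCP> SHOW DEVICE EWA0/COUNTERS                  ! errors here can hint at a duplex mismatch
LANCP> EXIT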
06-05-2005 12:31 PM
Re: VMS Cluster design issues
Using MCR SCACP you can set the priority of the paths the cluster will use. You can even turn off a path if you wish, but then you lose your tolerance of disasters or of temporary problems in your local network.
SCACP is also a great way to view your cluster communication activity.
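For example (syntax from memory; SCACP has online HELP):
$ MCR SCACP
SCACP> SHOW CHANNEL                      ! one line per LAN path to each remote node
SCACP> SHOW LAN_DEVICE                   ! local LAN devices PEDRIVER is using
SCACP> SET LAN_DEVICE EWA0/PRIORITY=2    ! prefer paths through EWA0
SCACP> EXIT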
06-08-2005 12:10 AM
Re: VMS Cluster design issues
A private Gigabit Ethernet is best for cluster SCS communication.
06-08-2005 04:21 PM
Re: VMS Cluster design issues
"The cluster interconnect is NOT a network component. It is just the _SYSTEMBUS_. A little stretched maybe, but nevertheless.
It should NOT be dealt with by networks people with networks methodologies, but be totally under management and control of the clusters system managers."
For many years now we have lived upon that rule, and on several occasions we have been really glad for it.
Proost.
Have one on me.
jpe