rx2620 - network connections for Cluster/SCS and DECnet
05-02-2006 11:12 AM
We have a 2-node cluster (two Itanium rx2620 servers, each with a dual-port fiber NIC). These boxes also have 2 copper net connections each. The quorum disk is on the MSA1000.
What I'd like to do is hook these boxes together directly with a crossover cable (i.e. not through a switch) for cluster/SCS and/or DECnet traffic, and also connect the Ethernet ports to redundant switches for IP. I only need DECnet BETWEEN the two nodes (yes, I know that is a bit odd ;-).
1. Is it OK to hook these servers together directly, and what ramifications might there be?
2. Which ports should I use for the best reliability and performance?
3. How do I configure DECnet to use the direct connection between the nodes?
4. Do I need something like FailSAFE to make DECnet fail over if one connection is down?
For example, I could use fiber to hook the boxes together with one or both ports, or I could use copper or any combination thereof. I have 4 net connections on each box so there are a number of possibilities.
Could I, or should I, even hook both fiber ports on each machine together and run SCS over one and DECnet over the other? I realize SCS will decide on its own what connection it will use.
I don't have to use all the ports, obviously, just looking for reliability and performance.
Thanks a ton!!!
Tom
Solved!
05-02-2006 12:20 PM
A crossover cable is one of the simplest, most reliable cluster interconnects. Plug it in, make sure the speed/duplex is correct, and be happy.
There's a lot of "it depends" here on the remaining questions. With two dual-ported NICs, I'd look at the bandwidth required. DECnet Phase IV or V? It sounds like you have redundant switching available; I'd want to use both NICs. Does your switching support fiber?
LAN failover is configured with the LANCP utility, see System Manager's Manual v 2, chapter 10. http://h71000.www7.hp.com/doc/82FINAL/aa-pv5nj-tk/aa-pv5nj-tk.HTMl
Configure DECnet Phase IV with sys$manager:netconfig.com, DECnet-Plus (Phase V) with sys$manager:net$configure.com.
http://h71000.www7.hp.com/doc/
You could configure 2 LAN failover devices, one node to node, the second to your switching. For reliability, I'd want to allocate one port from each NIC to each failover device.
SCS (cluster) traffic will just work.
Welcome to the VMS forum.
Andy
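Andy's LANCP pointer can be sketched concretely. A minimal sketch, assuming hypothetical device names (EIB and EID standing in for one port on each physical NIC - substitute your own); LANCP DEFINE writes the permanent database, so the set takes effect at the next reboot:

```dcl
$ MC LANCP
LANCP> DEFINE DEVICE LLB/ENABLE/FAILOVER=(EIB,EID)  ! one port from each NIC in the set
LANCP> LIST DEVICE LLB                              ! check the permanent database entry
```

Taking one port from each physical NIC means losing an entire NIC still leaves the failover set with a working path.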
05-02-2006 06:16 PM
Re: rx2620 - network connections for Cluster/SCS and Decnet
Have lots of interconnects and you will have plenty of interconnect redundancy. IMO, with the performance of network switches you could have one logical gigabit Ethernet for all your cluster and network communications. Physically it would be configured to avoid single points of failure due to hardware problems.
The days of coaxial cable providing cluster interconnects are not missed.
My AUS 2 cents.
05-02-2006 06:50 PM
Re: rx2620 - network connections for Cluster/SCS and Decnet
re: LAN Failover
LAN Failover is not supported for point-to-point connections. Refer to 'LAN Failover restrictions' in the manual cited above.
Maybe use LAN failover (one port from each NIC) to connect to your switches.
Use the point-to-point connections for SCS and DECnet. Multicircuit end nodes should work and may even do load sharing.
Volker.
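Volker's multicircuit suggestion could look like this in NCP for Phase IV - a sketch with hypothetical line/circuit names (EIA-0 for the point-to-point port, EIB-0 for the switch-facing one; use your actual device names):

```dcl
$ MC NCP
NCP> DEFINE LINE EIA-0 STATE ON      ! direct node-to-node port
NCP> DEFINE CIRCUIT EIA-0 STATE ON
NCP> DEFINE LINE EIB-0 STATE ON      ! switch-facing port
NCP> DEFINE CIRCUIT EIB-0 STATE ON
```

With both circuits on, a Phase IV multicircuit end node can keep DECnet reachable if one path drops.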
05-02-2006 08:10 PM
Re: rx2620 - network connections for Cluster/SCS and Decnet
Purely Personal Opinion
05-03-2006 03:39 AM
Re: rx2620 - network connections for Cluster/SCS and Decnet
Answers: DECnet Phase IV. No serious bandwidth/traffic requirements. Yes, redundant switching is available. BTW, this is VMS 8.2.
The comments pretty much confirm my hopes/plans.
I'm thinking to hook the fiber NICs directly together with one port and also hook the copper NICs directly together with a copper crossover.
Then put both node's remaining copper port to one switch and both node's remaining fiber port to another switch.
SCS will take care of itself, as folks say, so that leaves me with configuring DECnet IV and TCPIP. As you can tell, I'm not a big network guy. ;-)
I need to go figure out how to check the speed/duplex settings. This is all stock stuff, so I presume everything is set the same. Does it need to be full duplex? DUH. ;-) I read that with gigabit, auto-negotiate on the speed is now recommended.
I would want DECnet to pick one or both of the direct connections first, then the ethernet/switch route.
I would want IP to pick fiber first, then copper to/from the switch.
This is really ULTRA redundant and probably overkill but we might as well use what we have.
Thanks for the welcome Andy. I was a long-time Deccie. 17 years managing the CSCTLS (Champ/CSC) VMS cluster in the Customer Support Center in Colorado Springs. Then I got laid off - BIG SURPRISE, hahahahaha... I'm on a little one-month contract doing some VMS sys admin, then it's back to layoff limbo. ;-)
Ahhh, the beauty of being a nearly extinct dinosaur. ;-) Gimme a holler if you need a big ole useless lizard. ;-)
VMS, RDB, ACMS, SQL, 3GL languages - you know the drill. griesantomjean@msn.com (719)632-6565
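For checking the speed/duplex settings mentioned above, LANCP is the usual place to look - a sketch, with EIA0 as a hypothetical device name:

```dcl
$ MC LANCP
LANCP> SHOW DEVICE EIA0/CHARACTERISTICS       ! current speed, duplex, auto-negotiation
LANCP> SET DEVICE EIA0/SPEED=100/FULL_DUPLEX  ! volatile; only if a non-GigE port must be locked
```

Since GigE ports are normally left at auto-negotiate, the SET DEVICE line would apply only to the 10/100 copper ports, if at all.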
05-03-2006 08:40 PM
Re: rx2620 - network connections for Cluster/SCS and Decnet
Gigabit Ethernet devices are best left at auto-negotiate. Fibre GigE direct to Fibre GigE works fine in auto (or at least it does for the GigE NICs in a pair of DS25s that are directly connected to each other).
LANCP should let you lock the speed & duplex on non-GigE devices. A cross-over cable should be fine, but YMMV.
DECnet-Plus will do a much better job of multiple paths than Phase IV. You'll get load balancing and path failover for the price of the end-node licence. Set Phase V up to use the two paths you want and make sure that they are truly separate, then use Phase IV style addressing on both.
See http://h71000.www7.hp.com/openvms/journal/v5/index.html#decnet for some of the DECnet background.
failSAFE IP will let an IP address migrate from one NIC to the other. Set the two IP NICs up with their own specific addresses, then use failSAFE IP to manage the "service IP addresses" that people will connect to. Don't think in terms of a single IP address any more - think in terms of one (or more) IP addresses per service that your systems are offering.
See http://h71000.www7.hp.com/openvms/journal/v2/articles/tcpip.pdf for more info on IP.
Remember to disable SCS on adapters where it's not needed, such as the main network. Use SCACP to control SCS.
Given the choice I'd use GigE for the dual SCS connection between the machines, plus DECnet and use the others for IP connectivity to the outside world. Worth configuring silly IP addresses on the cross-linked pair too, if only for someone to be able to use PING as a test.
Any other protocols in use (e.g. LAT)? If so, then control which adapters they start up on too.
Cheers, Colin.
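Colin's point about disabling SCS on the adapters where it's not needed can be sketched with SCACP - EIB0 is a hypothetical stand-in for a switch-facing port:

```dcl
$ MC SCACP
SCACP> SHOW LAN          ! list the adapters SCS is currently using
SCACP> STOP LAN EIB0     ! stop SCS on the outward-facing adapter
SCACP> SHOW CHANNEL      ! confirm the remaining cluster channels
```

A change like this is volatile, so it would also need to go into a startup procedure to persist across reboots.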
05-03-2006 09:16 PM
Re: rx2620 - network connections for Cluster/SCS and Decnet
http://www.openvms.org/phorum/list.php?3
Purely Personal Opinion
05-03-2006 09:42 PM
Re: rx2620 - network connections for Cluster/SCS and Decnet
Re: Colin
The RX series will do auto-negotiate. I don't know of a way to lock the speed / duplex at EFI console level. Use LANCP DEFINE DEVICE instead.
Please see the discussion on settings of the network devices in thread
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=675002
Last year, at the Bootcamp, I was told (I think it was Andy, but I'm not sure) that the only way to set the speed and mode of the network devices on an RX-series is using the LANCP utility (at least with VMS).
FWIW,
Kris (aka Qkcl)
05-03-2006 09:55 PM
Re: rx2620 - network connections for Cluster/SCS and Decnet
To embroider slightly on Colin's comment: "silly" should mean an RFC 1918 (intranet) address block that the site is NOT using. That way, you will not end up with addressing or routing problems vis-a-vis the intranet or the internet in the future.
I also agree with Andy: there is little that can go wrong with a crossover (reversal) cable. The only failures in a steady state are broken connectors. Vermin nibbling on the cable can also be a problem (up to and including fork lifts and backhoes).
- Bob Gezelter, http://www.rlgsc.com
05-05-2006 08:01 AM
Re: rx2620 - network connections for Cluster/SCS and Decnet
I've read through the LAN failover and failSAFE IP documentation/links you've given. Thanks.
For reference on cabling, it makes sense to me to hook the 2 nodes directly together with both the copper GB port and 1 fiber port, and then connect the copper 10/100 and the other fiber port on each node to different switches, with the 2 fibers on one switch and the coppers on another.
My emphasis here is on simplicity - I won't be around, there will be others managing this cluster, and I'm not a big net guy myself.
So, I have 3 things to deal with: failSAFE IP, LAN failover, and DECnet IV. SCS will take care of itself, and I was planning on NOT using SCACP to control that, just to keep things simple and let it choose what it wants. I will have no performance/bandwidth issues with any of this.
I want the simplest, most easily manageable config that will give IP and DECnet failover capability. I don't mind if only SCS runs over the direct node-to-node connection and DECnet/IP both go only through the switches, as long as the configuring is easily dealt with for these folks.
FailSAFE seems more complicated than LAN failover, and I believe I can cover both IP and DECnet with LAN failover. What do you guys prefer for simplicity?
PLUS, if I use failSAFE then I ALSO have to configure either DECnet or LAN failover to cover DECnet.
I don't need a cluster alias for IP. The application isn't clusterable. They run a primary and a hot-standby node. I only want failover INSIDE each node, not cluster-wide. I'm leaning toward ONLY using LAN failover.
In addition to simplicity, of course I want reliability (which are often related, hahaha).
I also need this pretty quick and I'm having trouble deciding which route to take. Any preferences/ideas, guys?
Thanks AGAIN!!!
Tom
05-05-2006 08:04 PM
Re: rx2620 - network connections for Cluster/SCS and Decnet
LAN failover is easy to set up and will protect all network protocols running over the LLn: failover device from physical failures of the network link (between the network interface and the switch port). Only a LINK DOWN will trigger a LAN failover. If the switch somehow failed and stopped forwarding packets, LLDRIVER would not detect it.
FailSAFE IP will monitor the bytes-received counter of the LAN interface and act on the fact that no more bytes are being received. This may detect other failure scenarios as well, but it only protects the IP stack from LAN interface failure.
I would vote for LAN failover.
Volker.
05-10-2006 04:52 AM
Re: rx2620 - network connections for Cluster/SCS and Decnet
I've read up on it, and also on TCPIP and DECnet, and I'm still digesting how I will point these two at the logical LLA0 device that LAN failover creates. This is a production system, so I can't play around, and I'd prefer not to stop the network, but it appears I will have to in order to configure this.
My plan is to remove the ipconfig settings for my current NIC and then hope that TCPIP$CONFIG.COM will see the new logical LLA0 device and allow me to point the IP address at that.
Thanks again!!
Tom
05-10-2006 05:24 AM
Re: rx2620 - network connections for Cluster/SCS and Decnet
To add LAN devices into a LAN failover set, all protocols running on those LAN devices have to be stopped (including SCS). It might be tricky to attempt this on the running system.
You could make all the necessary definitions in the config files on the system disk and activate LAN failover with a single reboot:
$ MC LANCP
LANCP> DEFINE DEVICE LLA/FAILOVER=(EWA,EWB)/ENABLE
$ TCPIP SHOW CONF INT
Note settings
$ TCPIP SET CONF NOINT WE0 ! or similar
$ TCPIP SET CONF INT LE0/... ! same as old int
For DECnet-OSI, edit NET$CSMACD_STARTUP.NCL and change to ... COMMUNICATION PORT = LLA
For DECnet Phase IV it's NCP DEFINE LINE LLA-0 STATE ON and NCP DEFINE CIRCUIT LLA-0 STATE ON, using the characteristics of the existing line/circuit.
If you are running additional protocols on your LAN devices (SDA> SHOW LAN), you might also switch some of them to use LLA. Most network protocols have mechanisms (e.g. logicals to be set) to force them to use a specific LAN device.
To use the standard config procedures, you have to create the LLA device first in the running system. For DECnet, you have to use the @NET$CONFIGURE ADVANCED option to be able to specify the LLA device to be used.
Volker.
05-10-2006 05:48 AM
Re: rx2620 - network connections for Cluster/SCS and Decnet
What you said is basically what I surmised.
I assume you meant:
$ TCPIP SET CONF INT LLA0/... ! same as old int
Instead of .... "LE0" ....
BTW, from ana/sys I see ARP running, which I've not heard of (that I recall), and both IP and IPV6:
EIA5 868BD340 Eth 08-00 IP 0015 STRTN,UNIQ,STRTD
EIA6 868BDD40 Eth 08-06 ARP 0015 STRTN,UNIQ,STRTD
EIA7 868BE800 Eth 86-DD IPV6 0015 STRTN,UNIQ,STRTD
My hunch is DECnet will be easy on the running system, but for IP I'll have to set host from the other node, configure IP, and restart LANACP. If it doesn't work, I'll have to use the console and reboot.
Shoot, if I hadn't closed the thread I could give you more points...sorry!!
Thanks TONS again!!!
Tom
05-10-2006 06:08 AM
Re: rx2620 - network connections for Cluster/SCS and Decnet
"LE0" is the correct device name syntax for TCPIP - I checked my example config from our rx2600 when doing E8.2 fieldtest !
ARP is the TCPIP Address Resolution Protocol and will not need any extra handling, same for IPV6.
I still think it would be hard to activate the LLA device in the running system without a reboot. But you could start with just one LAN interface (without ANY protocol active, and connected to your switch) and put it in a LAN failover set as a single LAN device (this works). But as soon as you change DECnet to use that device, you might have a duplicate MAC address problem...
I vote for preparing everything in the config files and a quick reboot.
Volker.
PS: You should be able to re-open a thread, if you want to ;-)
05-10-2006 06:53 AM
Re: rx2620 - network connections for Cluster/SCS and Decnet
"each with a dual port fiber NIC). These boxes also have 2 copper net connections each"
So, _IF_ it would be inconvenient to shut down/reboot your cluster, then you _DO_ have a configuration that _WILL_ be able to stay up.
It depends on the relative importance of staying up vs. extra effort, but if you (temporarily) route all network traffic through the copper, you can reconfigure the fiber NICs and then re-route over the fail-safe pseudo device.
No worry about SCS - it will use ANY available connection.
The net effect will be some performance degradation and a temporary loss of redundancy.
It can be done, we did it.
hth
Proost.
Have one on me (maybe at the Bootcamp in Nashua?)
jpe