Operating System - OpenVMS
11-09-2005 07:03 PM
TCPIP setup for high-availability Itanium cluster
Hi all,
Greetings from sunny New Zealand.
I'm looking for suggestions and input for a multi-site cluster configuration. There are three nodes: two rx2620s, each with four (yes, four - we didn't realize when ordering that they came with two on-board!) Gigabit Ethernet ports, and an AlphaServer DS10. The DS10 is there as a vote provider/tie-breaker only; all three servers are in different buildings.
There are two networks between the servers: a public one (.10 subnet) and a private one. I will be using volume shadowing between the two rx2620s, so I ideally want as much cluster and volume-shadowing traffic as possible to go over the private network, failing over to the public network only if necessary. Multi-site clustering is required because this is a real 24x7 operation, and I can't let anything as minor as a plane crash, meteorite strike or civil unrest stop production.
I've been looking at load brokering, failSAFE IP, etc., but wondered whether anyone has set up a similar environment, and how they arranged the IP addresses (the number of addresses is not an issue, by the way; I have my own subnet to play with), cluster aliases, and public and private addresses.
VMS and TCP/IP versions are all the latest; applications will include Rdb and MessageQ.
[BTW, I've already read Matt Muggeridge's excellent paper on high-availability TCP/IP.]
Thanks in advance.
Paul Jerrom.
Have fun,
Peejay
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If it can't be done with a VT220, who needs it?
2 REPLIES
11-09-2005 11:34 PM
Re: TCPIP setup for high-availability Itanium cluster
Paul,
Forget about IP clustering; that is just a crude Unixian failover, with ALL cluster connections going to exactly ONE node, and failover only upon node failure.
Use DNS round-robin or (preferably) the metric server + load broker.
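For the archives, a minimal sketch of what the load broker configuration file (TCPIP$LBROKER.CONF) looks like; the service name and addresses here are made-up examples, so substitute your own and check the TCP/IP Services management guide for your version:

    cluster "appsvc.example.com"
    {
        dns-ttl 45;                              # short TTL so clients re-resolve often
        dns-refresh 30;                          # how often the broker refreshes DNS
        masters { 10.10.10.53; };                # name server accepting dynamic updates
        polling-interval 9;                      # seconds between metric-server polls
        max-members 2;                           # addresses returned per DNS response
        members { 10.10.10.11; 10.10.10.12; };   # the two rx2620 service addresses
    }

Each participating node also needs the metric server enabled, so the broker can weigh the nodes by load.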
Have each pair of nodes connected to the others over at least two GEOGRAPHICALLY INDEPENDENT network lines.
For 24 x 7, or even 24 x 365.25, operation, you probably also need some way of doing rolling upgrades of your applications.
We implement this by having a separate service name for each application, which is divided by round-robin over every node (normally four, for us) that offers that service.
Planned upgrades are done by taking one node for that service out of the round-robin.
User sessions are limited to 10 hours, so after that the node is free of that app.
On THAT node we perform the upgrade and the verification. If all is OK, we move the app over to the new version. Depending on whether simultaneous running of the two versions is acceptable, we do or do not kill the user sessions on the old version. After all old-version sessions are gone, we reinstate the round-robin.
This allows for NO application interruption (where simultaneous multi-version running is allowed) or minimal interruption (break running sessions and restart, where it is not).
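On the DNS side, the per-application service name is nothing exotic: just several A records, one per node currently offering the service (hypothetical name and addresses below). Taking a node out of the round-robin means deleting its record, then waiting out the TTL plus the session limit:

    ; appsvc is resolved round-robin over the nodes currently offering it
    appsvc    IN    A    10.10.10.11
    appsvc    IN    A    10.10.10.12
    appsvc    IN    A    10.10.10.13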
hth,
Proost.
Have one on me.
jpe
Don't rust yours pelled jacker to fine doll missed aches.
11-10-2005 03:26 AM
Re: TCPIP setup for high-availability Itanium cluster
Hello from San Diego.
You can use SCACP to set priorities for cluster traffic. You don't need to configure anything; out of the box this just works for a cluster interconnected over LAN interfaces.
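If you do want the cluster to prefer the private LAN, you can nudge PEdriver with SCACP; a sketch, where the EI* device names are just assumptions (use whatever SHOW LAN reports on your rx2620s):

    $ MCR SCACP
    SCACP> SHOW LAN                    ! list the LAN devices PEdriver is using
    SCACP> SET LAN EIC0 /PRIORITY=2    ! raise the private-network NICs so
    SCACP> SET LAN EID0 /PRIORITY=2    ! cluster/shadowing traffic prefers them
    SCACP> SHOW CHANNEL                ! verify which channels carry traffic now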
For the two networks, you should make sure that there is redundant physical networking equipment supporting connectivity. Two VLANs on the same switch are not redundant.
I would consider using LAN failover on your public network; you can combine LAN failover and failSAFE IP. I'd want a "service" address (or addresses) and a dedicated management address for each system.
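The LAN failover piece would look something along these lines; LLA0 and the EI* names are examples, and it's worth checking LANCP help on your version for the exact qualifiers (failSAFE IP itself is enabled separately via TCPIP$CONFIG):

    $ MCR LANCP
    LANCP> DEFINE DEVICE LLA0 /FAILOVER_SET=(EIA0,EIB0)   ! permanent database
    LANCP> SET DEVICE LLA0 /FAILOVER_SET=(EIA0,EIB0)      ! running system

You then put the IP address on LLA0 rather than on the physical devices, and failSAFE IP can still move the address to another interface if the whole failover set goes down.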
As Jan says, pass on the cluster alias.
What sort of storage are you planning?
Andy
If you don't have time to do it right, when will you have time to do it over? Reach me at first_name + "." + last_name at sysmanager net