Community Home > Servers and Operating Systems > Operating Systems > Operating System - HP-UX > ServiceGuard Lan cards
08-28-2009 12:52 AM
Hello,
Our customer has three rp3440 servers running HP-UX 11i v1 and Serviceguard A.11.15.
The three nodes form a cluster managed by Serviceguard.
I have configured the three NICs of each node as follows:
- lan0 is defined as a HEARTBEAT_IP and connected to the first switch (hub) in an independent VLAN (10.10.1.1)
- lan1 is defined as a HEARTBEAT_IP and connected to a second switch in another VLAN (192.168.1.100); this subnet is used in the package for data traffic
- lan2 is defined as a standby card, connected to the second switch and in the same VLAN as lan1.
NB: there is no connection between the above two VLANs.
1. What will the cluster situation be in case of failure of the second switch (lan1/lan2)?
2. What is the best configuration of the three NICs in Serviceguard, and the corresponding physical network connections?
Thanks and Regards
Roger
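For reference, the layout described above would look roughly like this in the cluster configuration ASCII file (a hedged sketch only; the node name "node1" is illustrative, and the exact template layout comes from the file generated by cmquerycl, so verify against your own cluster.ascii):

```
# Sketch of one node's network section in the Serviceguard
# cluster ASCII file (e.g. cmquerycl -v -C cluster.ascii ...).
# Node name and layout are illustrative; IPs are from the post.
NODE_NAME               node1
  NETWORK_INTERFACE     lan0
    HEARTBEAT_IP        10.10.1.1       # dedicated heartbeat VLAN, first switch
  NETWORK_INTERFACE     lan1
    HEARTBEAT_IP        192.168.1.100   # data subnet, second switch
  NETWORK_INTERFACE     lan2            # standby for lan1, no IP of its own
```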
3 REPLIES
08-28-2009 01:31 AM
Re: ServiceGuard Lan cards
1. Since lan0 is defined as a heartbeat, your cluster will continue to function normally. The package state will depend on your configuration: if you monitor the data subnet, the package may go down. Using one switch for the data network is a single point of failure (SPOF), so you may want to use redundant switches there as well.
2. If you have three cards, I think the best approach is to use one card for the cluster interconnect and the other two for the data network.
08-28-2009 01:51 AM
Solution
1.) The lan1/lan2 switch is a Single Point of Failure in your current configuration. If it fails, the applications of the cluster will be inaccessible (isolated from the rest of your network).
However, because the cluster heartbeat has an alternate route, the cluster will not start failing packages over nor rebooting nodes, so the cluster will remain ready to resume service as soon as the switch is fixed.
If your switch has enough built-in fault tolerance (multiple switch port modules, controllers & power supplies) so that a failure of the entire switch is unlikely enough for your purposes, this may be acceptable.
2.) An independent heartbeat connection is always a good thing, so your lan0 configuration is good.
Without getting more hardware, I don't think you can improve your lan1/lan2 configuration.
Getting another switch for data traffic would improve fault tolerance: you would trunk the two data traffic switches together, then connect lan1's from all nodes to one switch and lan2's to the other. With this configuration, the failure of one switch becomes survivable:
- heartbeat switch failure: no problem, heartbeat goes through the data subnet too.
- lan1 switch failure: no problem, all nodes failover to lan2 and keep serving clients; heartbeat on lan1 fails over to lan2 too.
- lan2 switch failure: just like lan1 switch
NIC failures are no problem either:
- lan0 NIC failure in any node: no problem, heartbeat on data subnet allows the system to keep running normally until the next scheduled maintenance break.
- lan1 or lan2 NIC failure in any node: no problem, that node just fails over to the other NIC, and the trunk connection between the data switches allows the data to pass from one switch to the other.
If your nodes allow On-Line Replacement of NICs, you could even replace them without stopping any of the nodes.
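After cabling the second data switch as described, the failover behaviour can be checked from any node. The commands below are standard HP-UX / Serviceguard tools; the interface numbers match the thread, but the MAC address argument is a placeholder you must replace with a real one from the remote node:

```
# Show PRIMARY/STANDBY state of each monitored interface:
cmviewcl -v

# Check hardware path and UP/DOWN link state of all three cards:
lanscan

# Exercise the inter-switch trunk with a link-level loopback from
# lan2 (PPA 2) to another node's lan1 MAC address.
# 0x00306E123456 is a placeholder MAC; substitute your own.
linkloop -i 2 0x00306E123456
```

These commands only run on HP-UX itself, so treat this as a checklist sketch rather than a portable script.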
NOTE: Serviceguard A.11.15 is obsolete, and HP-UX 11i v1 is approaching its end of life. To ensure a painless upgrade in the future, you should upgrade to the latest version of Serviceguard available for 11i v1 as soon as convenient. The newer versions have supported upgrade paths to newer OS versions.
You can upgrade Serviceguard as a rolling upgrade (one node at a time), but you cannot make any cluster configuration changes while the nodes are not all at the same Serviceguard version.
MK
08-28-2009 02:16 AM
Re: ServiceGuard Lan cards
Shalom Roger,
What you have asked for is an opinion, so do not expect unanimity.
If I had two nodes with three NICs each, I would put two NICs on the corporate LAN, bonded with Auto Port Aggregation (APA), and the third NIC on a private hub, with two heartbeats: one configured on the corporate LAN and one on the private LAN.
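The APA bond suggested above could be sketched as below. The variable names are from the HP-UX APA configuration files as I recall them, and lan900 is the conventional aggregate name; verify every line against the APA Administrator's Guide and your installed /etc/rc.config.d templates before using it:

```
# /etc/rc.config.d/hp_apaconf -- sketch: define aggregate lan900
HP_APA_INTERFACE_NAME[0]=lan900
HP_APA_LOAD_BALANCE_MODE[0]=LB_MAC

# /etc/rc.config.d/hp_apaportconf -- sketch: put both corporate-LAN
# NICs (lan1, lan2 in this thread's numbering) into group 900
HP_APAPORT_INTERFACE_NAME[0]=lan1
HP_APAPORT_GROUP_CAPABILITY[0]=900
HP_APAPORT_CONFIG_MODE[0]=MANUAL
HP_APAPORT_INTERFACE_NAME[1]=lan2
HP_APAPORT_GROUP_CAPABILITY[1]=900
HP_APAPORT_CONFIG_MODE[1]=MANUAL
```

Note that MANUAL mode requires matching port-trunk configuration on the switch; FEC or LACP auto modes are alternatives if the switch supports them.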
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com