
steven Burgess_2
Honored Contributor

Service Guard Cluster Network config

Hi Everyone

Have posted this in the network section, sticking it here as well.

We are putting a cluster between two geographically separate datacenters linked by a 100Mb LAN extension service. As far as routing is concerned, can anyone advise on the router configuration between the sites, i.e. extending the VLAN or creating an uplink/trunk of some sort? The link will not be dedicated, but shared between customers.

TIA

Steve
take your time and think things through
Todd Whitcher
Esteemed Contributor
Solution

Re: Service Guard Cluster Network config

A couple of things I can think of.

1. With VLANs, make sure they are not set up with ACLs that filter anything at layer 2 or 3.

From HP Doc KBRC00012899

ServiceGuard uses the same functions as the linkloop command to
communicate at the physical layer, but with added encapsulation. The
linkloop command and the "dlpiping" command that Serviceguard utilizes are
close in what they do: both test at layer 2. What is different about
dlpiping is that it uses the same SAP and SNAP binding as the ServiceGuard
daemons. The problem some customers run into is with access control lists
on VLANs.

Here is the breakdown:

Example MAC it uses to test (this is for lan18):

MAC Addr:
0x00306E4659FC 18 UP lan18

./dlpiping adds this to the frame:

0x00306e4659fcaa080009167f

0x00306e4659fc ( aa 080009167f )

That is, the destination MAC followed by SNAP encapsulation (0xaa) of type
0x080009167f, the cl_comm_snap protocol identifier.

Filtering set up in the access control lists can prevent this type of
data from passing. ServiceGuard does not support filtering of this type of
traffic between nodes in a cluster. The network should be transparent at
layer 2 (data link layer) and layer 3 (IP layer) between nodes in the cluster.
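
As a quick first check (not from the KB doc above, just a sketch assuming PPA 18 and the MAC from the example), the stock linkloop command can confirm that plain layer-2 frames cross the link at all. Note it does not use the ServiceGuard SNAP type, so linkloop can pass while the cl_comm_snap traffic is still being filtered:

# On the remote node, note the station address of the heartbeat NIC
lanscan

# On the local node, loop a test frame to that MAC through PPA 18 (lan18)
linkloop -i 18 0x00306E4659FC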

2. Continental Clusters is the only ServiceGuard variant that supports multiple IP subnets per cluster. ServiceGuard supports multiple subnets within a cluster, but all nodes must have access to those subnets. Only Continental Clusters allows nodes to have independent subnets. The reason for this is that ServiceGuard uses link-level communication between nodes, which does not go across routers.
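
To make point 2 concrete, here is a minimal sketch of the relevant part of a cluster ASCII file (cmquerycl/cmapplyconf format). Node names and addresses are placeholders; the point is that every node's HEARTBEAT_IP sits on the same subnet, which is why the VLAN has to be extended rather than routed between the sites:

NODE_NAME node1
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 192.168.2.11     # heartbeat subnet, identical at both sites
  NETWORK_INTERFACE lan2
    STATIONARY_IP 172.28.229.11   # data/public subnet

NODE_NAME node2
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 192.168.2.12     # same 192.168.2.0 subnet as node1
  NETWORK_INTERFACE lan2
    STATIONARY_IP 172.28.229.12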
Geoff Wild
Honored Contributor

Re: Service Guard Cluster Network config

As Todd mentioned - both sites must have access to the same subnets...

More info on networking:

http://docs.hp.com/en/B3936-90079/ch03s05.html

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Greg OBarr
Regular Advisor

Re: Service Guard Cluster Network config

I set up two VLANs on the Cisco switch - one for TCP/IP network access to the database and one "private" VLAN just for the heartbeat. The TCP/IP network and heartbeat have to be on two different subnets. So customers access the database and server via the TCP/IP VLAN on lan2, which also has the default route. lan1 is connected to the switch, and only the two cluster machines have access to that VLAN. No default route is needed for lan1 since both nodes are on the same subnet, and nobody can connect to these interfaces from the outside unless another port on the switch is configured onto that private VLAN. It works well for me.

steven Burgess_2
Honored Contributor

Re: Service Guard Cluster Network config

Thanks everyone,

Greg, what about the routing configuration between the two sites?

TIA

Steve
take your time and think things through
Greg OBarr
Regular Advisor

Re: Service Guard Cluster Network config

I am running a two-node cluster with Oracle as the only package. Through the Cisco switch configuration, both systems, even though they are in different locations, are on the same subnets - one subnet/VLAN for the TCP/IP network I/O and a different "private" subnet/VLAN exclusively for the second heartbeat IP. Since nobody will be interacting with the systems via the exclusive heartbeat address, I don't need to set up a route for that address - a route is automatically set up for the subnet when the interface is configured with ifconfig, as seen in the output of "netstat -rn":

[ catest01 ]# netstat -rn
Routing tables
Destination      Gateway          Flags   Refs   Int      Pmtu
127.0.0.1        127.0.0.1        UH      0      lo0      4136
192.168.2.11     192.168.2.11     UH      0      lan0     4136
172.28.229.11    172.28.229.11    UH      0      lan2     4136
172.28.229.13    172.28.229.13    UH      0      lan2:1   4136
172.28.229.0     172.28.229.11    U       3      lan2     1500
172.28.229.0     172.28.229.13    U       3      lan2:1   1500
192.168.2.0      192.168.2.11     U       2      lan0     1500
127.0.0.0        127.0.0.1        U       0      lo0      0
default          172.28.229.1     UG      0      lan2     0

In the output above, 192.168.2.11 is my exclusive heartbeat address on this system. Note that there is no route from the lan0 interface other than the one to the 192.168.2.0 subnet. On the other node, the exclusive heartbeat address is 192.168.2.12, and there is likewise no route other than the one that is automatically configured when you enable the interface with ifconfig (or rather, when the startup script does this via the /etc/rc.config.d/netconf entries). Since both systems are on the same subnet, they can communicate with each other (ping, telnet, etc.). But since there is no route outside of the subnet, nobody from the outside can come in through that interface. The Cisco is also set up with no routes into or out of this subnet. Only these two systems use this subnet, which is good for security purposes.

But the lan2 interface is what everyone uses to access the system, so there is a default route set up to 172.28.229.1, the gateway for the 172.28.229.0 subnet. The lan2:1 "virtual" interface is configured by the cluster and follows the package if it switches from one node to the other.
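
For completeness, a rough sketch of what the matching /etc/rc.config.d/netconf entries could look like on one node (the array indexes and 255.255.255.0 netmasks are assumptions; the addresses are the ones from the netstat output above). Only the public lan2 interface gets the default route; the private heartbeat interface gets nothing but an address and netmask:

# Public/data interface - users and clients reach the system through this one
INTERFACE_NAME[0]=lan2
IP_ADDRESS[0]=172.28.229.11
SUBNET_MASK[0]=255.255.255.0

# Private heartbeat interface - same subnet on both nodes, no route beyond it
INTERFACE_NAME[1]=lan0
IP_ADDRESS[1]=192.168.2.11
SUBNET_MASK[1]=255.255.255.0

# One default route, via the gateway on the public subnet only
ROUTE_DESTINATION[0]=default
ROUTE_GATEWAY[0]=172.28.229.1
ROUTE_COUNT[0]=1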
steven Burgess_2
Honored Contributor

Re: Service Guard Cluster Network config

That's great, thanks for your help Greg.

Steve
take your time and think things through