Operating System - HP-UX

Inconsistent subnetting information

 
SOLVED
I Hawley
Occasional Advisor

Inconsistent subnetting information

Hi guys,

I'm trying to set up a floating IP address on my Serviceguard cluster under HP-UX 11.23, and I'm getting this error:

Apr 25 12:21:10 - Node "lcsf1": Adding IP address 172.22.2.143 to subnet 255.255.252.0
cmmodnet: Subnet 255.255.252.0 is not a configured subnet.
cmmodnet: Use the "netstat -in" command to list the configured subnets.

....and it's right...


# netstat -in
IPv4:
Name   Mtu   Network      Address        Ipkts  Ierrs  Opkts  Oerrs  Coll
lan2*  1500  none         none               0      0      0      0     0
lan0   1500  172.22.0.0   172.22.2.140   48708      0   4913      0     0
lo0    4136  127.0.0.0    127.0.0.1       2816      0   2816      0     0

IPv6:
Name   Mtu   Address/Prefix  Ipkts  Opkts
lan2*  1500  none                0      0
lo0    4136  ::1/128          1891   1891
#

...and yet...

# ifconfig lan0

lan0: flags=1843
inet 172.22.2.140 netmask fffffc00 broadcast 172.22.3.255
#


So my question is: how can "netstat -in" report 172.22.0.0 for lan0 when ifconfig says lan0 is configured with 255.255.252.0?

It doesn't make any sense to me... am I missing something?
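For reference, the two outputs can be reconciled arithmetically: the "Network" column in netstat -in is just the interface address ANDed with the netmask, and the hex mask from ifconfig is the same value in a different notation. A quick sketch with stock Python's ipaddress module, using the values from the outputs above:

```python
import ipaddress

# The hex mask fffffc00 shown by ifconfig is dotted-quad 255.255.252.0.
mask = ipaddress.IPv4Address(0xFFFFFC00)
print(mask)  # 255.255.252.0

# ANDing the lan0 address with that mask yields the "Network"
# column that netstat -in prints.
iface = ipaddress.ip_interface("172.22.2.140/255.255.252.0")
print(iface.network.network_address)  # 172.22.0.0
```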

cheers,

Ian.
9 REPLIES
Steve Lewis
Honored Contributor

Re: Inconsistent subnetting information

It sounds like you accidentally put the netmask into the Serviceguard ASCII file under monitored networks, where it was actually expecting the network, 172.22.2.140.

Steve
I Hawley
Occasional Advisor

Re: Inconsistent subnetting information

Thanks Steve,

I've checked the ASCII file and there's no netmask in there at all. Maybe there should be. I wasn't even going to mention the cluster because I knew it might distract people. I'm perplexed as to why "netstat -in" says that lan0 has a subnet of 172.22.0.0 when lan0 is set to ffffc000.

cheers,

Ian

I Hawley
Occasional Advisor

Re: Inconsistent subnetting information

Sorry, I mean fffffc00 as in 255.255.252.0
I Hawley
Occasional Advisor

Re: Inconsistent subnetting information

For completeness, the cluster configuration file...

# cmgetconf
# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************

# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.

CLUSTER_NAME chips_cluster


# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations
# in which a running cluster fails, and then two equal-sized
# sub-clusters are both trying to form a new cluster. The
# cluster lock may be configured using only one of the
# following alternatives on a cluster:
# the LVM lock disk
# the quorum server
#
#
# Consider the following when configuring a cluster.
# For a two-node cluster, you must use a cluster lock. For
# a cluster of three or four nodes, a cluster lock is strongly
# recommended. For a cluster of more than four nodes, a
# cluster lock is recommended. If you decide to configure
# a lock for a cluster of more than four nodes, it must be
# a quorum server.

# Lock Disk Parameters. Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk.
# The FIRST_CLUSTER_LOCK_VG is the LVM volume group that
# holds the cluster lock. This volume group should not be
# used by any other cluster as a cluster lock device.

# Quorum Server Parameters. Use the QS_HOST, QS_POLLING_INTERVAL,
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system
# that is running the quorum server process. The
# QS_POLLING_INTERVAL (microseconds) is the interval at which
# Serviceguard checks to make sure the quorum server is running.
# The optional QS_TIMEOUT_EXTENSION (microseconds) is used to increase
# the time interval after which the quorum server is marked DOWN.
#
# The default quorum server timeout is calculated from the
# Serviceguard cluster parameters, including NODE_TIMEOUT and
# HEARTBEAT_INTERVAL. If you are experiencing quorum server
# timeouts, you can adjust these parameters, or you can include
# the QS_TIMEOUT_EXTENSION parameter.
#
# The value of QS_TIMEOUT_EXTENSION will directly affect the amount
# of time it takes for cluster reformation in the event of failure.
# For example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster
# reformation will take 10 seconds longer than if the QS_TIMEOUT_EXTENSION
# was set to 0. This delay applies even if there is no delay in
# contacting the Quorum Server. The recommended value for
# QS_TIMEOUT_EXTENSION is 0, which is used as the default,
# and the maximum supported value is 30000000 (5 minutes).
#
# For example, to configure a quorum server running on node
# "qshost" with 120 seconds for the QS_POLLING_INTERVAL and to
# add 2 seconds to the system assigned value for the quorum server
# timeout, enter:
#
# QS_HOST qshost
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000

FIRST_CLUSTER_LOCK_VG /dev/vgchips


# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
# NODE_NAME is the specified nodename in the cluster.
# It must match the hostname and both cannot contain full domain name.
# Each NETWORK_INTERFACE, if configured with IPv4 address,
# must have ONLY one IPv4 address entry with it which could
# be either HEARTBEAT_IP or STATIONARY_IP.
# Each NETWORK_INTERFACE, if configured with IPv6 address(es)
# can have multiple IPv6 address entries(up to a maximum of 2,
# only one IPv6 address entry belonging to site-local scope
# and only one belonging to global scope) which must be all
# STATIONARY_IP. They cannot be HEARTBEAT_IP.


NODE_NAME lcsf1
NETWORK_INTERFACE lan0
STATIONARY_IP 172.22.2.140
NETWORK_INTERFACE lan2
HEARTBEAT_IP 10.1.1.1
FIRST_CLUSTER_LOCK_PV /dev/dsk/c10t0d1
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0

# Primary Network Interfaces on Bridged Net 1: lan0,lan2.
# Warning: There are no standby network interfaces on bridged net 1.

NODE_NAME lcsf2
NETWORK_INTERFACE lan0
STATIONARY_IP 172.22.2.141
NETWORK_INTERFACE lan2
HEARTBEAT_IP 10.1.1.2
FIRST_CLUSTER_LOCK_PV /dev/dsk/c9t0d1
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0

# Primary Network Interfaces on Bridged Net 1: lan0,lan2.
# Warning: There are no standby network interfaces on bridged net 1.


# Cluster Timing Parameters (microseconds).

# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds).
# This default setting yields the fastest cluster reformations.
# However, the use of the default value increases the potential
# for spurious reformations due to momentary system hangs or
# network load spikes.
# For a significant portion of installations, a setting of
# 5000000 to 8000000 (5 to 8 seconds) is more appropriate.
# The maximum value recommended for NODE_TIMEOUT is 30000000
# (30 seconds).

HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 2000000


# Configuration/Reconfiguration Timing Parameters (microseconds).

AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000

# Network Monitor Configuration Parameters.
# The NETWORK_FAILURE_DETECTION parameter determines how LAN card failures are detected.
# If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound
# message count stops increasing or when both inbound and outbound
# message counts stop increasing.
# If set to INOUT, both the inbound and outbound message counts must
# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION INOUT

# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You can not add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES 150


# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of
# 8 login names from the /etc/passwd file on user host.
# 2. USER_HOST is where the user can issue Serviceguard commands.
# If using Serviceguard Manager, it is the COM server.
# Choose one of these three values: ANY_SERVICEGUARD_NODE, or
# (any) CLUSTER_MEMBER_NODE, or a specific node. For node,
# use the official hostname from domain name server, and not
# an IP addresses or fully qualified name.
# 3. USER_ROLE must be one of these three values:
# * MONITOR: read-only capabilities for the cluster and packages
# * PACKAGE_ADMIN: MONITOR, plus administrative commands for packages
# in the cluster
# * FULL_ADMIN: MONITOR and PACKAGE_ADMIN plus the administrative
# commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the
# cluster's nodes as root (UID=0). Access control
# policy cannot limit root users' access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration file,
# and they apply to the entire cluster. PACKAGE_ADMIN can be set in the
# cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.
#
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN


# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM nor VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02

VOLUME_GROUP /dev/vgchips
#
Solution

Re: Inconsistent subnetting information

255.255.252.0 is not a subnet; it's a subnet mask. Applied to the subnet 172.22.0.0, it allows host addresses from 172.22.0.1 to 172.22.3.254, with broadcast 172.22.3.255.

cmmodnet doesn't accept subnet masks, just an IP address and a subnet, so you only need to provide 172.22.2.143 and 172.22.0.0.
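The range above can be double-checked with stock Python's ipaddress module (a quick sketch, using the addresses from this thread):

```python
import ipaddress

# The subnet 172.22.0.0 with mask 255.255.252.0 is a /22.
net = ipaddress.ip_network("172.22.0.0/255.255.252.0")

hosts = list(net.hosts())
print(hosts[0], hosts[-1])    # 172.22.0.1 172.22.3.254
print(net.broadcast_address)  # 172.22.3.255

# The floating address being added falls inside this subnet.
print(ipaddress.IPv4Address("172.22.2.143") in net)  # True
```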

Re: Inconsistent subnetting information

When does the problem happen?
Does it happen when you exec cmmodnet or when starting the package?
I Hawley
Occasional Advisor

Re: Inconsistent subnetting information

See? Told you I was being stupid! :-) Thanks, that's completely fixed my problem!

cheers,

Ian
I Hawley
Occasional Advisor

Re: Inconsistent subnetting information

Alfredo,

It was when I was starting the package. I have just shown my ignorance of subnetting and will be off to read a book!

It's all good here now.

thanks,

Ian

Re: Inconsistent subnetting information

Don't worry, be happy, because it's nothing more complicated than that.

Now you know something more about subnetting.

Regards,

Alfredo