Operating System - HP-UX

SOLVED
rvent
Frequent Advisor

cmapplyconf will not work

Hello,

The setup: a Serviceguard cluster with 2 nodes (hp1 and hp2). We restored each of them from its respective Ignite tape after an HDD crash. Everything seems to be working fine, but when we try to apply a new config to Serviceguard it always comes back with an error, and cmcheckconf reports the same complaint. Here is the error:

Aug 23 08:11:33 hp1 CM-CMD[15477]: cmapplyconf -C cl5.ascii
Aug 23 08:11:33 hp1 cmclconfd[15478]: Querying volume group /dev/vg00 for node unix-1-10
Aug 23 08:11:33 hp1 cmclconfd[15478]: Volume group /dev/vg00 has no cluster attributes
Aug 23 08:11:33 hp1 cmclconfd[15478]: Querying volume group /dev/vg01 for node unix-1-10
Aug 23 08:11:42 hp1 cmclconfd[15478]: WARNING: User root from node hp1-10 (ip address 10.0.0.40) does not have privileges to access this node. Either they are coming from a node without enhanced security or somebody may be attempting un-authorized access to this system.

I checked the .rhosts, /etc/cmnodelist, and /etc/cmcluster/cmclnodelist on both servers, and they list both servers (hp1 and hp1-10, hp2 and hp2-10). The ascii file also shows root as having full control on any node in the cluster, yet I still can't run cmapplyconf.

I know that once the cluster is configured it won't check the .rhosts or cmclnodelist, but since both systems were re-ignited it might be a little different...

Any ideas…?

Thanks
15 REPLIES
Steven E. Protter
Exalted Contributor

Re: cmapplyconf will not work

Shalom,

This is a security issue, and I do not believe the message is going to be helpful in diagnosis.

Here are my thoughts.

If you use .rhosts for security, take a look and make sure the root user is permitted between the two systems.

Consider going with SG 11.16 and its cmclnodelist method of handling security.

I'm not sure your last statement is correct either.

Make sure you don't have a binary cluster configuration file from another node.

Sometimes merely trying the cluster configuration from the other node bypasses this issue.
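For example (a quick sketch, assuming the default location of the binary configuration file):

# on each node, look for a stale binary cluster configuration
# possibly left over from before the Ignite recovery
ll /etc/cmcluster/cmclconfig
# a copy present on one node but not the other, or with mismatched
# timestamps, suggests a leftover from another configuration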

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
IT_2007
Honored Contributor

Re: cmapplyconf will not work

Did you try to rlogin from one node to the other without a password? If that succeeds, try cmcheckconf and cmapplyconf again.
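A minimal sketch of that check, testing remsh as well as rlogin since Serviceguard's .rhosts-style validation follows the same rules, and covering both names of each node:

# from hp1
remsh hp2 date
remsh hp2-10 date
# from hp2
remsh hp1 date
remsh hp1-10 date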
nanan
Trusted Contributor

Re: cmapplyconf will not work

Hi rvent

I would check the /etc/hosts files.
If you have already verified the contents of .rhosts and cmclnodelist, were the file permissions correct as well?
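For instance (a sketch; exact modes vary by site, but neither file should be group- or world-writable):

# on both nodes
ll /.rhosts /etc/cmcluster/cmclnodelist
chmod 600 /.rhosts
chmod 444 /etc/cmcluster/cmclnodelist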

It looks like just a security problem.
Regards
nanan
rvent
Frequent Advisor

Re: cmapplyconf will not work

Steven

The cmcheckconf and cmapplyconf were tried from both servers and the same error was reported.


Srini

Yes, I am able to log in from each node to the other node by using:
from hp2:
rlogin hp1
rlogin hp1-10

from hp1:
rlogin hp2
rlogin hp2-10

and it connects with no problems and I get no password prompt.

Enrico P.
Honored Contributor

Re: cmapplyconf will not work

Hi,
try checking your /etc/hosts and /etc/nsswitch.conf files, as these links suggest for this error:

http://docs.hp.com/en/B9903-90043/ch05s01.html
http://docs.hp.com/en/B8325-90051/ch01s06.html
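For example (a quick sketch; nsquery honors the hosts line in /etc/nsswitch.conf, and reverse lookups matter here because the cmclconfd warning identifies the peer by IP address):

# on both nodes, for every cluster name
nsquery hosts hp1-10
nslookup 10.0.0.40
# the reverse lookup should come back as hp1-10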

Enrico
nanan
Trusted Contributor

Re: cmapplyconf will not work

rvent!

Could you post the output of hostname on each server, and explain what hp1-10 and hp2-10 are?
Heartbeat or something else?

In addition, post cluster.ascii

Regards
rvent
Frequent Advisor

Re: cmapplyconf will not work

Yes, hp1-10 and hp2-10 are the heartbeat.

[root@hp1]/etc/cmcluster
# hostname
hp1

[root@hp2]/etc/cmcluster
# hostname
hp2
===================

nsswitch.conf
passwd: files
group: files
#hosts: files dns
hosts: files [NOTFOUND=continue UNAVAIL=continue] dns [NOTFOUND=return UNAVAIL=return]
services: files
networks: files
protocols: files
rpc: files
publickey: files
netgroup: files
automount: files
aliases: files
=====================

hosts
127.0.0.1 localhost loopback
192.168.0.40 hp1

192.168.0.42 hp2
10.0.0.40 hp1-10
10.0.0.42 hp2-10
======================

.rhosts
hp1
hp1-10
hp2
hp2-10
=======================

cmnodelist
hp1 root
hp2 root
hp1-10 root
hp2-10 root
=======================

cmclnodelist
hp1 root
hp2 root
hp1-10 root
hp2-10 root
=======================

cl5.ascii

# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************

# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.

CLUSTER_NAME mycluster


# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations
# in which a running cluster fails, and then two equal-sized
# sub-clusters are both trying to form a new cluster. The
# cluster lock may be configured using only one of the
# following alternatives on a cluster:
# the LVM lock disk
# the quorum server
#
#
# Consider the following when configuring a cluster.
# For a two-node cluster, you must use a cluster lock. For
# a cluster of three or four nodes, a cluster lock is strongly
# recommended. For a cluster of more than four nodes, a
# cluster lock is recommended. If you decide to configure
# a lock for a cluster of more than four nodes, it must be
# a quorum server.

# Lock Disk Parameters. Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk.
# The FIRST_CLUSTER_LOCK_VG is the LVM volume group that
# holds the cluster lock. This volume group should not be
# used by any other cluster as a cluster lock device.

# Quorum Server Parameters. Use the QS_HOST, QS_POLLING_INTERVAL,
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system
# that is running the quorum server process. The
# QS_POLLING_INTERVAL (microseconds) is the interval at which
# Serviceguard checks to make sure the quorum server is running.
# The optional QS_TIMEOUT_EXTENSION (microseconds) is used to increase
# the time interval after which the quorum server is marked DOWN.
#
# The default quorum server timeout is calculated from the
# Serviceguard cluster parameters, including NODE_TIMEOUT and
# HEARTBEAT_INTERVAL. If you are experiencing quorum server
# timeouts, you can adjust these parameters, or you can include
# the QS_TIMEOUT_EXTENSION parameter.
#
# The value of QS_TIMEOUT_EXTENSION will directly affect the amount
# of time it takes for cluster reformation in the event of failure.
# For example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster
# reformation will take 10 seconds longer than if the QS_TIMEOUT_EXTENSION
# was set to 0. This delay applies even if there is no delay in
# contacting the Quorum Server. The recommended value for
# QS_TIMEOUT_EXTENSION is 0, which is used as the default
# and the maximum supported value is 30000000 (5 minutes).
#
# For example, to configure a quorum server running on node
# "qshost" with 120 seconds for the QS_POLLING_INTERVAL and to
# add 2 seconds to the system assigned value for the quorum server
# timeout, enter:
#
# QS_HOST qshost
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000

FIRST_CLUSTER_LOCK_VG /dev/vg01


# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
# NODE_NAME is the specified nodename in the cluster.
# It must match the hostname and both cannot contain full domain name.
# Each NETWORK_INTERFACE, if configured with IPv4 address,
# must have ONLY one IPv4 address entry with it which could
# be either HEARTBEAT_IP or STATIONARY_IP.
# Each NETWORK_INTERFACE, if configured with IPv6 address(es)
# can have multiple IPv6 address entries(up to a maximum of 2,
# only one IPv6 address entry belonging to site-local scope
# and only one belonging to global scope) which must be all
# STATIONARY_IP. They cannot be HEARTBEAT_IP.


NODE_NAME hp1
NETWORK_INTERFACE lan0
HEARTBEAT_IP 192.168.0.40
NETWORK_INTERFACE lan2
NETWORK_INTERFACE lan1
STATIONARY_IP 10.0.0.40
FIRST_CLUSTER_LOCK_PV /dev/dsk/c4t8d0
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0

# Possible standby Network Interfaces for lan0: lan2.
# Warning: There are no standby network interfaces for lan1.

NODE_NAME hp2
NETWORK_INTERFACE lan0
HEARTBEAT_IP 192.168.0.42
NETWORK_INTERFACE lan2
NETWORK_INTERFACE lan1
STATIONARY_IP 10.0.0.42
FIRST_CLUSTER_LOCK_PV /dev/dsk/c4t8d0
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0

# Possible standby Network Interfaces for lan0: lan2.
# Warning: There are no standby network interfaces for lan1.


# Cluster Timing Parameters (microseconds).

# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds).
# This default setting yields the fastest cluster reformations.
# However, the use of the default value increases the potential
# for spurious reformations due to momentary system hangs or
# network load spikes.
# For a significant portion of installations, a setting of
# 5000000 to 8000000 (5 to 8 seconds) is more appropriate.
# The maximum value recommended for NODE_TIMEOUT is 30000000
# (30 seconds).

HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 2000000


# Configuration/Reconfiguration Timing Parameters (microseconds).

AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000

# Network Monitor Configuration Parameters.
# The NETWORK_FAILURE_DETECTION parameter determines how LAN card failures are detected.
# If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound
# message count stops increasing or when both inbound and outbound
# message counts stop increasing.
# If set to INOUT, both the inbound and outbound message counts must
# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION INOUT

# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You can not add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES 8


# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of
# 8 login names from the /etc/passwd file on user host.
# 2. USER_HOST is where the user can issue Serviceguard commands.
# If using Serviceguard Manager, it is the COM server.
# Choose one of these three values: ANY_SERVICEGUARD_NODE, or
# (any) CLUSTER_MEMBER_NODE, or a specific node. For node,
# use the official hostname from domain name server, and not
# an IP addresses or fully qualified name.
# 3. USER_ROLE must be one of these three values:
# * MONITOR: read-only capabilities for the cluster and packages
# * PACKAGE_ADMIN: MONITOR, plus administrative commands for packages
# in the cluster
# * FULL_ADMIN: MONITOR and PACKAGE_ADMIN plus the administrative
# commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the
# cluster's nodes as root (UID=0). Access control
# policy cannot limit root users' access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration file,
# and they apply to the entire cluster. PACKAGE_ADMIN can be set in the
# cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.
#
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN

USER_NAME root
USER_HOST ANY_SERVICEGUARD_NODE
USER_ROLE full_admin


# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM or VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02

VOLUME_GROUP /dev/vg01
nanan
Trusted Contributor

Re: cmapplyconf will not work

rvent! Thanks for your posting.

the first,
change STATIONARY_IP to HEARTBEAT_IP in the cluster ascii file

then try again: cmapplyconf -v -C cluster.ascii
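That is, the hp1 node stanza in the ascii file would then read (hp2 likewise):

NETWORK_INTERFACE lan1
HEARTBEAT_IP 10.0.0.40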

Regards
nanan
rvent
Frequent Advisor

Re: cmapplyconf will not work

I just tried that and the same error comes up...

nanan
Trusted Contributor

Re: cmapplyconf will not work

Hi
Could you post the inetd.conf of both systems?
Regards
nanan
rvent
Frequent Advisor

Re: cmapplyconf will not work

hp1
===
## Configured using SAM by root on Fri Dec 16 20:49:22 2005
## Configured using SAM by root on Sat Jan 7 16:16:13 2006
##
#
# @(#)B.11.11_LRinetd.conf $Revision: 1.24.214.3 $ $Date: 97/09/10 14:50:49 $
#
# Inetd reads its configuration information from this file upon execution
# and at some later time if it is reconfigured.
#
# A line in the configuration file has the following fields separated by
# tabs and/or spaces:
#
# service name as in /etc/services
# socket type either "stream" or "dgram"
# protocol as in /etc/protocols
# wait/nowait only applies to datagram sockets, stream
# sockets should specify nowait
# user name of user as whom the server should run
# server program absolute pathname for the server inetd will
# execute
# server program args. arguments server program uses as they normally
# are starting with argv[0] which is the name of
# the server.
#
# See the inetd.conf(4) manual page for more information.
##

##
#
# ARPA/Berkeley services
#
##
ftp stream tcp nowait root /usr/lbin/ftpd ftpd -l
telnet stream tcp nowait root /usr/lbin/telnetd telnetd

# Before uncommenting the "tftp" entry below, please make sure
# that you have a "tftp" user in /etc/passwd. If you don't
# have one, please consult the tftpd(1M) manual entry for
# information about setting up this service.

tftp dgram udp wait root /usr/lbin/tftpd tftpd\
/opt/ignite\
/var/opt/ignite
#bootps dgram udp wait root /usr/lbin/bootpd bootpd
#finger stream tcp nowait bin /usr/lbin/fingerd fingerd
login stream tcp nowait root /usr/lbin/rlogind rlogind
shell stream tcp nowait root /usr/lbin/remshd remshd
exec stream tcp nowait root /usr/lbin/rexecd rexecd
#uucp stream tcp nowait root /usr/sbin/uucpd uucpd
ntalk dgram udp wait root /usr/lbin/ntalkd ntalkd
ident stream tcp wait bin /usr/lbin/identd identd

##
#
# Other HP-UX network services
#
##
printer stream tcp nowait root /usr/sbin/rlpdaemon rlpdaemon -i

##
#
# inetd internal services
#
##
daytime stream tcp nowait root internal
daytime dgram udp nowait root internal
time stream tcp nowait root internal
#time dgram udp nowait root internal
echo stream tcp nowait root internal
echo dgram udp nowait root internal
discard stream tcp nowait root internal
discard dgram udp nowait root internal
chargen stream tcp nowait root internal
chargen dgram udp nowait root internal

##
#
# rpc services, registered by inetd with portmap
# Do not uncomment these unless your system is running portmap!
#
##
# WARNING: The rpc.mountd should now be started from a startup script.
# Please enable the mountd startup script to start rpc.mountd.
##
#rpc stream tcp nowait root /usr/sbin/rpc.rexd 100017 1 rpc.rexd
#rpc dgram udp wait root /usr/lib/netsvc/rstat/rpc.rstatd 100001 2-4 rpc.rstatd
#rpc dgram udp wait root /usr/lib/netsvc/rusers/rpc.rusersd 100002 1-2 rpc.rusersd
#rpc dgram udp wait root /usr/lib/netsvc/rwall/rpc.rwalld 100008 1 rpc.rwalld
#rpc dgram udp wait root /usr/sbin/rpc.rquotad 100011 1 rpc.rquotad
#rpc dgram udp wait root /usr/lib/netsvc/spray/rpc.sprayd 100012 1 rpc.sprayd

##
#
# The standard remshd and rlogind do not include the Kerberized
# code. You must install the InternetSvcSec/INETSVCS-SEC fileset and
# configure Kerberos as described in the SIS(5) man page.
#
##
kshell stream tcp nowait root /usr/lbin/remshd remshd -K
klogin stream tcp nowait root /usr/lbin/rlogind rlogind -K


##
#
# NCPM programs.
# Do not uncomment these unless you are using NCPM.
#
##

#ncpm-pm dgram udp wait root /opt/ncpm/bin/ncpmd ncpmd
#ncpm-hip dgram udp wait root /opt/ncpm/bin/hipd hipd

dtspc stream tcp nowait root /usr/dt/bin/dtspcd /usr/dt/bin/dtspcd
rpc xti tcp swait root /usr/dt/bin/rpc.ttdbserver 100083 1 /usr/dt/bin/rpc.ttdbserver
recserv stream tcp nowait root /usr/lbin/recserv recserv -display :0
registrar stream tcp nowait root /etc/opt/resmon/lbin/registrar /etc/opt/resmon/lbin/registrar
rpc dgram udp wait root /usr/dt/bin/rpc.cmsd 100068 2-5 rpc.cmsd
swat stream tcp nowait.400 root /opt/samba/bin/swat swat
hacl-probe stream tcp nowait root /opt/cmom/lbin/cmomd /opt/cmom/lbin/cmomd -f /var/opt/cmom/cmomd.log -r /var/opt/cmom
hacl-cfg dgram udp wait root /usr/lbin/cmclconfd cmclconfd -p
hacl-cfg stream tcp nowait root /usr/lbin/cmclconfd cmclconfd -c
instl_boots dgram udp wait root /opt/ignite/lbin/instl_bootd instl_bootd


===============================

hp2
============
## Configured using SAM by root on Fri Dec 16 23:06:18 2005
## Configured using SAM by root on Fri Dec 16 23:06:23 2005
##
#
# @(#)B.11.11_LRinetd.conf $Revision: 1.24.214.3 $ $Date: 97/09/10 14:50:49 $
#
# Inetd reads its configuration information from this file upon execution
# and at some later time if it is reconfigured.
#
# A line in the configuration file has the following fields separated by
# tabs and/or spaces:
#
# service name as in /etc/services
# socket type either "stream" or "dgram"
# protocol as in /etc/protocols
# wait/nowait only applies to datagram sockets, stream
# sockets should specify nowait
# user name of user as whom the server should run
# server program absolute pathname for the server inetd will
# execute
# server program args. arguments server program uses as they normally
# are starting with argv[0] which is the name of
# the server.
#
# See the inetd.conf(4) manual page for more information.
##

##
#
# ARPA/Berkeley services
#
##
ftp stream tcp nowait root /usr/lbin/ftpd ftpd -l
telnet stream tcp nowait root /usr/lbin/telnetd telnetd

# Before uncommenting the "tftp" entry below, please make sure
# that you have a "tftp" user in /etc/passwd. If you don't
# have one, please consult the tftpd(1M) manual entry for
# information about setting up this service.

tftp dgram udp wait root /usr/lbin/tftpd tftpd\
/opt/ignite\
/var/opt/ignite
#bootps dgram udp wait root /usr/lbin/bootpd bootpd
#finger stream tcp nowait bin /usr/lbin/fingerd fingerd
login stream tcp nowait root /usr/lbin/rlogind rlogind
shell stream tcp nowait root /usr/lbin/remshd remshd
exec stream tcp nowait root /usr/lbin/rexecd rexecd
#uucp stream tcp nowait root /usr/sbin/uucpd uucpd
ntalk dgram udp wait root /usr/lbin/ntalkd ntalkd
ident stream tcp wait bin /usr/lbin/identd identd

##
#
# Other HP-UX network services
#
##
printer stream tcp nowait root /usr/sbin/rlpdaemon rlpdaemon -i

##
#
# inetd internal services
#
##
daytime stream tcp nowait root internal
daytime dgram udp nowait root internal
time stream tcp nowait root internal
#time dgram udp nowait root internal
echo stream tcp nowait root internal
echo dgram udp nowait root internal
discard stream tcp nowait root internal
discard dgram udp nowait root internal
chargen stream tcp nowait root internal
chargen dgram udp nowait root internal

##
#
# rpc services, registered by inetd with portmap
# Do not uncomment these unless your system is running portmap!
#
##
# WARNING: The rpc.mountd should now be started from a startup script.
# Please enable the mountd startup script to start rpc.mountd.
##
#rpc stream tcp nowait root /usr/sbin/rpc.rexd 100017 1 rpc.rexd
#rpc dgram udp wait root /usr/lib/netsvc/rstat/rpc.rstatd 100001 2-4 rpc.rstatd
#rpc dgram udp wait root /usr/lib/netsvc/rusers/rpc.rusersd 100002 1-2 rpc.rusersd
#rpc dgram udp wait root /usr/lib/netsvc/rwall/rpc.rwalld 100008 1 rpc.rwalld
#rpc dgram udp wait root /usr/sbin/rpc.rquotad 100011 1 rpc.rquotad
#rpc dgram udp wait root /usr/lib/netsvc/spray/rpc.sprayd 100012 1 rpc.sprayd

##
#
# The standard remshd and rlogind do not include the Kerberized
# code. You must install the InternetSvcSec/INETSVCS-SEC fileset and
# configure Kerberos as described in the SIS(5) man page.
#
##
kshell stream tcp nowait root /usr/lbin/remshd remshd -K
klogin stream tcp nowait root /usr/lbin/rlogind rlogind -K


##
#
# NCPM programs.
# Do not uncomment these unless you are using NCPM.
#
##

#ncpm-pm dgram udp wait root /opt/ncpm/bin/ncpmd ncpmd
#ncpm-hip dgram udp wait root /opt/ncpm/bin/hipd hipd

dtspc stream tcp nowait root /usr/dt/bin/dtspcd /usr/dt/bin/dtspcd
rpc xti tcp swait root /usr/dt/bin/rpc.ttdbserver 100083 1 /usr/dt/bin/rpc.ttdbserver
recserv stream tcp nowait root /usr/lbin/recserv recserv -display :0
registrar stream tcp nowait root /etc/opt/resmon/lbin/registrar /etc/opt/resmon/lbin/registrar
rpc dgram udp wait root /usr/dt/bin/rpc.cmsd 100068 2-5 rpc.cmsd
swat stream tcp nowait.400 root /opt/samba/bin/swat swat
hacl-probe stream tcp nowait root /opt/cmom/lbin/cmomd /opt/cmom/lbin/cmomd -f /var/opt/cmom/cmomd.log -r /var/opt/cmom
hacl-cfg dgram udp wait root /usr/lbin/cmclconfd cmclconfd -p
hacl-cfg stream tcp nowait root /usr/lbin/cmclconfd cmclconfd -c
instl_boots dgram udp wait root /opt/ignite/lbin/instl_bootd instl_bootd



Thanks
Bernhard Mueller
Honored Contributor

Re: cmapplyconf will not work

Hi,

Check the SG 11.16 patches. You might need downtime...

Regards,
Bernhard
Enrico P.
Honored Contributor

Re: cmapplyconf will not work

Hi,
If present, move the binary file /etc/cmcluster/cmclconfig to /etc/cmcluster/cmclconfig.old on both nodes.

Then

vgchange -a n vg01
vgchange -c n vg01


then try to re-apply the cluster.
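In full, something like this (a sketch of the steps above; the binary file is moved on both nodes, the cluster attribute is cleared once since it is stored in the VG itself, and the re-apply is run from one node):

mv /etc/cmcluster/cmclconfig /etc/cmcluster/cmclconfig.old
vgchange -a n vg01
vgchange -c n vg01
cmapplyconf -v -C cl5.ascii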



Enrico
Stephen Doud
Honored Contributor
Solution

Re: cmapplyconf will not work

SG doesn't look at /etc/cmclnodelist.
If /etc/cmcluster/cmclnodelist exists, SG doesn't look at .rhosts.
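A quick way to see which files are actually in play (a sketch):

ll /etc/cmnodelist /etc/cmclnodelist /etc/cmcluster/cmclnodelist
# only /etc/cmcluster/cmclnodelist is consulted, and while it
# exists, .rhosts is ignored for Serviceguard's checks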

Have you read this document:
http://docs.hp.com/en/6283/SGsecurityfiles.pdf -- Editing Security Files for Serviceguard
It describes how to set up the files to get through Serviceguard's metal-detector :)

The document is located in the Whitepapers sub-section of the Serviceguard section of the online documents:
http://docs.hp.com/en/ha.html#ServiceGuard
rvent
Frequent Advisor

Re: cmapplyconf will not work

Thanks for all your help...

The problem was caused by a number of things mentioned here; I was looking at the individual items rather than at them as a group...

It was related to security and name resolution.

Thanks for all your help