11-11-2003 07:32 AM
Issues with starting cluster in Geographic Redundancy Configuration
I am having issues starting the cluster in a single-node cluster configuration, i.e. Geographic Redundancy. Here is the output of cmviewcl -v on the node that is having issues.
cmviewcl -v
CLUSTER STATUS
vancluster down
NODE STATUS STATE
vanemsc down unknown
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY unknown 0/0/0/0 lan0
STANDBY unknown 0/6/0/0 lan2
UNOWNED_PACKAGES
PACKAGE STATUS STATE PKG_SWITCH NODE
sncPkg down unowned
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover unknown
Failback unknown
Script_Parameters:
ITEM STATUS NODE_NAME NAME
Subnet unknown vanemsc 135.93.27.0
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary down vanemsc
Here is the output from the node that is active.
cmviewcl -v
CLUSTER STATUS
toremcluster up
NODE STATUS STATE
toremsc up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/0/0/0 lan0
STANDBY up 0/2/0/0 lan2
PACKAGE STATUS STATE PKG_SWITCH NODE
sncPkg up running enabled toremsc
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual
Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up Unlimited 0 sncMonitor
Subnet up 135.92.27.0
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled toremsc (current)
The showtop command on the node that is having issues indicates STANDBY, and on the node that is working it indicates ACTIVE.
Here is the output of the syslog.log file from the node that is having issues. Also, it dumps a core file in /var/adm/cmcluster/.
Nov 11 16:48:22 vanemsc : su : + 0 ems-root
Nov 11 17:59:15 vanemsc : su : + 1 ems-root
Nov 11 18:07:41 vanemsc CM-CMD[11496]: cmruncl
Nov 11 18:07:41 vanemsc cmclconfd[11502]: Executing "/usr/lbin/cmcld" for node vanemsc
Nov 11 18:07:41 vanemsc cmcld: Daemon Initialization - Maximum number of packages supported for this incarnation is 8.
Nov 11 18:07:36 vanemsc : su : + 1 ems-root
Nov 11 18:07:41 vanemsc cmcld: Reserving 2048 Kbytes of memory and 64 threads
Nov 11 18:07:42 vanemsc cmcld: The maximum # of concurrent local connections to the daemon that will be supported is 22.
Nov 11 18:07:42 vanemsc cmcld: Warning. No cluster lock is configured.
Nov 11 18:07:42 vanemsc cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146
Nov 11 18:07:44 vanemsc cmclconfd[11502]: The ServiceGuard daemon, /usr/lbin/cmcld[11503], died upon receiving the signal 6.
Nov 11 18:07:44 vanemsc cmsrvassistd[11507]: Lost connection to the cluster daemon.
Nov 11 18:07:44 vanemsc cmsrvassistd[11509]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer
Nov 11 18:07:44 vanemsc cmsrvassistd[11507]: Lost connection with ServiceGuard cluster daemon (cmcld): Software caused connection abort
Nov 11 18:07:44 vanemsc cmclconfd[11512]: Unable to lookup any node information in CDB: Connection refused
Nov 11 18:07:44 vanemsc cmlogd: Unable to initialize with ServiceGuard cluster daemon (cmcld): Connection reset by peer
Nov 11 19:12:51 vanemsc : su : + 3 ems-root
Nov 11 19:13:07 vanemsc CM-CMD[29065]: cmhaltcl -v
Nov 11 19:13:25 vanemsc CM-CMD[29070]: cmhaltcl -n vanemsc
Nov 11 19:13:42 vanemsc CM-CMD[29097]: cmhaltcl -f vanemsc
Nov 11 19:13:48 vanemsc CM-CMD[29098]: cmhaltcl
Nov 11 19:18:32 vanemsc : su : + 3 ems-ems
Nov 11 19:23:19 vanemsc : su : + 1 ems-root
Nov 11 19:23:49 vanemsc CM-CMD[2584]: cmhaltpkg -v -n vanemsc sncPkg
Nov 11 19:33:30 vanemsc CM-CMD[5174]: cmhaltnode
Nov 11 19:37:39 vanemsc CM-CMD[6399]: cmrunpkg -n vanemsc
Nov 11 19:37:51 vanemsc CM-CMD[6400]: cmrunpkg -v -n vanemsc sncPkg
Nov 11 19:37:57 vanemsc CM-CMD[6471]: cmruncl
Nov 11 19:37:58 vanemsc cmclconfd[6477]: Executing "/usr/lbin/cmcld" for node vanemsc
Nov 11 19:37:58 vanemsc cmcld: Daemon Initialization - Maximum number of packages supported for this incarnation is 8.
Nov 11 19:37:58 vanemsc cmcld: Reserving 2048 Kbytes of memory and 64 threads
Nov 11 19:37:58 vanemsc cmcld: The maximum # of concurrent local connections to the daemon that will be supported is 22.
Nov 11 19:37:58 vanemsc cmcld: Warning. No cluster lock is configured.
Nov 11 19:37:58 vanemsc cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146
Nov 11 19:38:01 vanemsc cmclconfd[6477]: The ServiceGuard daemon, /usr/lbin/cmcld[6478], died upon receiving the signal 6.
Nov 11 19:38:01 vanemsc cmsrvassistd[6484]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer
Nov 11 19:38:01 vanemsc cmsrvassistd[6482]: Lost connection to the cluster daemon.
Nov 11 19:38:01 vanemsc cmsrvassistd[6482]: Lost connection with ServiceGuard cluster daemon (cmcld): Software caused connection abort
Nov 11 19:38:01 vanemsc cmlogd: Unable to initialize with ServiceGuard cluster daemon (cmcld): Connection reset by peer
Nov 11 19:41:04 vanemsc : su : + 1 ems-ems
Nov 11 19:56:56 vanemsc : su : + 1 ems-root
Please advise as to what could be causing this problem.
Thanks.
NBA
11-11-2003 10:52 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
Please run:
cmcheckconf -v -C /etc/cmcluster/cluster.ascii
and attach this file:
/etc/cmcluster/cluster.ascii
11-11-2003 11:58 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
"single node cluster configuration i.e. Geogrphic Redundancy."
I'm sorry - to me, these are mutually exclusive terms.
Please explain.
Rgds,
Jeff
11-11-2003 12:08 PM
Re: Issues with starting cluster in Geographic Redundancy Configuration
11-11-2003 07:13 PM
Re: Issues with starting cluster in Geographic Redundancy Configuration
Anyway, what version of SG are you using, and what patch level for SG do you have installed?
Run what /usr/lbin/cmcld to get this.
What happens when you do a cmquerycl on this node?
cmquerycl -n vanemsc
11-12-2003 01:04 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
The first error that was encountered in the syslog.log file was this:
cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146
This issue has been addressed in patches for Serviceguard versions A.11.13 and A.11.14.
Use 'what /usr/lbin/cmcld | grep Date' to determine the version and patch level of Serviceguard loaded.
If SG is not patched, consider loading one of these:
PHSS_28849 :A.11.13
PHSS_29915 :A.11.14
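For reference, a minimal install sketch, assuming the patch shar file has been downloaded to /tmp (exact names and any reboot requirement come from the patch's own installation text, so verify there first):
cmhaltnode -f vanemsc                    # take the node out of cluster activity before patching
sh /tmp/PHSS_28849                       # self-extracting shar; creates /tmp/PHSS_28849.depot
swinstall -s /tmp/PHSS_28849.depot PHSS_28849
what /usr/lbin/cmcld | grep Date         # confirm the new patch level afterwards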
-sd
11-12-2003 01:45 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
Thanks for the swift response. Here is the output of the cmcheckconf command.
cmcheckconf -v -C /etc/cmcluster/sncCluster.ascii
Checking cluster file: /etc/cmcluster/sncCluster.ascii
Checking nodes ... Done
Checking existing configuration ...
Done
Gathering configuration information ........... Done
Warning: The disk at /dev/dsk/c0t1d0 on node vanemsc does not have an ID, or a disk label.
Warning: Disks which do not have IDs cannot be included in the topology description.
Use pvcreate(1m) to initialize disks for use with LVM, or
use vxdiskadm(1m) to initalize disks for use with VxVM.
Cluster vancluster is an existing cluster
Checking for inconsistencies .. Done
Cluster vancluster is an existing cluster
Maximum configured packages parameter is 8.
Configuring 1 package(s).
7 package(s) can be added to this cluster.
Modifying configuration on node vanemsc
Modifying the cluster configuration for cluster vancluster.
Validating update for /cluster - value information is identical.
Modifying node vanemsc in cluster vancluster.
Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration.
Here is the output of what /usr/lbin/cmcld | grep Date:
A.11.09 Date: 05/16/2001; PATCH: PHSS_24033.
The other fact I would like to bring to your attention: when I run the command er_status on both nodes, it indicates "DOWN".
11-12-2003 02:27 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
Warning: The disk at /dev/dsk/c0t1d0 on node vanemsc does not have an ID, or a disk label.
Warning: Disks which do not have IDs cannot be included in the topology description.
Use pvcreate(1m) to initialize disks for use with LVM, or
use vxdiskadm(1m) to initalize disks for use with VxVM.
############################################
LVM or VxVM configuration has to be completed without error before any ServiceGuard work can be accomplished. So go back to either and start your SG work afterwards.
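If one of those devices were actually a shared data disk rather than removable media, a hypothetical initialization for LVM (the device name c2t0d0 is made up) would look like:
pvcreate -f /dev/rdsk/c2t0d0     # use the character device; -f overwrites any existing header and destroys data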
11-12-2003 02:46 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
The disk /dev/dsk/c0t1d0 is an HP DVD-ROM.
regards,
NBA
11-12-2003 02:55 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
If, as you say, you have 11.09, this is 7 patches out of date; you should obtain PHSS_27158, install that, and then try again.
11-12-2003 02:56 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
If so, then apply the cluster binary file and start the cluster without starting the packages.
######################################
Here's how to sync up the alternate node with the primary node's LVM info (a hypothetical worked example follows the steps).
On the primary node:
ll /dev/vg##/group                           # copy down the minor number of the VG
vgchange -a n /dev/vg##                      # deactivate the VG
vgexport -p -s -m /tmp/lvm_map /dev/vg##     # -p previews, so the VG stays configured on the primary
ftp /tmp/lvm_map over to the alternate node.
On the alternate node:
mkdir /dev/vg##
mknod /dev/vg##/group c 64 0x0#0000
vgimport -s -m /tmp/lvm_map /dev/vg##
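As a hypothetical concrete pass, assuming the volume group is vg01 and the primary's group file shows minor number 0x010000 (your VG names and minor numbers will differ):
# on the primary node
ll /dev/vg01/group                           # crw-r--r-- ... 64 0x010000 ... -> minor is 0x010000
vgchange -a n /dev/vg01
vgexport -p -s -m /tmp/lvm_map /dev/vg01
# copy /tmp/lvm_map to the alternate node, then on the alternate node:
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000          # major 64; the minor must be unused on this node
vgimport -s -m /tmp/lvm_map /dev/vg01        # -s scans disks for the VGID recorded in the map file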
##########################################
Done - cmquerycl -n nodea -n nodeb -C cluster.ascii (* Right? *)
Done - cmcheckconf -C /etc/cmcluster/cluster.ascii (* Right? *)
##########################################
Then it's just this:
cmapplyconf -C /etc/cmcluster/cluster.ascii
cmruncl
cmviewcl -v
Verify the cluster is up but not the packages.
11-12-2003 03:25 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
Do I really have to install the patch PHSS_27158? The reason I am asking is that the working toremsc also has the same version and patch level, and it doesn't report any problem with starting the cluster.
11-12-2003 03:34 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
11-12-2003 04:28 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
Sorry for any misunderstanding.
The toremsc reports the following information for "what /usr/lbin/cmcld | grep Date":
A.11.09 Date: 05/16/2001; PATCH : PHSS_24033
identical to the version and patch level on the vanemsc, which is having the cluster issues, i.e.
A.11.09 Date: 05/16/2001; PATCH : PHSS_24033.
My question is: do I really have to install patch PHSS_27158 when the toremsc is working fine without it?
11-12-2003 05:33 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
Are you using OPS?
http://www1.itrc.hp.com/service/patch/patchDetail.do?BC=patch.breadcrumb.pdb|patch.breadcrumb.search|&patchid=PHSS_27158&context=hpux:800:11:00
#########################################
Were you able to bring the cluster up without the packages?
11-12-2003 05:34 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
If you read the text file for this patch PHSS_27158 you will see:
At cmcld start up, i.e. cmrunnode or cmruncl, syslog shows this message,
"cmcld: Assertion failed: pnet != NULL, file:comm_link.c, line: 140."
cmcld immediately aborts and dumps core.
I think this is your problem.
11-12-2003 06:06 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
I am not using the OPS edition, because the system is based on an Informix database.
Melvyn,
I have to get clearance from the customer before I can install the patch PHSS_27158 on the system. Meanwhile, I am open to any other bright ideas.
regards,
NBA
11-13-2003 03:56 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
I installed the patch PHSS_27158 on the vanemsc server. Somehow, the system reports the same error as I saw with the patch PHSS_24033.
Here is the output of the what /usr/lbin/cmcld command.
vanemsc:what /usr/lbin/cmcld | grep Date
A.11.09 Date: 01/15/2003; PATCH: PHSS_27158
Here is the excerpt from the syslog.log file of the vanemsc server.
Nov 13 16:33:57 vanemsc CM-CMD[23180]: cmruncl -v
Nov 13 16:33:58 vanemsc cmclconfd[23205]: Executing "/usr/lbin/cmcld" for node vanemsc
Nov 13 16:33:58 vanemsc cmcld: Daemon Initialization - Maximum number of packages supported for this incarnation is 8.
Nov 13 16:27:04 vanemsc : su : + 2 ems-ems
Nov 13 16:33:58 vanemsc cmcld: Reserving 2048 Kbytes of memory and 64 threads
Nov 13 16:33:58 vanemsc cmcld: The maximum # of concurrent local connections to the daemon that will be supported is 22.
Nov 13 16:33:58 vanemsc cmcld: Warning. No cluster lock is configured.
Nov 13 16:33:58 vanemsc cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146
Nov 13 16:34:02 vanemsc cmlogd: Unable to initialize with ServiceGuard cluster daemon (cmcld): Connection reset by peer
Nov 13 16:34:02 vanemsc cmsrvassistd[23244]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer
Nov 13 16:34:02 vanemsc cmsrvassistd[23228]: Lost connection to the cluster daemon.
Nov 13 16:34:02 vanemsc cmsrvassistd[23228]: Lost connection with ServiceGuard cluster daemon (cmcld): Software caused connection abort
Nov 13 16:34:02 vanemsc cmclconfd[23351]: Unable to lookup any node information in CDB: Connection refused
Nov 13 16:34:02 vanemsc cmclconfd[23205]: The ServiceGuard daemon, /usr/lbin/cmcld[23206], died upon receiving the signal 6.
Nov 13 16:34:29 vanemsc CM-CMD[25026]: cmrunnode vanemsc
Nov 13 16:34:29 vanemsc cmclconfd[25031]: Executing "/usr/lbin/cmcld" for node vanemsc
Nov 13 16:34:29 vanemsc cmcld: Daemon Initialization - Maximum number of packages supported for this incarnation is 8.
Nov 13 16:34:29 vanemsc cmcld: Reserving 2048 Kbytes of memory and 64 threads
Nov 13 16:34:30 vanemsc cmcld: The maximum # of concurrent local connections to the daemon that will be supported is 22.
Nov 13 16:34:30 vanemsc cmcld: Warning. No cluster lock is configured.
Nov 13 16:34:30 vanemsc cmcld: Assertion failed: pnet != NULL, file: comm_link.c, line: 146
Nov 13 16:34:32 vanemsc cmsrvassistd[25037]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer
Nov 13 16:34:32 vanemsc cmsrvassistd[25038]: Unable to notify ServiceGuard main daemon (cmcld): Connection reset by peer
Nov 13 16:34:32 vanemsc cmsrvassistd[25036]: Lost connection to the cluster daemon.
Nov 13 16:34:32 vanemsc cmsrvassistd[25036]: Lost connection with ServiceGuard cluster daemon (cmcld): Software caused connection abort
Nov 13 16:34:32 vanemsc cmclconfd[25031]: The ServiceGuard daemon, /usr/lbin/cmcld[25032], died upon receiving the signal 6.
Nov 13 16:35:00 vanemsc CM-CMD[25090]: cmmodpkg -e sncPkg
Nov 13 16:35:00 vanemsc CM-CMD[25110]: cmmodpkg -e -n vanemsc sncPkg
I will continue my investigation. Meanwhile, if you have any ideas, please let me know.
Thanks.
11-13-2003 04:59 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
I would also suspect a corrupted cluster binary; work to get one node up and copy the data over to the other.
Try this command from:
http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0xb02a4b3ef09fd611abdb0090277a778c%2C00.html&admit=716493758+1068745972259+28353475
#########################################
# cmquerycl -C config.ascii -n lr006b04 -n lr006b05
This will NOT list any VGs that are already "clustered", but it will tell you whether the nodes have matching versions of Serviceguard, whether the nodes can communicate via the hacl ports (/etc/services), whether the security files (~/.rhosts or /etc/cmcluster/cmclnodelist) allow the communication, etc.
If this command does not work, at least one fundamental system configuration issue exists that prevents ServiceGuard from operating properly in its present state.
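A quick sanity pass over the items mentioned above (the paths are the usual defaults; adjust to your installation):
grep hacl /etc/services               # hacl-cfg and friends must be defined identically on both nodes
cat /etc/cmcluster/cmclnodelist       # or ~/.rhosts: each node must allow the other
what /usr/lbin/cmcld | grep Date      # run on both nodes; versions and patch levels should match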
11-13-2003 05:36 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
At the moment, the toremsc is the only server that is monitoring the network without any problems. It will be difficult to convince the customer to bring down the package and cluster to install the patch PHSS_27158. Therefore, I will continue to troubleshoot the vanemsc.
Here is the output of the command from the vanemsc
cmquerycl -C config.ascii -n vanemsc -n toremsc
Warning: The disk at /dev/dsk/c0t1d0 on node vanemsc does not have an ID, or a disk label.
Warning: The disk at /dev/dsk/c0t2d0 on node toremsc does not have an ID, or a disk label.
Warning: Disks which do not have IDs cannot be included in the topology description.
Use pvcreate(1m) to initialize disks for use with LVM, or
use vxdiskadm(1m) to initalize disks for use with VxVM.
Warning: Network interface lan3 on node toremsc couldn't talk to itself.
Warning: Network interface lan4 on node toremsc couldn't talk to itself.
Node Names: toremsc
vanemsc
Bridged networks:
1 lan0 (toremsc)
lan2 (toremsc)
2 lan1 (toremsc)
3 lan0 (vanemsc)
4 lan1 (vanemsc)
lan2 (vanemsc)
IP subnets:
135.93.27.0 lan0 (vanemsc)
64.251.200.0 lan1 (vanemsc)
135.92.27.0 lan0 (toremsc)
lan1 (toremsc)
Possible Heartbeat IPs:
Possible Cluster Lock Devices:
LVM volume groups:
/dev/vg00 vanemsc
/dev/vg01 vanemsc
/dev/vg02 vanemsc
/dev/vg03 vanemsc
/dev/vg00 toremsc
/dev/vg01 toremsc
/dev/vg02 toremsc
/dev/vg03 toremsc
Warning: No possible heartbeat networks found.
All nodes must be connected to at least one common network.
This may be due to DLPI not being installed.
Warning: Failed to find a configuration that satisfies the minimum network configuration requirements.
Minimum network configuration requirements are:
- 2 or more heartbeat networks OR
- 1 heartbeat network with local switch OR
- 1 heartbeat network with serial line.
NOTE: Please ignore the warnings on disks c0t1d0 and c0t2d0, as these are DVD-ROM drives.
Also, since these servers are configured for Geographic Redundancy, they do not have a heartbeat LAN configured; please also ignore the warning "Failed to find a configuration that satisfies the minimum network configuration requirements."
11-13-2003 05:50 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
LANs are qualified by MAC address during the 'cmquerycl' creation of the cluster binary.
And 'cmquerycl' is NOT succeeding.
a) So you've got some patching issue at least.
b) And you've got some LAN issues.
Try this to test for layer-two connectivity between nodes. It's the same command used by cmquerycl.
# linkloop MAC (* where MAC is that of the other node, a switch, or any other network NIC *)
Get your MACs from 'arp -a' and 'lanscan'.
11-13-2003 06:05 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
Here is the output of linkloop on vanemsc
# lanscan
Hardware Station Crd Hdw Net-Interface NM MAC HP-DLPI DLPI
Path Address In# State NamePPA ID Type Support Mjr#
0/0/0/0 0x00306E0C7C42 0 UP lan0 snap0 1 ETHER Yes 119
0/3/0/0 0x00306E06A225 1 UP lan1 snap1 2 ETHER Yes 119
0/6/0/0 0x00306E065214 2 UP lan2 snap2 3 ETHER Yes 119
# linkloop 0x00306e0c7c42
Link connectivity to LAN station: 0x00306e0c7c42
-- OK
linkloop 0x00306e06a225
Link connectivity to LAN station: 0x00306e06a225
error: get_msg2 getmsg failed, errno = 4
-- FAILED
frames sent : 1
frames received correctly : 0
reads that timed out : 1
linkloop 0x00306e065214
Link connectivity to LAN station: 0x00306e065214
error: get_msg2 getmsg failed, errno = 4
-- FAILED
frames sent : 1
frames received correctly : 0
reads that timed out : 1
The lan0 interface (0x00306e0c7c42), which is working, is the one responsible for communication with the other node (toremsc), and it works fine. However, lan1 is responsible for communicating OSI over TCP/IP with the network elements, and lan2 is the redundant LAN for lan0 (0x00306e0c7c42). I am not sure why the other two LANs, i.e. lan1 and lan2, have linkloop issues. Any ideas?
Thanks.
11-13-2003 07:38 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
You have to run lanscan on node B, write down node B's MACs, and run linkloop from node A (see the sketch below).
Why is your heartbeat LAN down?
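A hypothetical sketch of that cross-node check; the MAC 0x00306E0C9999 stands in for whatever node B's lanscan actually reports:
# on node B (toremsc)
lanscan                               # note the Station Address column for each lanX
# on node A (vanemsc)
linkloop -i 0 0x00306E0C9999          # -i selects the outgoing PPA (0 = lan0); repeat per interface pair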
11-13-2003 07:43 AM
Re: Issues with starting cluster in Geographic Redundancy Configuration
-USA..
11-13-2003 11:30 PM
Re: Issues with starting cluster in Geographic Redundancy Configuration
Are you trying to have two separate one-node clusters?
Or are you trying to add a node into a running cluster?
If the first option, then you possibly need to re-apply the cluster binary to eliminate the CDB errors, which may be "cached" (see the sketch below).
If the second option, then there appear to be some major networking connectivity issues that you need to fix first.
Failing that, I suggest it may be time to log a call with your local HP Response Centre.
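For the first option, a minimal re-apply sketch, assuming the cluster ASCII file is /etc/cmcluster/sncCluster.ascii as shown earlier (cmdeleteconf discards the existing cluster binary, so get the customer's blessing first):
cmhaltcl -f                                        # make sure nothing cluster-related is running
cmdeleteconf -c vancluster                         # remove the possibly stale cluster binary (CDB)
cmapplyconf -C /etc/cmcluster/sncCluster.ascii     # regenerate and distribute a fresh binary
cmruncl                                            # start the cluster
cmviewcl -v                                        # verify the cluster is up (packages still down)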