Anoopkumar
Frequent Advisor

Not able to start second package on second node

Hi,
I have two rp3440 servers running HP-UX 11.11 with MC/ServiceGuard A.11.16.0. My requirement is to have one package on each node. I have configured the cluster and can start one package on node1, but I am not able to start the second package on node2. The following is the error:
Oct 16 14:29:30 - Node "scom02": Mounting /dev/vg04/lvol1 at
mount: /dev/vg04/lvol1 was either ignored or not found in /etc/fstab
ERROR: Function check_and_mount
ERROR: Failed to mount /dev/vg04/lvol1
Oct 16 14:29:30 - Node "scom02": Deactivating volume group vg04
Deactivated volume group in Exclusive Mode.
Volume group "vg04" has been successfully changed.
Oct 16 14:29:30 - Node "scom02": Deactivating volume group vg05
Deactivated volume group in Exclusive Mode.
Volume group "vg05" has been successfully changed.

########### Node "scom02": Package start failed at Mon Oct 16 14:29:30 oman 2006
I mounted the same filesystem manually and it worked fine. One more thing: while executing cmquerycl I get a warning message, though it still generates the ASCII file:
"Warning: Detected node scom03 on revision A.11.16.00 that does not support newer network discovery. Will revert to older network discovery."
I reconfigured the cluster because of this, but the problem remains.
Please let me know if there is a solution for this issue.

regds,
Mahadev
rariasn
Honored Contributor

Re: Not able to start second package on second node

Hi,
On node1, check the control.sh.log. Verify the umount function: confirm that /dev/vg04/lvol1 was actually unmounted and the volume group deactivated when the package halted; see the check sequence below.
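A minimal check sequence, assuming the default package directory (<pkg> is a placeholder; vg04 taken from the log above):

# grep -i mount /etc/cmcluster/<pkg>/control.sh.log
# bdf | grep vg04
(should return nothing once the package is halted)
# vgdisplay vg04
(should fail or report the VG unavailable once it is deactivated)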
rgs,


Stephen Doud
Honored Contributor

Re: Not able to start second package on second node

The message:

Oct 16 14:29:30 - Node "scom02": Mounting /dev/vg04/lvol1 at
mount: /dev/vg04/lvol1 was either ignored or not found in /etc/fstab

appears to indicate that the package control script attempted to mount the file system /dev/vg04/lvol1 using the SAME path as both the mount directory and the special file!

You should see something like this instead:
Mounting /dev/vg04/lvol1 at /<mount point>

Check the package control script line that links /dev/vg04/lvol1 to its mount point; see the excerpt below. You probably used /dev/vg04/lvol1 as the mount point in the script.
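For reference, in a legacy package control script each logical volume is paired with its own mount directory; the filesystem section normally looks like this (values here are illustrative, not taken from your configuration):

VG[0]="vg04"
LV[0]="/dev/vg04/lvol1"; FS[0]="/pkg2"; FS_MOUNT_OPT[0]="-o rw"

If FS[0] is left empty or set to the device path itself, mount is effectively called without a valid mount directory, falls back to /etc/fstab, and fails exactly as shown above.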


As for the "does not support newer network discovery" message, compare /etc/inetd.conf to another server in a cluster that works. I suspect a critical difference.
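A quick way to compare, assuming remsh access between the nodes:

# remsh scom03 cat /etc/inetd.conf | diff /etc/inetd.conf -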
Anoopkumar
Frequent Advisor

Re: Not able to start second package on second node

Hi Stephen,
After posting my question to the forum I modified the package configuration by adding the cluster lock disk to the failed package, and after that the second package started successfully.
The /etc/inetd.conf file is the same on both nodes; please let me know what the critical difference might be.

regds,
Mahadev
Stephen Doud
Honored Contributor

Re: Not able to start second package on second node

Ensure the following lines are set as shown in /etc/nsswitch.conf:

hosts: files dns
ipnodes: files

Then ensure all NICs that are assigned IPs on each server are accounted for in /etc/hosts.
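For example (the addresses and heartbeat names below are placeholders, not taken from your cluster):

# /etc/hosts
192.168.1.2    scom02
192.168.1.3    scom03
192.168.2.2    scom02-hb      # heartbeat LAN
192.168.2.3    scom03-hb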
Anoopkumar
Frequent Advisor

Re: Not able to start second package on second node

Hi Stephen,
Presently the following entry is in /etc/nsswitch.conf:
# cat nsswitch.conf
hosts: files [NOTFOUND=continue UNAVAIL=continue] dns [NOTFOUND=return UNAVAIL=return]
Is this OK, or is the entry you suggested required?

regds,
Mahadev
Stephen Doud
Honored Contributor

Re: Not able to start second package on second node

/etc/nsswitch.conf looks good for the 'hosts' line.


For the message:
Oct 16 14:29:30 - Node "scom02": Mounting /dev/vg04/lvol1 at
mount: /dev/vg04/lvol1 was either ignored or not found in /etc/fstab

1. Check the /etc/cmcluster/<pkg>/pkg.cntl file for any syntax errors.

2. Check that the mount point directory exists and try to mount the filesystem manually:

# vgchange -a e <vg_name>
# mount /dev/<vg_name>/<lvol> /<MOUNT_POINT>
(Requires the node to be running the Serviceguard daemons)

It's likely that you didn't create the mount directory on the 2nd node.
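If the directory is missing, creating it on node2 is all that is needed (same placeholder as above):

# mkdir -p /<MOUNT_POINT>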


As for the warning message,
"Warning: Detected node scom03 on revision A.11.16.00 that does not support newer network discovery. Will revert to older network discovery."
It is a warning, which means that it won't prevent Serviceguard from operating.

Where did the cluster binary (/etc/cmcluster/cmclconfig) come from? Was it the result of a cmapplyconf, or was it copied from another server or cluster?
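If in doubt, regenerate it from the ASCII files with cmcheckconf/cmapplyconf rather than copying the binary (file names below are placeholders):

# cmcheckconf -C /etc/cmcluster/<cluster>.ascii -P /etc/cmcluster/<pkg>/<pkg>.conf
# cmapplyconf -C /etc/cmcluster/<cluster>.ascii -P /etc/cmcluster/<pkg>/<pkg>.conf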

Also, install the SAME Serviceguard patch on both servers!
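You can confirm what is installed on each node with swlist, e.g.:

# swlist -l product | grep -i serviceguard
# swlist -l product | grep PHSS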

Latest versions of the Serviceguard patch...
Serviceguard 11.15 patch:
PHSS_34505 for 11.11
PHSS_34506 for 11.23

Serviceguard 11.16 patch:
PHSS_34759 for 11.11
PHSS_34760 for 11.23

Serviceguard 11.17 patch:
PHSS_33838 for 11.23