10-15-2006 11:01 PM
Not able to start second package on second node
I have two rp3440 servers running HP-UX 11.11 with MC/ServiceGuard A.11.16.0. My requirement is to have one package on each node. I have configured the cluster and am able to start one package on node1, but I am not able to start the second package on node2. The following is the error:
Oct 16 14:29:30 - Node "scom02": Mounting /dev/vg04/lvol1 at
mount: /dev/vg04/lvol1 was either ignored or not found in /etc/fstab
ERROR: Function check_and_mount
ERROR: Failed to mount /dev/vg04/lvol1
Oct 16 14:29:30 - Node "scom02": Deactivating volume group vg04
Deactivated volume group in Exclusive Mode.
Volume group "vg04" has been successfully changed.
Oct 16 14:29:30 - Node "scom02": Deactivating volume group vg05
Deactivated volume group in Exclusive Mode.
Volume group "vg05" has been successfully changed.
########### Node "scom02": Package start failed at Mon Oct 16 14:29:30 oman 2006
I manually mounted the same filesystem and found it working fine. One more thing: while executing cmquerycl I get a warning message, although the ASCII file is still generated:
"Warning: Detected node scom03 on revision A.11.16.00 that does not support newer network discovery. Will revert to older network discovery."
For this problem I reconfigured the cluster, but the problem remains.
Please let me know if there is a solution for this issue.
regds,
Mahadev
10-15-2006 11:14 PM
Re: Not able to start second package on second node
On node1, check control.sh.log. Verify that the halt script ran the umount function, that /dev/vg04/lvol1 was actually unmounted, and that the volume group was deactivated.
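The checks above can be sketched as follows (a minimal sketch; the package directory name "pkg2" is a hypothetical example):

```shell
# On node1: review the package log for the halt-time umount,
# then confirm the filesystem and VG are really released.
# "pkg2" is a hypothetical package directory name.
tail -50 /etc/cmcluster/pkg2/control.sh.log
bdf | grep vg04                     # should print nothing if unmounted
vgdisplay vg04 | grep "VG Status"   # VG should not be active on node1
```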
rgs,
10-15-2006 11:43 PM
Re: Not able to start second package on second node
Oct 16 14:29:30 - Node "scom02": Mounting /dev/vg04/lvol1 at
mount: /dev/vg04/lvol1 was either ignored or not found in /etc/fstab
appears to indicate that the package control script was attempting to mount the filesystem /dev/vg04/lvol1 to the SAME path as the special file, i.e. the mount point and the device file are identical!
You should see something like this:
Mounting /dev/vg04/lvol1 at
mount: /
Check the package control script line that links /dev/vg04/lvol1 to its mount point. You probably used /dev/vg04/lvol1 as the mount point in the script.
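In a legacy Serviceguard package control script the lvol-to-mount-point mapping looks roughly like the fragment below (a sketch; the mount point /pkg2 is a hypothetical example, and the mount point must be a directory, never the lvol path itself):

```shell
# Excerpt from a legacy Serviceguard package control script
# (e.g. /etc/cmcluster/pkg2/control.sh -- hypothetical path).
LV[0]="/dev/vg04/lvol1"
FS[0]="/pkg2"            # mount point directory -- NOT /dev/vg04/lvol1
FS_MOUNT_OPT[0]="-o rw"
FS_UMOUNT_OPT[0]=""
FS_FSCK_OPT[0]=""
FS_TYPE[0]="vxfs"
```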
As for the "does not support newer network discovery" message, compare /etc/inetd.conf to another server in a cluster that works. I suspect a critical difference.
10-16-2006 12:10 AM
Re: Not able to start second package on second node
After mailing my question to the forum, I modified the package configuration by adding the cluster lock disk to the failed package, and after that the second package started successfully.
The /etc/inetd.conf file is similar on both nodes; please let me know what the critical fix is.
regds,
Mahadev
10-16-2006 01:52 AM
Re: Not able to start second package on second node
In /etc/nsswitch.conf, set:
hosts: files dns
ipnodes: files
Then ensure all NICs that are assigned IPs on each server are accounted for in /etc/hosts.
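For illustration, the /etc/hosts coverage might look like this (all IP addresses and the heartbeat-alias names are hypothetical; only the node names scom02/scom03 come from the thread):

```shell
# Hypothetical /etc/hosts entries -- addresses are illustrative only.
# Every IP assigned to a NIC on either node (including heartbeat
# and standby LANs) should have an entry.
10.0.0.2      scom02
10.0.0.3      scom03
192.168.1.2   scom02-hb    # heartbeat LAN (hypothetical alias)
192.168.1.3   scom03-hb
```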
10-16-2006 04:43 PM
Re: Not able to start second package on second node
Presently the following entry is in /etc/nsswitch.conf:
# cat nsswitch.conf
hosts: files [NOTFOUND=continue UNAVAIL=continue] dns [NOTFOUND=return UNAVAIL=return]
Is this OK, or is the entry you suggested required?
regds,
Mahadev
10-16-2006 11:26 PM
Re: Not able to start second package on second node
For the message:
Oct 16 14:29:30 - Node "scom02": Mounting /dev/vg04/lvol1 at
mount: /dev/vg04/lvol1 was either ignored or not found in /etc/fstab
1. Check the /etc/cmcluster/
2. Check if the MOUNT_POINT exists and try to manually mount the filesystem:
# vgchange -a e
# mount /dev/
(This requires that the node be running the Serviceguard daemons.)
It's likely that you didn't create the mount directory on the 2nd node.
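The manual test described in step 2 can be sketched as follows (assuming a hypothetical mount point /pkg2; substitute the mount point from your package control script):

```shell
# Activate vg04 in exclusive mode (the node must be running the
# Serviceguard daemons), create the mount point if it is missing,
# and test-mount the logical volume. /pkg2 is a hypothetical example.
vgchange -a e vg04
mkdir -p /pkg2
mount /dev/vg04/lvol1 /pkg2
bdf /pkg2            # confirm the filesystem mounted
umount /pkg2
vgchange -a n vg04   # deactivate again so the package can start
```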
As for the warning message,
"Warning: Detected node scom03 on revision A.11.16.00 that does not support newer network discovery. Will revert to older network discovery."
It is a warning, which means that it won't prevent Serviceguard from operating.
Where did the cluster binary (/etc/cmcluster/cmclconfig) come from? Was it the result of a cmapplyconf, or was it copied from another server or cluster?
Also, install the SAME Serviceguard patch on both servers!
Latest versions of the Serviceguard patch...
Serviceguard 11.15 patch:
PHSS_34505 for 11.11
PHSS_34506 for 11.23
Serviceguard 11.16 patch:
PHSS_34759 for 11.11
PHSS_34760 for 11.23
Serviceguard 11.17 patch:
PHSS_33838 for 11.23
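To confirm both nodes are at the same patch level, the installed patches can be listed with swlist (a sketch; run on each node and compare the output):

```shell
# List installed patches and filter for Serviceguard (PHSS) patches;
# both nodes should report the same PHSS level.
swlist -l patch | grep PHSS
```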