03-21-2010 11:33 PM
SG
I have two packages, A and B, configured in a two-node cluster.
When I reboot the system, the cluster comes up and package A also comes up; however, package B does not come up, and rc.log shows failure messages for activating the VGs that are part of package B.
I understand these messages appear because I haven't configured /etc/lvmrc so that only the VGs that don't belong to the cluster are activated at system boot; however, this should not prevent package B from coming up.
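For reference, the change I believe is needed there would look roughly like this (VG names are placeholders, not my actual configuration):

    # /etc/lvmrc (sketch)
    AUTO_VG_ACTIVATE=0          # do not auto-activate every VG at boot

    custom_vg_activation()
    {
            # activate only the local, non-cluster VGs here;
            # the VGs of packages A and B are left for Serviceguard to activate
            /sbin/vgchange -a y /dev/vg_local
            return 0
    }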
I have checked the package and cluster configuration files using cmcheckconf; AUTO_RUN is also enabled for package B, so it should come up automatically.
Also, I have set the cluster rc script to start before the NFS daemons at boot, and some of the filesystems that belong to the package (the one that is not starting at boot) are exported to another server. This forces me to:
1) Start the package manually.
2) Once the package is up and running, remount the filesystems on the system they are exported to, so the data in them becomes visible.
Kindly suggest what needs to be done in such a case.
Thanks in advance.
Cheers
AmitJ.
03-22-2010 12:17 AM
Re: SG
With Serviceguard A.11.17 and older, the package log of packageB would typically be at /etc/cmcluster/packageB/packageB.cntl.log.
With Serviceguard A.11.18 and newer, the package log location is configurable. If the log is not at the standard location (see above), check the script_log_file parameter in your package configuration file.
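For example (the package name and file names below are just the usual defaults and placeholders, adjust them to your configuration):

    # last lines of the legacy package control script log
    tail -50 /etc/cmcluster/packageB/packageB.cntl.log

    # for A.11.18 and newer, check where the log is configured to go
    grep -i script_log_file /etc/cmcluster/packageB/packageB.conf

    # current state and switching attributes of the package
    cmviewcl -v -p packageB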
Setting the cluster to start up before the NFS daemons does not guarantee anything at all about package start-up: the cluster startup script will wait until the cluster has been formed, but after that, the package startup is something Serviceguard does independently and asynchronously. It is very possible that the NFS daemons will complete their start-up before your package B is up.
From your description, it sounds like you've added to /etc/exports some references to package B's disks. This is not the right way to do it: it can cause the exact problems you're describing.
HP has a special HA-NFS toolkit for Serviceguard. Unfortunately, it is not available for free. The product number of the current version of the NFS toolkit is B5140BA.
Go to http://docs.hp.com/en/ha.html#Highly%20Available%20NFS and find the appropriate documents for your HP-UX version in the "Highly Available NFS" section. Read the first chapter of "Managing MC/Serviceguard NFS", titled "Overview of MC/Serviceguard NFS". The sub-chapter "How the Control and Monitor Scripts Work" describes the optimal set-up of NFS with Serviceguard.
If you cannot purchase the HA-NFS toolkit, the documentation might give you some idea about all the things you must take into account in the setup of a package that contains NFS-exported filesystems.
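The general idea, very much simplified (this is not the actual toolkit script, and the mount point below is only an example), is that the package itself exports and unexports the filesystems it owns instead of listing them in /etc/exports:

    # in the legacy package control script's customer-defined functions:
    function customer_defined_run_cmds
    {
            exportfs -i /pkgB/data     # export only after the package has mounted the filesystem
    }

    function customer_defined_halt_cmds
    {
            exportfs -u /pkgB/data     # unexport before the package unmounts the filesystem
    }

That way the export exists only on the node that currently runs the package.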
MK
03-22-2010 12:21 AM
Re: SG
>...rc.log shows failure messages for activating the VGs that are part of package B.
>I understand these messages appear because I haven't configured /etc/lvmrc...
You should look at /var/adm/syslog/syslog.log and at the package logs (under /etc/cmcluster/).
There should be some helpful information about your problems there.
Best regards,
Horia.
03-22-2010 02:23 AM
Re: SG
I am aware of the HA NFS toolkit; however, for now I want to investigate the present scenario, since it works (even if it is not the correct way) for package A.
03-22-2010 03:10 AM
Re: SG
Horia.
03-23-2010 03:14 AM
Re: SG
How does rebooting one system start a cluster unless this is a one-node cluster?
If the actual problem is not clear, it always helps to include the log messages in the problem description. For instance, what does /var/adm/syslog/syslog.log contain at the time of the supposed cluster formation and package startup?
Are there any messages with the same timestamp in the package log files?
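For example, something like this (adjust to your node) would show the relevant part of the boot sequence:

    # Serviceguard daemon and package manager messages around the reboot
    grep -iE 'cmcld|cmclconfd|package' /var/adm/syslog/syslog.log | tail -40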
03-23-2010 08:08 PM
Re: SG
Your problem lies in the following sentence:
"When I reboot the system, the cluster comes up and package A also comes up; however, package B does not come up, and rc.log shows failure messages for activating the VGs that are part of package B."
It seems that the VGs belonging to package B are not in cluster mode. Set those VGs to exclusive mode; hopefully it will work.
vgchange -a e vg02 (example)
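If the VGs have not been made cluster-aware at all, they usually need to be marked first while the cluster is running; a rough sequence (vg02 is again only an example name) would be:

    vgchange -c y vg02     # mark the VG as cluster-aware
    vgchange -a e vg02     # exclusive activation (normally done by the package control script)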
Shardha