11-30-2011 11:02 AM
HP-UX 10.2 - Service Guard - Setting up single node configuration?
I have two (2) HP 9000 K-series servers running HP-UX 10.2 with Service Guard clustering. One server crashed and will be down for a while until we can order replacement parts. When it failed, it should have failed over the Service Guard package to the standby server, but that did not appear to go smoothly.
This is what I see on the (now) active server:
msphnyc1ROOT: cmviewcl -v

    CLUSTER        STATUS
    msphync        up

      NODE         STATUS       STATE
      msphnyc0     down         failed

      Network_Parameters:
      INTERFACE    STATUS       PATH         NAME
      PRIMARY      unknown      8/8/2/0      lan2
      PRIMARY      unknown      10/12/6      lan0
      PRIMARY      unknown      8/12/1/0     lan3
      STANDBY      unknown      8/12/2/0     lan4
      STANDBY      unknown      8/8/1/0      lan1

      NODE         STATUS       STATE
      msphnyc1     up           running

      Network_Parameters:
      INTERFACE    STATUS       PATH         NAME
      PRIMARY      up           8/8/2/0      lan2
      STANDBY      down         8/12/2/0     lan4
      PRIMARY      up           10/12/6      lan0
      PRIMARY      up           8/12/1/0     lan3
      STANDBY      up           8/8/1/0      lan1

    PACKAGE        STATUS       STATE        PKG_SWITCH   NODE
    msphp01        up           running      enabled      msphnyc1

      Script_Parameters:
      ITEM         STATUS   MAX_RESTARTS   RESTARTS   NAME
      Service      up       0              0          cluster_running
      Subnet       up                                 192.50.1.32
      Subnet       up                                 192.50.2.32

      Node_Switching_Parameters:
      NODE_TYPE    STATUS       SWITCHING    NAME
      Primary      down                      msphnyc0
      Alternate    up           enabled      msphnyc1   (current)
The package (our application) looks correct, but it is not working properly. I'm wondering if somehow the crashed server is causing this application to not work correctly?
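For reference, if the package needs to be bounced on the surviving node, the usual Serviceguard sequence is cmhaltpkg, cmrunpkg, then cmmodpkg to re-enable switching. This is only a sketch using the package and node names from the cmviewcl output above; by default it just echoes the commands (set DRY_RUN=0 to actually run them on a cluster node):

```shell
#!/bin/sh
# Sketch: restart a Serviceguard package on a chosen node.
# Package/node names come from the cmviewcl output above.
restart_pkg() {
    pkg=$1
    node=$2
    # In dry-run mode, print each command instead of executing it.
    run() {
        if [ "${DRY_RUN:-1}" = 1 ]; then
            echo "$@"
        else
            "$@"
        fi
    }
    run cmhaltpkg "$pkg"            # stop the package cleanly
    run cmrunpkg -n "$node" "$pkg"  # start it on the surviving node
    run cmmodpkg -e "$pkg"          # re-enable package switching
}

# Example (dry run):
restart_pkg msphp01 msphnyc1
```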
- Tags:
- cmviewcl
11-30-2011 10:56 PM
Re: HP-UX 10.2 - Service Guard - Setting up single node configuration?
You say the failover did not work?
When you look at the Serviceguard application on the formerly standby server (the one that should now be the active node), what do you see?
What does cmviewcl -v report there?
12-01-2011 01:23 AM
Re: HP-UX 10.2 - Service Guard - Setting up single node configuration?
> I'm wondering if somehow the crashed server is causing this application to not work correctly?
That would mean the application depends on something that was not transferred to the standby server along with the package disk(s) and IP address(es). Either the application should have been configured to have no such dependency, or the requirement should have been recognized and worked around during testing, before the cluster went into production. The very reason for clustering is that the other node might not be there someday.
If the dependency was there all the time, this would have been, by definition, a failure in packaging the application for the Serviceguard cluster. If the dependency was created later, it might be some change that was applied to the main server only, without keeping the standby server in an identical configuration.
Was this application ever failed over successfully?
You should look at the package log and any application logs (if there are any) to find error messages or other clues about why the application is failing. "Does not work correctly" is not a useful problem description.
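A quick way to follow that advice is to grep the package control script log for errors. The log path below is an assumption: on HP-UX 10.x Serviceguard the control script typically logs under /etc/cmcluster/<pkg>/, so adjust it to wherever your package actually writes its log:

```shell
#!/bin/sh
# Sketch: show the most recent error/failure lines from a package log.
pkg_log_errors() {
    # $1 = path to the package control script log (path is site-specific)
    grep -in -e 'error' -e 'fail' "$1" | tail -20
}

# Example (hypothetical log path for the msphp01 package):
# pkg_log_errors /etc/cmcluster/msphp01/control.sh.log
```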