Operating System - HP-UX
11-03-2003 01:21 AM
server
Dear all,
I have one HP-UX server (an rp8400) that I have already partitioned into two hardware partitions (nPars), hpnode1 and hpnode2. I configured clustering between them and defined a package. My questions:
1. How can I test that the clustering is working correctly when one of the nodes fails over, or when that node's Ethernet card goes down?
2. If I run an Apache server on these two nodes, how can I test the clustering?
3. If I defined a logical IP address on the Ethernet cards of these two nodes, and Apache also runs there, how can I test with the ping command when one of the Ethernet cards on one of the nodes is down, or during any failover?
Thank you all
2 REPLIES
11-03-2003 02:03 AM
Re: server
1. Pull the network cable from the primary LAN card and check that the LAN fails over to the standby card by pinging the IP from another host. Next, use cmhaltpkg to halt a package and cmrunpkg to run it on the alternate node. Finally, halt one nPar and verify that all of its packages fail over to the other nPar.
2. Test the same way as described above if Apache is in a package. Alternatively, you can run individual instances of Apache on both nodes, thereby reducing the number of packages.
3. Ping the logical IP address from anywhere on your network before the failover. Then fail over and ping it again. If it is pingable, the package has failed over correctly. You can also try telnet.
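The package test in step 1 can be sketched as a command sequence, assuming a package named pkg1 (a placeholder; substitute your own package name) and the node names from the question:

```shell
# Run from either cluster node. "pkg1" is a hypothetical package name.
cmviewcl -v                  # check cluster, node, and package status first
cmhaltpkg pkg1               # halt the package on its current node
cmrunpkg -n hpnode2 pkg1     # start it on the alternate node
cmmodpkg -e pkg1             # re-enable package switching after the manual move
cmviewcl -v                  # confirm the package now runs on hpnode2
```

These commands only exist on a ServiceGuard node, so treat this as an illustration of the sequence, not something to run elsewhere.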
11-04-2003 01:40 AM
Re: server
1. To get a more complete look at your test, log on to one of the systems and first run "cmviewcl -v" there. On the other system, unplug the primary NIC, then run "cmviewcl -v" again. The output should show that the IP has moved onto the alternate NIC.
2. If Apache has been installed in the package, and not on just one of the nodes, it should fail over from node to node. You should also incorporate the proper Apache start and stop commands into the package control script (package.cntl). If the package fails over from server to server, the Apache processes should then be started on the new server.
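For the start and stop commands, the control script generated by cmmakepkg provides customer-defined hook functions; a minimal sketch, assuming Apache is installed under /opt/apache (both the path and the function bodies are illustrative):

```shell
# Hypothetical excerpt from the package control script (package.cntl).
# The apachectl path is an assumption; adjust it for your installation.
function customer_defined_run_cmds
{
        # Start Apache when the package starts on a node
        /opt/apache/bin/apachectl start
}

function customer_defined_halt_cmds
{
        # Stop Apache cleanly when the package halts or fails over
        /opt/apache/bin/apachectl stop
}
```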
3. Use the "cmviewcl -v" command to verify that the package has failed over. If the PACKAGE IP answers your ping, the test is a success.
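Rather than pinging by hand around the failover, you can poll the package IP and print each up/down transition; a minimal sketch in POSIX shell (the IP is a placeholder, so pass your package's relocatable address; note that "-c" is the Linux/BSD count flag, and HP-UX ping takes its count argument differently):

```shell
# Poll an IP once per second and print each up/down transition, so you can
# see exactly when the relocatable address disappears and reappears during
# a failover test.
watch_ip() {
    ip="${1:-127.0.0.1}"   # package IP to watch; 127.0.0.1 is a placeholder
    tries="${2:-3}"        # in a real test, use a large count
    state="unknown"
    i=0
    while [ "$i" -lt "$tries" ]; do
        # "-c 1" (send one probe) is the Linux/BSD form; HP-UX ping differs
        if ping -c 1 "$ip" >/dev/null 2>&1; then
            new="up"
        else
            new="down"
        fi
        if [ "$new" != "$state" ]; then
            echo "$(date '+%H:%M:%S') $ip is $new"
            state="$new"
        fi
        i=$((i + 1))
        sleep 1
    done
}

watch_ip 127.0.0.1 2
```

A transition from up to down and back to up brackets the failover window; if the address never comes back up, the relocatable IP did not move.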
P.S. I would be very hesitant to implement a ServiceGuard cluster on nPars within a single server. Although it can be done, it seriously limits the high-availability capabilities of MC/ServiceGuard: if for any reason the server itself goes dark, the entire cluster is dark and you have no failover. You want to remove as many single points of failure (SPOFs) as possible.
Joseph M. Short
Senior Technical Consultant
AdvizeX Technologies