Operating System - HP-UX
11-12-2004 01:47 PM
Configure Cluster --load balancing.
I have a four-node cluster (node1, node2, node3, node4) with two packages. Both packages currently run on node1, and the other nodes are standbys. In this model node1 is exhausted. I now want the two packages to run across all four nodes (or at least across two nodes) concurrently.
Is there any way to balance the load across the four nodes?
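For context, both packages sitting on node1 would show up in cmviewcl roughly as below (an illustrative sketch only; the cluster name is made up and the exact column layout varies by ServiceGuard release):

# cmviewcl -v (illustrative)
CLUSTER        STATUS
cluster1       up

  NODE         STATUS       STATE
  node1        up           running

    PACKAGE    STATUS       STATE        AUTO_RUN     NODE
    pkg1       up           running      enabled      node1
    pkg2       up           running      enabled      node1

(node2, node3 and node4 report up/running with no packages.)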
HP is simple
3 REPLIES
11-12-2004 01:54 PM
Re: Configure Cluster --load balancing.
There is no way that I know of to have an MC/SG package running on multiple nodes simultaneously.
If possible, you would be better off having package1 run on node1 and package2 run on node2, with node3 as the standby for node1 and node4 as the standby for node2, as in the sketch below.
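A minimal sketch of how the node lists in the two legacy ASCII package files could express that layout (package and node names taken from the thread; generate real templates with cmmakepkg and check the keywords against your ServiceGuard version):

# pkg1 configuration -- primary node1, standby node3
PACKAGE_NAME       pkg1
FAILOVER_POLICY    CONFIGURED_NODE   # fail over in NODE_NAME order
FAILBACK_POLICY    MANUAL
NODE_NAME          node1             # primary
NODE_NAME          node3             # standby

# pkg2 configuration -- primary node2, standby node4
PACKAGE_NAME       pkg2
FAILOVER_POLICY    CONFIGURED_NODE
FAILBACK_POLICY    MANUAL
NODE_NAME          node2             # primary
NODE_NAME          node4             # standby

Apply each file with cmapplyconf -P, then move the second package over with cmhaltpkg pkg2 followed by cmrunpkg -n node2 pkg2 (and cmmodpkg -e pkg2 to re-enable switching).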
11-12-2004 02:33 PM
Re: Configure Cluster --load balancing.
Hi,
This is only possible if you can split your application to run on multiple nodes. Then you can package the pieces and run them through ServiceGuard. As long as your application supports that configuration, ServiceGuard shouldn't have any issues with it.
Oracle RAC, MQ clusters, etc. have the capability to load share; see the sketch below.
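For example, if the application splits into four independent instances, each instance becomes its own package and you start one per node (a sketch only; the app_inst* package names are made up):

# start one instance-package on each node so the load is shared
cmrunpkg -n node1 app_inst1
cmrunpkg -n node2 app_inst2
cmrunpkg -n node3 app_inst3
cmrunpkg -n node4 app_inst4

# re-enable package switching so each instance can still fail over
cmmodpkg -e app_inst1
cmmodpkg -e app_inst2
cmmodpkg -e app_inst3
cmmodpkg -e app_inst4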
-Sri
You may be disappointed if you fail, but you are doomed if you don't try
11-12-2004 11:31 PM
Re: Configure Cluster --load balancing.
Well...
If you already have a specific package that runs only on HP-UX, then what follows is not for you.
But,
if you are setting up an environment that needs enough growth capacity that you might end up needing the power of more than one node simultaneously (as the configuration in the original question obviously does), then have another look at VMS (officially called OpenVMS, nowadays also sold by HP).
ANY software that is NOT written specifically to PREVENT multi-access (by explicitly demanding single-process access to its data, which regrettably several database management packages do) will ALWAYS run concurrently on ANY number of cluster nodes (up to 96 supported, and many more sometimes seen "in the wild").
.. and those nodes may be 800 km (= 500 miles) apart.
Well, probably the great majority of readers of this forum already have their systems running, and it would be difficult to justify such a move, but for those who are in a planning phase, it could well be worthwhile to realise that interesting solutions DO exist!
fwiw,
Cheers.
Have one on me.
Jan
Don't rust yours pelled jacker to fine doll missed aches.