Operating System - HP-UX
04-17-2010 01:26 AM
I want to implement a test cluster.
Dear all,
Is it possible to build a Serviceguard cluster on two servers (test1 and test2) that already have Serviceguard installed? The problem is that these servers have no access to shared storage. Can I use a third server's (test3) local disk for this purpose?
Please help me.
3 REPLIES
04-17-2010 02:12 AM
Re: I want to implement a test cluster.
Serviceguard requires either a quorum server or a lock disk that is accessible at the SCSI level (= not NFS) by all the nodes.
A quorum server is a small piece of software you can run on any HP-UX or Linux system that is *not* part of the cluster(s) it's serving.
If you install a quorum server on your test3, you can build a Serviceguard cluster on test1 and test2, using test3 as a quorum server and NFS server.
This would allow you to:
- learn/test the fundamentals of Serviceguard cluster setup
- learn/test most parts of Serviceguard package setup
- test application integration to Serviceguard, assuming that the application uses package storage through regular filesystem access only
With this setup, you *cannot*:
- learn/test how to set up shared SCSI/FibreChannel disks (this is an important part of many Serviceguard set-ups)
- create any packages that use raw disk storage, because NFS does not allow raw access (no databases with raw disks, for example)
- learn/test a CFS (Cluster FileSystem)
Note: NFS allows some things that are impossible in a shared SCSI/FC disk setup, like accessing the same filesystem from two or more nodes without special cluster filesystem. You should be aware of these differences and make sure it won't invalidate whatever you're trying to test or learn.
So, it's technically possible, but not as useful as a test cluster with real shared SCSI/FC disks.
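As a rough outline of the quorum server setup on test3 (the exact product depot name and file paths vary by Serviceguard version, so treat every name below as an assumption to be checked against your release notes):

```shell
# On test3: install the Quorum Server product from its depot
# (depot location and product name here are placeholders).
swinstall -s /tmp/qs_depot QuorumServer

# Authorize the cluster nodes in the QS authorization file
# (path as documented for classic Serviceguard releases):
echo "test1" >> /etc/cmcluster/qs_authfile
echo "test2" >> /etc/cmcluster/qs_authfile

# Run the quorum server daemon; in production it is normally
# started from /etc/inittab so it respawns automatically.
/usr/lbin/qs >> /var/adm/qs/qs.log 2>&1 &
```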
MK
04-17-2010 04:28 AM
Re: I want to implement a test cluster.
Dear,
Thanks for your quick reply. So where should I start? :)
Can anyone guide me through the above? It would be really helpful for me.
04-18-2010 02:32 AM
Re: I want to implement a test cluster.
The HP manual "Managing Serviceguard" is designed to give an HP-UX sysadmin enough knowledge to set up his/her first Serviceguard cluster, even with zero prior knowledge about clusters when opening the book for the first time.
So, please, READ THAT BOOK. Really.
A printed version of that book usually comes with each Serviceguard license - but if you don't have it, the book is available in the High Availability section of the docs.hp.com website.
http://docs.hp.com/en/ha.html#Serviceguard
That book is the "standard" for Serviceguard administrators: all other documentation is written with the assumption that the sysadmin already knows and understands what's in that book.
Serviceguard is very flexible: its configuration greatly depends on the structure of your network and on what you wish to do with it. The "Managing Serviceguard" book describes in great detail how to find out all the things you need to know about your network structure, and how to apply them into your Serviceguard configuration, step by step.
Because Serviceguard is so flexible, there is no "one-size-fits-all" solution that would be applicable to every environment.
In this specific case, you're planning to use Serviceguard a bit differently from the way it was designed to work.
Here's a short description of what you'll need to do. I'm afraid it won't be of much use to you until you've read "Managing Serviceguard", but if I had to explain everything from the ground up, I would end up re-writing half of the "Managing Serviceguard" book here... and that's such a big job that I wouldn't do it for free.
Part 1: cluster setup
First install the quorum server on test3.
Then set up Serviceguard on test1 & test2. This will be a standard two-node cluster setup. There are normally two options for cluster lock, but because you don't have any shared SCSI/FC disks, you must pick the quorum server as your cluster lock mechanism.
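A sketch of the standard command sequence for this step, using the hostnames from this thread (the `-q` option names the quorum server host; check your version's cmquerycl(1m) man page, as defaults differ between releases):

```shell
# On test1: generate a cluster configuration template that names
# both nodes and test3 as the quorum server:
cmquerycl -v -C /etc/cmcluster/cluster.conf -n test1 -n test2 -q test3

# Edit cluster.conf (cluster name, heartbeat networks, QS_HOST,
# QS_POLLING_INTERVAL, ...), then verify and apply it:
cmcheckconf -v -C /etc/cmcluster/cluster.conf
cmapplyconf -v -C /etc/cmcluster/cluster.conf

# Start the cluster and confirm both nodes joined:
cmruncl -v
cmviewcl -v
```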
Part 2: package setup
Here's where you must be a bit tricky.
A Serviceguard package essentially contains three kinds of things:
- zero or more IP addresses that move around along with the package
- zero or more filesystems on shared LVM or VxVM volume groups
- zero or more commands to execute when a package is going up or down.
Note that all these parts are *optional*: you can create a package that contains none of these things, but such a package would not be very useful :)
Because you don't have shared SCSI/FC storage, you cannot have shared LVM or VxVM volume groups. For NFS mounts, all the VG activation/deactivation and filesystem check procedures used with SCSI/FC disks are inappropriate.
So you're going to leave out the filesystem part of the package configuration, and instead put any NFS mount/unmount commands you want into the "commands to execute" part. This part is known as "customer defined start/stop commands" in a legacy-style configuration (which was the only way to do things in Serviceguard A.11.17 and below), and "external_script" in the new modular-style configuration (the preferred form in Serviceguard A.11.18 and newer).
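For the legacy style, the customer-defined section of the package control script might look roughly like this (the export path `test3:/shared` and mount point `/mnt/pkg` are placeholders, and the `test_return` error codes follow the convention used in the template the cmmakepkg command generates):

```shell
# Fragment of a legacy package control script: NFS mount on package
# start, unmount on package halt, in place of shared-VG activation.
function customer_defined_run_cmds
{
    # Mount the NFS filesystem from test3 when the package starts
    mount -F nfs test3:/shared /mnt/pkg
    test_return 51
}

function customer_defined_halt_cmds
{
    # Unmount it when the package halts
    umount /mnt/pkg
    test_return 52
}
```

In a modular-style package, the same mount/umount logic would instead live in a standalone script referenced by the `external_script` parameter of the package configuration file.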
I assume you don't need any help in setting up NFS server side on test3 so that test1 and test2 can access its filesystem(s). If you do, search this forum before asking a new question: you'll find many examples.
MK