
Future for MC/SG

 
SOLVED

I note with interest that the MC/SG implementation for Linux doesn't provide 'cluster lock disk' functionality - it would appear you must use an arbitrator node.

Is this the new direction for MC/SG? Or is this just due to a limitation in Linux?

Also on the topic of MC/SG - where can I get some info on System Multi-Node Packages: how to set them up, what to use them for, etc.?

Cheers

Duncan

I am an HPE Employee
Sridhar Bhaskarla
Honored Contributor

Re: Future for MC/SG

Duncan,

I have no idea why the cluster lock disk is not supported on Linux. It seems to me that MC/ServiceGuard has been generalized on Linux. Arbitrator nodes come into the picture for MC/ServiceGuard Metro clusters and Continental clusters.

I don't know if you have already looked at these documents.

http://docs.hp.com/hpux/onlinedocs/B3936-90053/B3936-90053.html

For multisite clusters...

http://docs.hp.com/hpux/onlinedocs/B7660-90008/B7660-90008.html

-Sri



You may be disappointed if you fail, but you are doomed if you don't try

Re: Future for MC/SG

Yes, I've looked at the manuals...

Obviously system multi-node packages are used when you are using VxVM in your cluster, but can I use them myself?

Thx

Duncan

I am an HPE Employee
melvyn burnard
Honored Contributor

Re: Future for MC/SG

The answer is that Linux has no concept of a cluster lock disk. To solve that issue, ServiceGuard for Linux uses the idea of a quorum server, which is another node OUTSIDE the cluster, as opposed to the arbitrator nodes in a Campus/Metro cluster, which MUST be part of the cluster.
There is a possibility that the quorum server functionality may also become available for ServiceGuard on HP-UX, but that is still to be decided/implemented.
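To make the difference concrete, here is a rough sketch of how the tie-breaker shows up in the cluster ASCII configuration file on each platform. All hostnames, volume group names and device paths below are made up for illustration; check the generated template from cmquerycl for the exact parameter names in your release.

```
# HP-UX style: tie-breaking via a cluster lock disk
# (VG and PV names are illustrative only)
CLUSTER_NAME            cluster1
FIRST_CLUSTER_LOCK_VG   /dev/vglock

NODE_NAME               node1
  NETWORK_INTERFACE     lan0
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c0t1d0

# Linux style: tie-breaking via a quorum server
# running on a machine OUTSIDE the cluster
CLUSTER_NAME            cluster1
QS_HOST                 qshost.example.com
QS_POLLING_INTERVAL     300000000
QS_TIMEOUT_EXTENSION    2000000
```

The key point is that the lock disk must be a shared device visible to the cluster nodes, while the quorum server only needs network reachability from them.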

As far as the multi-node package is concerned, what are you actually looking for?

My house is the bank's, my money the wife's, But my opinions belong to me, not HP!

Re: Future for MC/SG

Melvyn,

Thanks for that... as for what I am looking for, if I knew that I'd probably be a more complete person than I am ;o)

But seriously... I have nothing particular in mind. It's just that when new functionality is added to a product I use, I'm interested in what it can do. System Multi-Node packages appear to have been introduced specifically for CVM, and as far as I can see, the documentation doesn't give you any idea of how else they could be used (maybe because they can't!). Sorry if I'm not being specific enough to get a meaningful reply.

Regards

Duncan

I am an HPE Employee
melvyn burnard
Honored Contributor
Solution

Re: Future for MC/SG

ah, that explains it. I just wanted to confirm what you were referring to.
The package you are talking about needs to be set up on a cluster where you wish to use CVM. This package MUST run on each node participating in the cluster, and is used to monitor/confirm that the correct VxVM daemon processes are present to keep CVM running.
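For reference, a system multi-node package configuration looks roughly like the sketch below. ServiceGuard ships a ready-made template for the CVM package, so you would not normally write this by hand; the package name, service name and script path here are illustrative only.

```
# Package ASCII file for the CVM system multi-node package
# (names and paths are illustrative; use the supplied template)
PACKAGE_NAME        VxVM-CVM-pkg
PACKAGE_TYPE        SYSTEM_MULTI_NODE

# A system multi-node package runs on ALL nodes in the cluster
NODE_NAME           *

SERVICE_NAME        VxVM-CVM-pkg.srv
SERVICE_CMD         "/etc/cmcluster/cvm/VxVM-CVM-pkg.sh"
```

The PACKAGE_TYPE of SYSTEM_MULTI_NODE is what distinguishes it from an ordinary failover package, which runs on only one node at a time.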

You may want to read through:

Managing MC/ServiceGuard at:
http://docs.hp.com/hpux/onlinedocs/B3936-90053/B3936-90053.html

The section heading:
Creating a Storage Infrastructure with CVM

will hopefully shed light on your question
My house is the bank's, my money the wife's, But my opinions belong to me, not HP!
Sridhar Bhaskarla
Honored Contributor

Re: Future for MC/SG

I am not quite confident that I understood your question well.

Maybe I am getting confused between your multi-node cluster and a multi-site cluster because of the word "arbitrator". If you are not interested in setting up a multi-site cluster but in a multi-node cluster on HP-UX, it's simple. HP recommends a cluster lock for a three-node cluster, but it is not mandatory - it depends on the risk you can afford. Configuring a multi-node cluster is done exactly the same way as a two-node cluster, except that the lock disk and lock volume group are not mentioned in the cluster config file.
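In other words, the workflow is the same as for two nodes; a rough sketch of the commands (node names are made up, and the exact options may vary by ServiceGuard release):

```
# Generate a template config file for a three-node cluster
cmquerycl -v -C /etc/cmcluster/cluster.ascii -n node1 -n node2 -n node3

# Edit cluster.ascii: if you choose to run without a cluster lock,
# leave FIRST_CLUSTER_LOCK_VG / FIRST_CLUSTER_LOCK_PV unset

# Verify and apply the configuration, then start the cluster
cmcheckconf -C /etc/cmcluster/cluster.ascii
cmapplyconf -C /etc/cmcluster/cluster.ascii
cmruncl -v
```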

If you are talking about setting up the cluster on Linux, I'll stay mute.

-Sri
You may be disappointed if you fail, but you are doomed if you don't try