
connecting ilo2

 
mammadshah
Advisor

connecting ilo2

Hi,

I want to configure RHEL clustering. I am using RHEL 5.3, HP servers and HP storage. I need to know how to connect iLO2 between the nodes.

Matti_Kurkela
Honored Contributor

Re: connecting ilo2

Welcome to ITRC Forums!

The iLO2 interface of each node should be reachable from the regular network interfaces of all the other nodes. It is also useful if the sysadmin can use the iLO2 connections for remote management purposes.

On the other hand, you should design your network so that a single fault can never break *all* the network connections between the cluster nodes.

In a two-node cluster it might be possible to use crossover cables to connect iLO of node A to a regular NIC of node B and vice versa, and dedicate one regular NIC on each node for fencing use only. However, this configuration makes it very difficult to add new nodes to the cluster, so it isn't recommended.
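
Before wiring anything into the cluster configuration, it is worth checking by hand that each node can reach the other node's iLO2 over the network. A rough sketch, assuming the peer's iLO2 answers at 192.168.10.12 and a dedicated fencing account has been created on it (substitute your own address and credentials):

# query the power status of the peer node through its iLO2
fence_ilo -a 192.168.10.12 -l fenceuser -p fencepass -o status

If this reports the power status correctly from every node, the same parameters can be used later for the fence device entries in cluster.conf.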

How many nodes does your cluster have, and what can you tell about your network configuration?

MK
mammadshah
Advisor

Re: connecting ilo2

I want to install Oracle on a two-node cluster. Do I need a quorum disk?
And what if I need to add a new HP server to the cluster later?

thanks.
Matti_Kurkela
Honored Contributor

Re: connecting ilo2

Yes, you do. In a two-node cluster without a quorum disk, there is a risk of a fencing race:

If all heartbeat connections between the nodes are broken for some reason, both nodes will think: "I'm running, therefore I am fine. The other node just stopped sending heartbeats, so it may or may not have failed, but is certainly unreachable. I must fence it and take over its services."

Each node will try to fence the other one. This is an unstable situation where luck is a factor - and therefore it is not at all desirable in a high-availability cluster.


The quorum disk provides the "third opinion" that breaks the tie. The quorum disk daemons on the nodes will sort out the situation and give their vote(s) to only one half of the split cluster, never both.

In addition, the quorum disk daemon can perform additional tests with external targets to decide which half of the cluster seems more functional. When properly configured, the quorum disk system will ensure that the half of the cluster that has been isolated by the failure will get voted out (and fenced, just to be sure).
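
For illustration only, here is how a quorum disk and one such heuristic might be declared in the quorumd section of cluster.conf; the label, the vote counts and the gateway address are placeholders, so adjust them to your own setup:

<!-- two node votes (1 each) plus 1 vote from the quorum disk -->
<cman expected_votes="3" two_node="0"/>
<quorumd interval="1" tko="10" votes="1" label="oraqdisk">
  <heuristic program="ping -c1 -w1 192.168.10.1" score="1" interval="2" tko="3"/>
</quorumd>

With something like this in place, a node that can no longer ping the gateway loses its heuristic score, so the isolated half of the cluster is the one that gets voted out.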

In a three-node cluster, a quorum disk configuration is not essential. But a three-node cluster becomes a two-node cluster every time you shut down one of the nodes for maintenance (pre-scheduled or otherwise). If one of the two remaining nodes fails just then, a quorum disk would still be useful in making sure the cluster behaves in a predictable fashion.

In a cluster with four or more nodes, the probability of exactly half the cluster losing all connectivity with the other half should be very small, if you've designed your cluster network connections right. You can still use a quorum disk if you feel your configuration requires it, but that would be a special case.
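
To put rough numbers on it (assuming 1 vote per node and the usual majority rule of expected_votes/2 + 1): a three-node cluster has quorum at 2 votes, so it survives the loss of one node, but if a second node then drops out the last node holds only 1 of 3 expected votes and loses quorum. Giving the quorum disk 2 votes (node count minus one) raises expected votes to 5 and quorum to 3, so that last node plus the quorum disk would still be quorate.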

MK
mammadshah
Advisor

Re: connecting ilo2

Great.
How do I create the quorum disk? If I create a 500 MB partition on the SAN, will it work as a quorum disk?
Matti_Kurkela
Honored Contributor

Re: connecting ilo2

The quorum disk must be accessible by all the nodes, so a SAN storage LUN is exactly what is needed. It does not need to be big: 10 MB is the minimum required size. 500 MB is more than enough.

Please see:

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/s1-qdisk-considerations-CA.html

man qdiskd
man 5 qdisk
man mkqdisk
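
As a rough sketch (the device path and label below are placeholders; point mkqdisk at the multipath device of your SAN LUN, and make the label match the quorumd entry in cluster.conf):

# initialise the LUN as a quorum disk (run on one node only)
mkqdisk -c /dev/mapper/qdisk_lun -l oraqdisk

# on every node, verify that the same quorum disk is visible
mkqdisk -L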

MK
mammadshah
Advisor

Re: connecting ilo2

Great help.

I have 4 nodes and a SAN, and I want to set up 2 clusters; each pair of nodes will be grouped to serve one service.

Do I need to create a separate quorum disk on the SAN for each cluster, or can both clusters use a single quorum disk?

Thanks.
Viktor Balogh
Honored Contributor

Re: connecting ilo2

No, a single quorum disk is enough for both clusters.
****
Unix operates with beer.