
RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

 
Kristopher Linville
New Member

RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

I'm curious as to whether anyone has any experience creating a two-node cluster using Red Hat Cluster in RHEL5 between DL460c G6 Server Blades from two different c3000 BladeSystem Enclosures.

I'm new to BladeSystems and Red Hat Clustering, as we've only ever used single 1U boxes for each of our applications up until now.

I'm attempting to eliminate any single point of failure (SPOF) for a 24/7 customer-facing application serving 40k+ customers, and to that end I want to use resource sharing and failover clustering between two fully redundant (power, network, OA) c3000 enclosures.

Thus far, I've gone through the configuration and IP addressing in an attempt to get a single two-node cluster running, using the blades' iLOs as their fence devices, and so far I've had no luck.

I'm actually hoping that there might be some official documentation sitting around somewhere that I just haven't located yet, but my search continues.

Thanks in advance,

Kris
9 REPLIES
Michael Leu
Honored Contributor

Re: RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

Hi Kris

Disclaimer: I have no clue about RHEL. :-)

IMHO, whether you use two BL460c blades or two rack-mount ProLiant boxes should not matter to the cluster, as the blades behave just like regular DL-type rack servers. So if you find some non-blade-specific documentation on RHEL clustering, it should be applicable to your BL460c... good luck!
James Brand
Frequent Advisor

Re: RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

Kris,

This should work; what problems are you having?

My experience is mostly with clustering on RHEL4 with a minimum of three nodes, but we do have some RHEL5 clusters at our site and quite a bit of expertise. If you can provide some more details I'll try to help.

As an aside, it is not easy to eliminate every SPOF. In our case it is the private network switch. There may be some VLAN magic to fix this, but that's another conversation.

Jim

Kristopher Linville
New Member

Re: RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

I believe that I still have some basic configuration questions that need to be answered. I did finally get the cluster to run, but with only one of the two nodes being recognized.

1.) For a two-node cluster, will I still need to set up a quorum disk on shared storage (I'll be using a NetApp NAS for shared storage, but it's not yet configured)? I've been trying to find some info on quorum settings for two-node clusters; I've seen a lot of mention of tie-breaking when there are only two nodes, and some things read as if a quorum disk were optional.

2.) When I'm defining the fence devices for the nodes, do I set each node's own iLO as its fence device, or another node's iLO?

Yes, at this point I do still have a network switch that is a SPOF, and unfortunately I don't think that will change for a while.

Thank you!
James Brand
Frequent Advisor

Re: RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

Kris,

1.) No, a quorum disk is a special case and is not required. A two-node cluster is also a special case, and you should have two lines in your cluster.conf file like this:

<cman two_node="1" expected_votes="1">
</cman>

I've never tried this, but you should be able to bring up your cluster without shared disk; you just won't be able to start clvmd or gfs services.

2.) Yes, the fence device is the node's own iLO address.
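
For reference, the relevant pieces of cluster.conf might look roughly like this (the node names, iLO address, and credentials below are placeholders, not values from an actual setup):

<clusternode name="node1-hb" nodeid="1" votes="1">
    <fence>
        <method name="1">
            <device name="node1-ilo"/>
        </method>
    </fence>
</clusternode>

<fencedevices>
    <fencedevice agent="fence_ilo" name="node1-ilo" hostname="10.10.20.11" login="fenceuser" passwd="secret"/>
</fencedevices>

You can also sanity-check fencing outside the cluster by running the agent by hand, something like "fence_ilo -a 10.10.20.11 -l fenceuser -p secret -o status", before trusting cman to do it.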

I assume you purchased your Red Hat Cluster Suite software; did you not get support with it?

Jim
James Brand
Frequent Advisor

Re: RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

I just remembered that, unlike RHEL4, RH Cluster & GFS is bundled into RHEL5 at no extra cost. I'm still curious whether you have any software support?
Kristopher Linville
New Member

Re: RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

It's not exactly bundled at no extra cost, but it is no longer a separate product. You are still required to purchase the Advanced Platform license to be able to install the clustering packages.

Yes, we purchase the basic support package with all of our RHEL purchases.

No, I've not yet tried using Red Hat's support. I started here first.

Thanks!

Kris
James Brand
Frequent Advisor

Re: RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

Kris,

I'd be curious to see your cluster.conf, hosts file, and the output of the "cman_tool nodes" and "clustat" commands. I have a working RHEL5 two-node cluster here that I can compare against.
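
For comparison, a healthy two-node RHEL5 cluster reports something along these lines (names and dates here are purely illustrative):

# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M      4   2009-09-01 10:15:01  node1-hb
   2   M      8   2009-09-01 10:15:09  node2-hb

# clustat
Cluster Status for mycluster @ Tue Sep  1 10:20:00 2009
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node1-hb                                    1 Online, Local
 node2-hb                                    2 Online

If cman_tool only lists one node, that usually points at a node-name/interface mismatch or at the two nodes not seeing each other's heartbeat traffic.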

Jim
Gregory Paulsen
New Member

Re: RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

Kristopher,
I am in very much the same situation as you. I am new to HP BladeSystems and RH Cluster Suite and am just now configuring the cluster; my expertise is with IBM pSeries and HACMP. I am using HP iLO to fence, and that works fine. My two-node cluster is also up and running, and I have configured IP addresses, NFS mounts, and Script resources at this point.

We are using the VC Flex-10 Ethernet modules and VC-FC SAN modules on the interconnect side. My problem is getting the SAN up and running and thus providing shared disk resources; I cannot even get the link lights going between the VC-FC module and the Cisco SAN switch.

I will monitor this thread and perhaps we can help each other get through some of the issues that come up. The Red Hat Cluster documentation is awful.
Kristopher Linville
New Member

Re: RHEL5 Clustering with c3000 BladeSystem and DL460c G6 Server Blades

As suggested by our Sr. Windows Admin, I put the (less than high-quality) documentation away, blew away my configurations, and started over. This time, I just walked through the process and configured things however seemed logical to me.

I now have 3 functioning clusters. I still need to configure the failover domains and the relevant shared resources, but I don't expect those to be as difficult.

One key thing I made note of: in the hosts file, you have to be careful when using a dedicated NIC for the cluster heartbeat so that you don't overlap any hostnames. For each two-node cluster, my hosts files contain a total of six IP addresses pointing to unique hostnames (three per server blade), roughly as sketched below.
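
Roughly along these lines per cluster, with all names and addresses changed for illustration (one public, one heartbeat, and one iLO entry per blade):

# Public interfaces
192.168.1.11    node1.example.com    node1
192.168.1.12    node2.example.com    node2
# Dedicated heartbeat NICs (the cluster node names resolve to these)
10.10.10.11     node1-hb
10.10.10.12     node2-hb
# iLO fence devices
10.10.20.11     node1-ilo
10.10.20.12     node2-ilo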

I also had to make sure that SELinux was either disabled or in permissive mode so that it did not block anything. I'm still debating whether to leave it that way or to write some policy rules so that it can be re-enabled.
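
For anyone else hitting this, checking and changing the mode is quick:

# check the current mode
getenforce
# put the running system into permissive mode
setenforce 0
# to make it stick across reboots, set SELINUX=permissive (or disabled) in /etc/selinux/config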

I'll continue watching this thread and may return with other questions as I get these things built.