10-22-2009 12:21 PM
RHEL5 Clustering with c3000 BladeSystem and BL460c G6 Server Blades
I'm new to BladeSystems and Red Hat Clustering, as we've only ever used single 1U boxes for each of our applications up until now.
I'm attempting to eliminate any single point of failure (SPF) for a 24/7 customer-facing application serving 40k+ customers, and to that end I want to use resource sharing and failover clustering between two fully redundant (power, network, OA) c3000 enclosures.
So far I've worked through the configuration and IP addressing in an attempt to get a single two-node cluster running, using the blades' iLOs as their fence devices, but I've had no luck.
I'm actually hoping that there might be some official documentation sitting around somewhere that I just haven't located yet, but my search continues.
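In case it's useful, this is the general shape of what I've been configuring in /etc/cluster/cluster.conf (the node names, iLO addresses, and credentials below are placeholders, not my real values):

    <?xml version="1.0"?>
    <cluster name="c3000clu" config_version="1">
      <clusternodes>
        <clusternode name="node1" nodeid="1" votes="1">
          <fence>
            <method name="1">
              <device name="ilo-node1"/>
            </method>
          </fence>
        </clusternode>
        <clusternode name="node2" nodeid="2" votes="1">
          <fence>
            <method name="1">
              <device name="ilo-node2"/>
            </method>
          </fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <!-- fence_ilo logs into each blade's iLO and power-cycles the node when fencing fires -->
        <fencedevice agent="fence_ilo" name="ilo-node1" hostname="10.0.1.11" login="Administrator" passwd="changeme"/>
        <fencedevice agent="fence_ilo" name="ilo-node2" hostname="10.0.1.12" login="Administrator" passwd="changeme"/>
      </fencedevices>
    </cluster>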
Thanks in advance,
Kris
10-22-2009 11:21 PM
Re: RHEL5 Clustering with c3000 BladeSystem and BL460c G6 Server Blades
Disclaimer: I have no clue about RHEL. :-)
IMHO, whether you use two BL460c blades or two ProLiant rack boxes shouldn't matter to the cluster, as the blades behave just like regular DL-type rack servers. So any non-blade-specific documentation on RHEL Clustering you find should apply to your BL460c... good luck!
10-26-2009 01:56 PM
Re: RHEL5 Clustering with c3000 BladeSystem and BL460c G6 Server Blades
This should work; what problems are you having?
My experience is mostly with clustering on RHEL4 with a minimum of three nodes, but we do have some RHEL5 clusters at our site and quite a bit of expertise. If you can provide some more details, I'll try to help.
As an aside, it is not easy to eliminate an SPF. In our case it is the private network switch. There may be some VLAN magic to fix this, but that's another conversation.
Jim
10-26-2009 02:15 PM
Re: RHEL5 Clustering with c3000 BladeSystem and BL460c G6 Server Blades
1.) For a two-node cluster, will I still need to set up a quorum on shared storage? (I'll be using a NetApp NAS for shared storage, but it's not yet configured.) I've been trying to find some info on quorum settings for two-node clusters; I've seen a lot of mention of ties when there are only two nodes, and some things read as if a quorum disk were optional.
2.) When I'm defining the fence devices for the nodes, do I set each node's own iLO as its fence, or do I set another node's iLO as the fence?
Yes, at this point, I do still have a network switch that is an SPF, and unfortunately I don't think that will change for a while.
Thank you!
10-28-2009 08:23 AM
Re: RHEL5 Clustering with c3000 BladeSystem and BL460c G6 Server Blades
1.) No, a quorum disk is a special case and not required. A two-node cluster is also a special case, and you should have two lines in your cluster.conf file like this -
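    <cman expected_votes="1" two_node="1">
    </cman>

(Those are the standard RHEL5 two-node cman settings: two_node="1" tells cman that a single vote is quorate, so the surviving node keeps the cluster running instead of deadlocking at half the votes.)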
I've never tried this, but you should be able to bring up your cluster without a shared disk; you just won't be able to start clvmd or GFS services.
2.) Yes, the fence device is the node's own iLO address.
I assume you purchased your Red Hat Cluster Suite software; did you not get support with it?
Jim
10-28-2009 10:44 AM
Re: RHEL5 Clustering with c3000 BladeSystem and BL460c G6 Server Blades
Yes, we purchased the basic support package with all of our RHEL purchases.
No, I've not yet tried using Red Hat's support. I started here first.
Thanks!
Kris
10-28-2009 10:54 AM
Re: RHEL5 Clustering with c3000 BladeSystem and BL460c G6 Server Blades
I'd be curious to see your cluster.conf, hosts file, and the output of the "cman_tool nodes" and "clustat" commands. I have a working RHEL5 two-node cluster here that I can compare against.
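On RHEL5 those live in the standard places, so something like this on either node should capture everything:

    cat /etc/cluster/cluster.conf   # cluster definition, including fence devices
    cat /etc/hosts                  # hostname/IP mappings the cluster names rely on
    cman_tool nodes                 # membership: node IDs, join status, names
    clustat                         # quorum state and service status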
Jim
10-29-2009 02:40 PM
Re: RHEL5 Clustering with c3000 BladeSystem and BL460c G6 Server Blades
I am in very much the same situation as you: I am new to HP BladeSystems and Red Hat Cluster Suite, and am just now configuring the cluster. My expertise is with IBM pSeries and HACMP. I am using HP iLO to fence, and that works fine. My two-node cluster is up and running, and I have configured IP address, NFS mount, and script resources at this point.
We are using the VC Flex-10 Ethernet modules and VC-FC SAN modules on the interconnect side. My problem is getting the SAN up and running and thus providing shared disk resources; I cannot even get the link lights going between the VC-FC module and the Cisco SAN switch.
I will monitor this thread, and perhaps we can help each other get through some of the issues that come up. The Red Hat Cluster documentation is awful.
10-30-2009 11:55 AM
Re: RHEL5 Clustering with c3000 BladeSystem and BL460c G6 Server Blades
I now have 3 functioning clusters. I still need to configure the failover domains and the relevant shared resources, but I don't expect those to be as difficult.
One key thing I noted is that in the hosts file, you have to be careful when using a dedicated NIC for the cluster heartbeat so that you don't overlap any hostnames. For each two-node cluster, my hosts files contain a total of six IP addresses mapped to unique hostnames (three per server blade).
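To illustrate the pattern (the addresses and names below are made up, and the three entries per blade are assumed here to be the public interface, the dedicated heartbeat NIC, and the iLO):

    # Public/production interfaces
    192.168.10.11   node1    node1.example.com
    192.168.10.12   node2    node2.example.com
    # Dedicated heartbeat NICs -- use these hostnames in cluster.conf so
    # cluster traffic stays on the private network
    10.0.0.11       node1-hb
    10.0.0.12       node2-hb
    # iLO addresses referenced by the fence devices
    10.0.1.11       node1-ilo
    10.0.1.12       node2-ilo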
I also had to make sure that SELinux was either disabled or in permissive mode so that it didn't block anything. I'm still debating whether to leave it that way or write some policy rules so that it can be re-enabled.
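For anyone following along, the usual way to do that on RHEL5 is:

    getenforce     # show the current mode (Enforcing/Permissive/Disabled)
    setenforce 0   # switch to permissive immediately, until the next reboot
    # To make it permanent, set SELINUX=permissive (or disabled) in /etc/selinux/config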
I'll continue watching this thread and may return with other questions as I get these things built.