BladeSystem - General


 
chuckk281
Trusted Contributor

Operating System-Based (OS) clustering on blades

Mike was looking for help for his customer on clustering best practices:

 

************************

 

One of my good blade customers has just taken over management of a group of critical (Dell) rackmount servers at his site that run OS-based (Microsoft) cluster services. These are database (DB) servers that are critical to this organization's success. His current environment is BL460/BL490 blades in many enclosures. All enclosures have VC Flex-10 modules, and most of them have VC-FC modules as well. Most of the current nodes are part of VMware clusters.

 

Remembering a VC firmware issue from a year or so ago (it had to do with DNS: if one VC module in an enclosure failed, it took down the other one as well), he has asked me whether it's a good idea to migrate these critical rackmount servers to his blade environment. My immediate response was that this would be a recommended approach, but I had to admit that most of my customers recently have been clustering only VMware, not physical servers.

 

Does anyone have any recent relevant experience with this type of environment that would suggest I’m steering this customer right/wrong?  He was particularly concerned about the availability of the heartbeat network, since the physical nodes would be installed in separate enclosures that will not be stacked.

 

**************

 

Some good discussion:

 

From Chad:

Even if they aren't using proper stacking links, you can use stacking cables to create an internal network across two enclosures (create the SUS or vNet first, add the ports you will connect, then connect the cables). Just make sure the cluster blades are in different enclosures and you're good to go.
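
A quick way to sanity-check the result is to run a throwaway UDP echo test between one blade in each enclosure before any cluster node is moved onto the new network. The sketch below uses only the Python standard library; the port number is an arbitrary placeholder and nothing in it is Virtual Connect-specific.

# vnet_check.py - minimal UDP echo test across the cross-enclosure network.
# Run "vnet_check.py server" on a blade in one enclosure and
# "vnet_check.py client <server_ip>" on a blade in the other.
import socket
import sys

PORT = 50007        # arbitrary test port (placeholder)
TIMEOUT_S = 2.0     # how long the client waits for each echo

def server() -> None:
    # Echo every datagram straight back to the sender.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", PORT))
        print(f"listening on UDP/{PORT}")
        while True:
            data, addr = sock.recvfrom(1024)
            sock.sendto(data, addr)

def client(server_ip: str, count: int = 10) -> None:
    # Send numbered probes and report how many echoes come back.
    ok = 0
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(TIMEOUT_S)
        for i in range(count):
            sock.sendto(f"probe-{i}".encode(), (server_ip, PORT))
            try:
                sock.recvfrom(1024)
                ok += 1
            except socket.timeout:
                print(f"probe {i}: no reply within {TIMEOUT_S}s")
    print(f"{ok}/{count} probes echoed back")

if __name__ == "__main__":
    if sys.argv[1:] == ["server"]:
        server()
    elif len(sys.argv) == 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: vnet_check.py server | client <server_ip>")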

 

Input from Dan:

At the last place I worked, we just kept the different cluster nodes in different enclosures and used 1Gb pass-thru modules with 1Gb mezzanine cards for those servers (Cisco 3120X switches in Bays 1 and 2).

Most machines did not need that layer, so pass-thru plus a 1Gb mezz card was the cheapest option.

 

As Chad said, with Flex-10 you can do this virtually with a single FlexNIC.

Just create a network, assign it to an uplink port, put a 1Gb RJ-45 SFP into that uplink port, wire it directly to the other enclosure, and set up the other enclosure exactly the same way.

In essence you are creating a crossover connection, like he probably already has on his Dells, but with VC in the middle.

 

When you go back-to-back like this, you want redundant paths, and you should set the failover mode from Auto back to Failover (manual). LACP won't take care of all of this for you, or so I have been told by people smarter than me.
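
Since both enclosures are supposed to be mirror images of each other, it can help to write the intended settings down as plain data and compare them before cabling. The sketch below is only an illustration of that idea in Python; the field names and values are my own shorthand rather than Virtual Connect Manager terminology, and the "manual failover on both sides" check simply encodes the advice above.

# enclosure_symmetry.py - compare the planned back-to-back network settings.
# The dictionaries are hand-written descriptions, not anything pulled from VC.
ENCLOSURE_A = {"vnet": "hb-cross", "uplink_port": "X5", "failover": "manual"}
ENCLOSURE_B = {"vnet": "hb-cross", "uplink_port": "X5", "failover": "manual"}

def check_symmetry(a: dict, b: dict) -> list:
    # Return a list of mismatches between the two enclosure definitions.
    problems = []
    if a["vnet"] != b["vnet"]:
        problems.append(f"vNet names differ: {a['vnet']} vs {b['vnet']}")
    if a["failover"] != "manual" or b["failover"] != "manual":
        problems.append("failover mode should be manual on both sides")
    if not a["uplink_port"] or not b["uplink_port"]:
        problems.append("each side needs an uplink port assigned")
    return problems

if __name__ == "__main__":
    issues = check_symmetry(ENCLOSURE_A, ENCLOSURE_B)
    print("looks consistent" if not issues else "\n".join(issues))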

 

And from Cullen:

I’ve been involved with a project that has been supplying clustered servers on blades to multiple customers for the last 5 years.  We currently use Virtual Connect Flex-10 and route the heartbeat and data traffic through two different VLANs.  Contrary to much “best practice”, we don’t directly wire the heartbeat because:

- We wanted the flexibility to be able to run different servers in different enclosures
- We wanted to eventually use more than 2 nodes per cluster
- We needed to be able to do stretch clusters across 2 data centers
- We didn't want to spend the money for a separate network infrastructure for the heartbeat

 

In theory, a sufficiently severe network outage could take out the entire cluster. However, we have only seen this happen once, on Veritas Cluster Server, and we believe it was solved by increasing the heartbeat timeout.
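
If you do share infrastructure and lean on timeouts, it helps to have real numbers for the heartbeat VLAN before touching any cluster setting. The sketch below just times TCP connection setup to the other node a few times and prints the median and worst case; the peer address and port are placeholders, and nothing in it is specific to Veritas or Microsoft clustering.

# hb_latency.py - rough round-trip sampling on the heartbeat VLAN.
import socket
import statistics
import time

PEER = "10.10.10.2"   # heartbeat-VLAN address of the other node (placeholder)
PORT = 445            # any TCP port known to be listening on the peer (placeholder)
SAMPLES = 20

def sample_connect_times(host: str, port: int, n: int) -> list:
    # Time n TCP connection setups in milliseconds; failed attempts are skipped.
    times = []
    for _ in range(n):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=2.0):
                times.append((time.monotonic() - start) * 1000.0)
        except OSError:
            print("connection attempt failed")
        time.sleep(0.5)
    return times

if __name__ == "__main__":
    rtts = sample_connect_times(PEER, PORT, SAMPLES)
    if rtts:
        print(f"samples={len(rtts)} median={statistics.median(rtts):.1f}ms "
              f"max={max(rtts):.1f}ms")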

 

I make no particular recommendation; it depends on their risk tolerance and their cost tolerance.

 

*****************

 

Any other suggestions or comments?

chuckk281
Trusted Contributor

Re: Operating System-Based (OS) clustering on blades

Some additional thoughts on the subject:

 

From Chris J.:

For the ultimate in resiliency, I would use separate, non-stacked enclosures in separate racks.

 

The issue with stacked enclosures is the firmware upgrade process. No matter what the manuals say, it is a single point of failure, and I believe this is a big risk.

 

So your intercluster traffic would need to go from VC out to external switches and then back through VC to the node in the other enclosure. But this is fine, because the switches and links can be redundant.
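
If you rely on redundant switches and links, it is also worth proving that each path can carry the heartbeat on its own, not just that the pair works together. The sketch below assumes each redundant heartbeat NIC sits on its own subnet with a matching peer address, so binding the source address forces traffic down one path at a time; all addresses and the port are placeholders for whatever the real design uses.

# path_check.py - confirm the peer is reachable via each redundant local NIC.
import socket

# (local NIC address, peer address on the same path) - placeholders
PATHS = [("10.10.10.11", "10.10.10.12"),
         ("10.10.20.11", "10.10.20.12")]
PORT = 445   # any TCP port known to be listening on the peer (placeholder)

def reachable_via(source_ip: str, peer_ip: str, port: int) -> bool:
    # Open a TCP connection with the source bound to one specific local NIC.
    try:
        with socket.create_connection((peer_ip, port), timeout=2.0,
                                      source_address=(source_ip, 0)):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for local_ip, peer_ip in PATHS:
        state = "ok" if reachable_via(local_ip, peer_ip, PORT) else "FAILED"
        print(f"path {local_ip} -> {peer_ip}: {state}")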

 

Once you have this configuration, it should have resiliency equal to a cluster built out of rack-mounted servers.

 

*****************

 

And from Chris L.:

“No matter what the manuals say, it is a single point of failure, and I believe this is a big risk.”

 

The same risk exists in the upstream switching environment. Granted, VC is quite a bit more visible, but if someone were to perform a firmware upgrade of the upstream switches (which are typically in a logical cluster: vPC, IRF, etc.) and the firmware had a bug, that should also count as a “big risk.” VC firmware updates have dramatically improved since the 2.3x and 3.1x days. If a customer wants to reduce their failure domain, the recommendation would be to split a multi-enclosure stack (MES) into separate, individual VC Domains. But that will also increase management overhead (multiple VCM instances, which can be mitigated by the use of VCEM).

 

***************