Operating System - HP-UX

KPS
Super Advisor

Two Data Center SG Cluster

Hi,

We already have some SG clusters in place, but they are single-site clusters using shared SAN disk. Would anyone be able to weigh in with testimonials or knowledge on the advantages or disadvantages of setting up a 2-node ServiceGuard cluster with one node at each of two physical sites? We have found some documentation and know this is supposed to help with fault tolerance, but we really need to know what to look out for, especially anything performance-related.

Thanks in advance,

KPS
11 REPLIES
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: Two Data Center SG Cluster

It's rather difficult to comment when you don't point out distances and data traffic load estimates. You need to decide among a Campus Cluster, a Metro Cluster, or even a Continental Cluster.

This topic is discussed rather well in "Clusters for High Availability" by Peter S. Weygant. You should have received a copy with your SG documentation.

One of the biggest decisions you will have to face is budgetary. High speed data link expenses can easily swamp the cost of equipment and software in short order.

The other thing you need to address is your environment. Until you have redundant HVAC, backup generators to augment your UPSes, and robust, redundant networks and storage, you don't really even need to worry about SG. You buy SG so that you will never need it. It imposes a level of discipline on your organization such that SG itself seldom comes into play --- other than for planned outages and upgrades.
If it ain't broke, I can fix that.
KPS
Super Advisor

Re: Two Data Center SG Cluster

The distance between the 2 data centers will be approximately 28 miles. As for the data load across the network, we are unsure about that at this time.

Thanks for the reply on the points you were able to speak to.

KPS
A. Clay Stephenson
Acclaimed Contributor

Re: Two Data Center SG Cluster

OK, that's a start, and the next piece of the puzzle is probably latency. You might begin to think of placing 2 nodes with storage at location A and 2 nodes with storage at location B. If latency is a little less critical, then lower bandwidth will suffice --- i.e., can you tolerate the loss of one site if the surviving site's data is current only up to some reasonable point in the past? You should also note that it would be perfectly reasonable to configure the 2 nodes at one site very asymmetrically, i.e., the normal fast box plus a much cheaper and slower limp-along box --- this is a very reasonable SG approach for some applications.
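
For illustration, that asymmetric layout maps directly onto a legacy-style Serviceguard package ASCII file: list the fast box first and let the package fail back to it automatically once it rejoins the cluster. This is only a sketch; the package name, node names, and script path below are hypothetical:

  PACKAGE_NAME      app_pkg
  NODE_NAME         fastbox           # primary: the normal fast box
  NODE_NAME         limpalong         # adoptive: the cheaper, slower box
  FAILOVER_POLICY   CONFIGURED_NODE   # fail over in NODE_NAME order
  FAILBACK_POLICY   AUTOMATIC         # move back to fastbox when it returns
  RUN_SCRIPT        /etc/cmcluster/app_pkg/app_pkg.cntl
  HALT_SCRIPT       /etc/cmcluster/app_pkg/app_pkg.cntl

You would validate and apply it with cmcheckconf -P and cmapplyconf -P as usual.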



If it ain't broke, I can fix that.
KPS
Super Advisor

Re: Two Data Center SG Cluster

Latency is something I don't think we can afford to sacrifice, and that seems to be the concern I keep coming back to with a setup like this. It looks like with a 2-site cluster you also have to use MirrorDisk/UX or some other data mirroring/replication product to keep the data at both sites current at all times, since you no longer have the shared-storage option? Is this required, and is it the only way?

I would think there would be some overhead from that constantly happening across the network on both nodes of the cluster?

Comments, suggestions??

Thanks,
KPS
A. Clay Stephenson
Acclaimed Contributor

Re: Two Data Center SG Cluster

OK, you are really just beyond the range of a Metro Cluster --- and fundamentally the difference between Campus Clusters and Metro Clusters is the data replication technology. A Campus Cluster relies upon Fibre Channel and MirrorDisk/UX, while a Metro Cluster relies upon EMC SRDF technology or HP's ESCON-based technology. In either case, the data replication for a Metro Cluster is handled behind the scenes from the point of view of the OS.
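
As a rough sketch of what the Campus Cluster side looks like from the host: MirrorDisk/UX is just LVM mirroring, and you keep the two copies on opposite sites by defining one physical volume group (PVG) per site in /etc/lvmpvg and creating the logical volume PVG-strict. The volume group name and device paths here are hypothetical:

  # /etc/lvmpvg -- one PVG per site (hypothetical device paths)
  VG /dev/vgapp
  PVG site_a
  /dev/dsk/c4t0d0
  PVG site_b
  /dev/dsk/c6t0d0

  # Mirrored, PVG-strict LV: -m 1 adds one mirror copy (needs MirrorDisk/UX),
  # -s g forces each copy onto a different PVG, i.e. a different site
  lvcreate -m 1 -s g -L 1024 -n lvapp /dev/vgapp

Every write then has to complete at both sites, which is exactly where the latency question comes in.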

You aren't going to believe this, but it will probably be much cheaper to locate a data center within Campus Cluster range (~6 miles), even if you have to build it from scratch, than to pay the recurring cost of the kind of high-speed network link you seem to be implying between data centers at greater distances.
The only downside to a Campus Cluster location is the increased probability that a single event (e.g., a large earthquake) could take down both locations.

When you want high availability coupled with low latency at a distance, be prepared to spend large amounts of money, both initially and in ongoing network expenses.
If it ain't broke, I can fix that.
Thomas J. Harrold
Trusted Contributor

Re: Two Data Center SG Cluster

HP should be testing some longer-distance cluster solutions with the Veritas (now Symantec) product set. Currently, if you go beyond 10 km, your options are limited. It's not really a technical limitation, but more of a support/testing one. Veritas wouldn't certify beyond 10 km, and HP won't certify more than 2 nodes using LVM.

I'm hoping we have a supported 50 km solution within the next few years. I like the simplicity of using MirrorDisk/UX to keep data in sync, but you can also consider other options, such as Oracle Data Guard, or hardware solutions such as HP's CA (Continuous Access) or EMC's SRDF.


-tjh
I learn something new everyday. (usually because I break something new everyday)
Jan van den Ende
Honored Contributor

Re: Two Data Center SG Cluster

KPS,

may I ask a question one step earlier in the decision-making?
Are you really (e.g., by application software) tied to HP-UX, or is there (relative) freedom of choice here?
If the first is the case, stop reading here.

But if you HAVE some freedom, you might consider a VMS solution. It is also from HP, and it also runs on IA64.
It offers DR configs for 2 or 3 locations, up to 1000 miles round-trip apart (literally out of the box; it _IS_ the same software that runs the entry-level systems). And in VMS-speak, DR does not mean Disaster Recovery, it means Disaster Resilience (ask the banks that had part or all of their computer rooms in one or both of the Twin Towers).

You also ask about latency.
Of course, that reads as EXTRA latency, added by 28 miles, or ~45 km.
Expect no 5-decimal accuracy here, but as a first approximation: assuming glass connections (optical density ~1.5), the signal travels at ~200,000 km/sec, so 45 km costs ~0.225 ms one way, or ~0.45 ms per round trip. Without real special trickery, a normal IO requires 4 consecutive round trips to complete, so 28 miles adds roughly 1.8 ms (call it 2 ms) to your latency. That is measurable, perhaps noticeable, but still rather less than what most other components of IO time contribute.
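
A quick sanity check of that arithmetic (a sketch only; it assumes 45 km of straight-line fiber, and real cable routes run longer):

  # Added latency over ~45 km of fiber at ~200,000 km/s signal speed
  echo "scale=3; 2 * 45 * 1000 / 200000" | bc      # one round trip: .450 ms
  echo "scale=3; 4 * 2 * 45 * 1000 / 200000" | bc  # 4 round trips per IO: 1.800 ms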

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
KPS
Super Advisor

Re: Two Data Center SG Cluster

Thanks for the great responses to this.

We do not have freedom of choice with the OS type that we plan to run, due to some application requirements.

Our plan is to run 2 rx8640s (IA-64) on HP-UX 11.23. That has been determined, and we can't back out of that decision.

Thanks again,
-KPS
Thomas J. Harrold
Trusted Contributor

Re: Two Data Center SG Cluster

What are your other requirements? Do you NEED up-to-the-transaction replication at both sites, or could you get by with hourly replication?

Despite the fact that it is not officially supported, I believe MirrorDisk/UX could handle distances of more than 28 miles if you have a good enough data pipe between the sites.

If an HP-supported solution is the requirement, then look to Oracle Data Guard, or a hardware mirroring solution such as CA or SRDF.

-tjh
I learn something new everyday. (usually because I break something new everyday)
KPS
Super Advisor

Re: Two Data Center SG Cluster

We do need continuous data replication, and we do have the option to use EMC SRDF. Hourly replication just wouldn't cut it, unfortunately.

Thanks for the recommendations, everyone. I think we have now gathered enough info to make a decision here and go with the Metro Cluster.

Thanks to All!!!

KPS