<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Two Data Center SG Cluster in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041474#M693243</link>
    <description>HP should be testing some longer-distance cluster solutions with the Veritas (now Symantec) product set. Currently, if you go beyond 10km, your options are limited. It's not really a technical reason, but more of a support/test reason: Veritas wouldn't certify beyond 10km, and HP won't certify more than 2 nodes using LVM.&lt;BR /&gt;&lt;BR /&gt;I'm hoping that we have a supported 50km solution within the next few years. I like the simplicity of using MirrorDisk/UX to keep data in sync, but you can also consider other options, such as Oracle Data Guard, or hardware solutions, such as HP's CA (Continuous Access) or EMC's SRDF.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;-tjh</description>
    <pubDate>Sat, 21 Apr 2007 15:03:47 GMT</pubDate>
    <dc:creator>Thomas J. Harrold</dc:creator>
    <dc:date>2007-04-21T15:03:47Z</dc:date>
    <item>
      <title>Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041468#M693237</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;We have some SG clusters in existence already, but they are same-site clusters using shared SAN disk at those sites. Would anyone be able to weigh in with testimonials or knowledge on the advantages or disadvantages of setting up a 2-node ServiceGuard cluster with the nodes at 2 physical sites? We have found some documentation and know this is supposed to help with fault tolerance, but we really need to know about things to look out for, especially if they are performance-related.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance,&lt;BR /&gt;&lt;BR /&gt;KPS</description>
      <pubDate>Thu, 19 Apr 2007 17:00:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041468#M693237</guid>
      <dc:creator>KPS</dc:creator>
      <dc:date>2007-04-19T17:00:52Z</dc:date>
    </item>
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041469#M693238</link>
      <description>It's rather difficult to comment when you don't point out distances and data-traffic load estimates. You need to decide among a Campus Cluster, a Metro Cluster, or even a Continental Cluster.&lt;BR /&gt;&lt;BR /&gt;This topic is discussed rather well in "Clusters for High Availability" by Peter S. Weygant. You should have received a copy with your SG documentation.&lt;BR /&gt;&lt;BR /&gt;One of the biggest decisions you will face is budgetary. High-speed data link expenses can easily swamp the cost of equipment and software in short order.&lt;BR /&gt;&lt;BR /&gt;The other thing you need to address is your environment. Until you have redundant HVAC, backup generators to augment your UPSes, and robust, redundant networks and storage, you really don't even need to worry about SG. You buy SG so that you will never need it. It imposes a level of discipline on your organization such that SG itself seldom comes into play --- other than for planned outages and upgrades.</description>
      <pubDate>Thu, 19 Apr 2007 17:13:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041469#M693238</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2007-04-19T17:13:42Z</dc:date>
    </item>
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041470#M693239</link>
      <description>The distance between the 2 data centers will be approximately 28 miles. As for data load across the network, we are unsure about that at this time.&lt;BR /&gt;&lt;BR /&gt;Thanks for the reply on what you were able to speak to.&lt;BR /&gt;&lt;BR /&gt;KPS</description>
      <pubDate>Thu, 19 Apr 2007 19:42:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041470#M693239</guid>
      <dc:creator>KPS</dc:creator>
      <dc:date>2007-04-19T19:42:30Z</dc:date>
    </item>
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041471#M693240</link>
      <description>OK, that's a start, and probably the next piece of the puzzle is latency. You might begin to think of placing 2 nodes with storage at location A and 2 nodes with storage at location B. If latency is a little less critical, then lower bandwidth will suffice --- i.e., can you tolerate the loss of one site if the other site is current up to some reasonable time in the past? You should also note that it would be perfectly reasonable to configure the 2 nodes at one site very asymmetrically, i.e., a normal fast box and a much cheaper, slower limp-along box --- this is a very reasonable SG approach for some applications.</description>
      <pubDate>Thu, 19 Apr 2007 20:11:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041471#M693240</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2007-04-19T20:11:56Z</dc:date>
    </item>
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041472#M693241</link>
      <description>Latency is something I don't think we can sacrifice, and that is the concern I keep having with a setup like this. It looks like with a 2-site cluster you also have to use MirrorDisk/UX or some kind of data mirroring/replication to keep the data at both sites current at all times, since you will no longer have the shared-storage option? Is this required, and is it the only way?&lt;BR /&gt;&lt;BR /&gt;I would think there would be some overhead with that constantly happening across the network on both nodes of the cluster?&lt;BR /&gt;&lt;BR /&gt;Comments, suggestions?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;KPS</description>
      <pubDate>Fri, 20 Apr 2007 11:47:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041472#M693241</guid>
      <dc:creator>KPS</dc:creator>
      <dc:date>2007-04-20T11:47:09Z</dc:date>
    </item>
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041473#M693242</link>
      <description>OK, you are really just beyond the range of a Metro Cluster --- and fundamentally the difference between a Campus Cluster and a Metro Cluster is the data replication technology. A Campus Cluster relies upon Fibre Channel and MirrorDisk/UX, while a Metro Cluster relies upon EMC SRDF technology or HP's ESCON technology. In either case, the data replication for a Metro Cluster is handled behind the scenes from the point of view of the OS.&lt;BR /&gt;&lt;BR /&gt;You aren't going to believe this, but it will probably be much cheaper to locate a data center within Campus Cluster range (~6 miles), even if you have to build it from scratch, than it will be to run the kind of high-speed network you seem to be implying to link your data centers at greater distances, for any length of time.&lt;BR /&gt;The only downside to a Campus Cluster location is the increased probability that a single event (e.g. a large earthquake) could take down both locations.&lt;BR /&gt;&lt;BR /&gt;When you want high availability coupled with low latency at a distance, be prepared to spend large amounts of money --- both initially and as ongoing network expenses.&lt;BR /&gt;</description>
      <pubDate>Fri, 20 Apr 2007 12:16:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041473#M693242</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2007-04-20T12:16:46Z</dc:date>
    </item>
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041474#M693243</link>
      <description>HP should be testing some longer-distance cluster solutions with the Veritas (now Symantec) product set. Currently, if you go beyond 10km, your options are limited. It's not really a technical reason, but more of a support/test reason: Veritas wouldn't certify beyond 10km, and HP won't certify more than 2 nodes using LVM.&lt;BR /&gt;&lt;BR /&gt;I'm hoping that we have a supported 50km solution within the next few years. I like the simplicity of using MirrorDisk/UX to keep data in sync, but you can also consider other options, such as Oracle Data Guard, or hardware solutions, such as HP's CA (Continuous Access) or EMC's SRDF.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;-tjh</description>
      <pubDate>Sat, 21 Apr 2007 15:03:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041474#M693243</guid>
      <dc:creator>Thomas J. Harrold</dc:creator>
      <dc:date>2007-04-21T15:03:47Z</dc:date>
    </item>
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041475#M693244</link>
      <description>KPS,&lt;BR /&gt;&lt;BR /&gt;may I ask a question one step earlier in the decision-making?&lt;BR /&gt;Are you really (e.g., by application software) tied to HP-UX, or is there (relative) freedom of choice here?&lt;BR /&gt;If the first is the case, stop reading here.&lt;BR /&gt;&lt;BR /&gt;But if you HAVE some freedom, you might consider a VMS solution. Also by HP, and it also runs on IA64.&lt;BR /&gt;It offers DR configs for 2 or 3 locations, up to 1000 miles round-trip apart (literally out of the box; it _IS_ the same software that runs the entry-level systems). And in VMS-speak DR does not mean Disaster Recovery, it means Disaster Resilience (ask those banks that had (part of) their computer room in one or both of the Twin Towers).&lt;BR /&gt;&lt;BR /&gt;You also ask about latency.&lt;BR /&gt;Of course, that reads as EXTRA latency, added by 28 miles or ~45 km.&lt;BR /&gt;Expect no 5-decimal accuracy here, but as a first approximation: assuming glass connections (refractive index ~1.5), the speed of the signal is ~200,000 km/sec, so a single trip over 45 km takes ~0.22 milliseconds. Without real special trickery, a normal IO requires 4 consecutive round trips to complete, so 28 miles adds roughly 1.8 milliseconds to your latency. Which is measurable, perhaps noticeable, but still rather less than most other components of IO time contribute.&lt;BR /&gt;&lt;BR /&gt;hth&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Sun, 22 Apr 2007 03:52:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041475#M693244</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2007-04-22T03:52:17Z</dc:date>
    </item>
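    <!--
    The back-of-the-envelope latency arithmetic in the reply above can be sketched as a
    short calculation. This is a minimal sketch using the post's own figures as
    assumptions (signal speed of ~200,000 km/s in glass, 4 consecutive round trips per
    IO); the function name is hypothetical.

```python
# Extra IO latency added by an inter-site fiber link, per the rough model above:
# one-way time = distance / signal speed; each IO crosses the link and back,
# and a "normal IO" takes 4 consecutive round trips.
def added_io_latency_ms(distance_km, round_trips=4, signal_km_per_s=200_000):
    one_way_ms = distance_km / signal_km_per_s * 1000.0
    return 2 * one_way_ms * round_trips

# 28 miles is roughly 45 km, so the link adds about 1.8 ms per IO.
print(round(added_io_latency_ms(45), 2))
```
    -->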
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041476#M693245</link>
      <description>Thanks for the great responses to this.&lt;BR /&gt;&lt;BR /&gt;We do not have freedom of choice with the OS type we plan to run, due to some application requirements.&lt;BR /&gt;&lt;BR /&gt;Our plan is to run 2 rx8640s (IA-64) on HP-UX 11.23. That has been determined, and we can't back out of that decision.&lt;BR /&gt;&lt;BR /&gt;Thanks again,&lt;BR /&gt;-KPS</description>
      <pubDate>Mon, 23 Apr 2007 07:18:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041476#M693245</guid>
      <dc:creator>KPS</dc:creator>
      <dc:date>2007-04-23T07:18:21Z</dc:date>
    </item>
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041477#M693246</link>
      <description>What are your other requirements? Do you NEED up-to-the-transaction replication at both sites, or could you get by with hourly replication?&lt;BR /&gt;&lt;BR /&gt;Despite the fact that it is not technically supported, I believe that MirrorDisk/UX could handle distances of more than 28 miles if you have a good enough data pipe between the sites.&lt;BR /&gt;&lt;BR /&gt;If an HP-supported solution is the requirement, then look to Oracle Data Guard, or a hardware mirroring solution such as CA or SRDF.&lt;BR /&gt;&lt;BR /&gt;-tjh</description>
      <pubDate>Mon, 23 Apr 2007 08:02:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041477#M693246</guid>
      <dc:creator>Thomas J. Harrold</dc:creator>
      <dc:date>2007-04-23T08:02:06Z</dc:date>
    </item>
    <item>
      <title>Re: Two Data Center SG Cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041478#M693247</link>
      <description>We do need continuous data replication, and we do have the option to use EMC SRDF; hourly replication just wouldn't cut it, unfortunately.&lt;BR /&gt;&lt;BR /&gt;Thanks for the recommendations, everyone. I think we have now gathered enough info to make a decision here on going with the Metro Cluster.&lt;BR /&gt;&lt;BR /&gt;Thanks to all!&lt;BR /&gt;&lt;BR /&gt;KPS</description>
      <pubDate>Mon, 23 Apr 2007 08:07:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-data-center-sg-cluster/m-p/5041478#M693247</guid>
      <dc:creator>KPS</dc:creator>
      <dc:date>2007-04-23T08:07:45Z</dc:date>
    </item>
  </channel>
</rss>

