<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: OpenVMS Cluster 7.3-2 in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448191#M95291</link>
    <description>Rob,&lt;BR /&gt;&lt;BR /&gt;Do be careful about your quorum. In a two-node cluster without a dual-host-accessible storage unit, the quorum disk will be directly connected to one machine, with no potential alternate connection.&lt;BR /&gt;&lt;BR /&gt;If the machine without a direct connection to the quorum disk fails, the other system will remain up. If the machine with the cluster quorum disk fails, the other node will fail as well.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
    <pubDate>Fri, 26 Jun 2009 18:52:13 GMT</pubDate>
    <dc:creator>Robert Gezelter</dc:creator>
    <dc:date>2009-06-26T18:52:13Z</dc:date>
    <item>
      <title>OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448183#M95283</link>
      <description>I have two DS15s with two SN-KZPCA-AA controllers in each, and two storage shelves.  Can I share the shelves between the two systems and create a cluster, or are those controllers not supported for that?</description>
      <pubDate>Fri, 26 Jun 2009 17:09:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448183#M95283</guid>
      <dc:creator>Robert Brothers</dc:creator>
      <dc:date>2009-06-26T17:09:58Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448184#M95284</link>
      <description>Rob,&lt;BR /&gt;&lt;BR /&gt;Cluster communications would actually go over the Ethernet connection between the two systems.&lt;BR /&gt;&lt;BR /&gt;At a minimum, the disks would be visible as served volumes. What are the KZPCAs connected to (precisely)?&lt;BR /&gt;&lt;BR /&gt;What are the actual storage shelves?&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Fri, 26 Jun 2009 17:50:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448184#M95284</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-06-26T17:50:19Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448185#M95285</link>
      <description>You can cluster these systems via Ethernet with full support, though, per the SPD (and likely the intent of your question), you cannot use KZPCA-series SCSI controllers in multi-host, multi-initiator shared-bus SCSI configurations.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/12700/SPDClusters.pdf" target="_blank"&gt;http://docs.hp.com/en/12700/SPDClusters.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;And if you're not already aware of this detail, no cluster communications occur between hosts over a (shared) SCSI bus; SCSI is a cluster storage bus, not a cluster communications bus.  Even with multi-host SCSI configurations, you must have a cluster communications bus.  An Ethernet network can (and often does) fulfill that clustering communications requirement.</description>
      <pubDate>Fri, 26 Jun 2009 17:57:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448185#M95285</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-06-26T17:57:00Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448186#M95286</link>
      <description>DS-SL13R-AA are the storage shelves.  I know I could use some Y cables and SCSI cables to connect everything together so both systems see it.&lt;BR /&gt;</description>
      <pubDate>Fri, 26 Jun 2009 17:57:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448186#M95286</guid>
      <dc:creator>Robert Brothers</dc:creator>
      <dc:date>2009-06-26T17:57:49Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448187#M95287</link>
      <description>I do have the two on-board 10/100 NICs, and I have the option to use CCMAA-BA's for cluster communication. I was curious whether I could share the SCSI bus so both systems would see the drives.</description>
      <pubDate>Fri, 26 Jun 2009 17:59:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448187#M95287</guid>
      <dc:creator>Robert Brothers</dc:creator>
      <dc:date>2009-06-26T17:59:58Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448188#M95288</link>
      <description>Rob,&lt;BR /&gt;&lt;BR /&gt;I concur with Hoff; the SPD does not mention that the KZPCA can be used in a multi-host configuration.&lt;BR /&gt;&lt;BR /&gt;Read the SPD that Hoff referenced in detail.&lt;BR /&gt;&lt;BR /&gt;On the other hand, swapping the controllers is not a particularly complex project.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Fri, 26 Jun 2009 18:09:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448188#M95288</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-06-26T18:09:28Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448189#M95289</link>
      <description>You can access all disk drives across both AlphaServer DS15 series OpenVMS Alpha V7.3-2 systems if and when you have clustering enabled; there is no need for multi-host shared-bus SCSI to have shared disk access.&lt;BR /&gt;&lt;BR /&gt;And you can't have a shared-bus configuration here.&lt;BR /&gt;&lt;BR /&gt;If I were going to pursue this, I'd get GbE NICs and a pair of supported multi-host SCSI controllers.&lt;BR /&gt;&lt;BR /&gt;Memory Channel?  It works, and works well for certain cluster communications loads but not others.  Not my choice of interconnect though, save for specific loads.  (Do ensure you have current ECOs here, particularly if you head toward MC.)  GbE does very well against MC, too.&lt;BR /&gt;&lt;BR /&gt;Verrell Boaen had some detailed (CPU, latency, and performance) comparisons of these interconnects over the years, but I don't have a copy handy.  Somebody at HP may well have a copy stashed away.&lt;BR /&gt;&lt;BR /&gt;That you even have Memory Channel widgets around implies you have a fairly extensive and well-stocked spare parts bin, which is where I'd look for a multi-host SCSI controller and GbE NICs, if you don't already have them.&lt;BR /&gt;</description>
      <pubDate>Fri, 26 Jun 2009 18:42:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448189#M95289</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-06-26T18:42:50Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448190#M95290</link>
      <description>Great, guys, you gave me the info I need.  I will grab a couple of GigE cards, call it a day, and leave the SCSI single-host.</description>
      <pubDate>Fri, 26 Jun 2009 18:45:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448190#M95290</guid>
      <dc:creator>Robert Brothers</dc:creator>
      <dc:date>2009-06-26T18:45:56Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448191#M95291</link>
      <description>Rob,&lt;BR /&gt;&lt;BR /&gt;Do be careful about your quorum. In a two-node cluster without a dual-host-accessible storage unit, the quorum disk will be directly connected to one machine, with no potential alternate connection.&lt;BR /&gt;&lt;BR /&gt;If the machine without a direct connection to the quorum disk fails, the other system will remain up. If the machine with the cluster quorum disk fails, the other node will fail as well.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Fri, 26 Jun 2009 18:52:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448191#M95291</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-06-26T18:52:13Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448192#M95292</link>
      <description>Locating a quorum disk on a non-shared and non-multi-host bus seems an odd choice.  &lt;BR /&gt;&lt;BR /&gt;If the OpenVMS host box for a single-path quorum disk is down, then the quorum disk is (also) down.&lt;BR /&gt;&lt;BR /&gt;To contribute votes, a quorum disk is best configured with non-served direct paths from two or more hosts.&lt;BR /&gt;&lt;BR /&gt;Here's a low-end cluster tutorial:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://labs.hoffmanlabs.com/node/569" target="_blank"&gt;http://labs.hoffmanlabs.com/node/569&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;And as Bob mentions, you do have some choices about votes and quorum and other such details to make here.&lt;BR /&gt;</description>
      <pubDate>Fri, 26 Jun 2009 19:13:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448192#M95292</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-06-26T19:13:38Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS Cluster 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448193#M95293</link>
      <description>First, what will make it a cluster over the Ethernet: the system parameter NISCS_LOAD_PEA0 must be set to 1.&lt;BR /&gt;&lt;BR /&gt;With no shared disks, there is absolutely no point in a quorum disk.  If you can get a workstation on the network, it could cast the deciding vote.&lt;BR /&gt;&lt;BR /&gt;Otherwise, decide on the most important system, give it one vote with EXPECTED_VOTES set to 1; that system must be up.  The other node would get no votes.&lt;BR /&gt;&lt;BR /&gt;If you add a workstation, each node could get one vote, and any two votes would keep the cluster happy.  But since you are not sharing any data, it is probably best to simplify things: the node that has the critical data should be the only voting member.&lt;BR /&gt;&lt;BR /&gt;Also, a node knows which cluster it is part of via CLUSTER_AUTHORIZE.DAT.  That file holds a cluster group number, WHICH MUST BE UNIQUE IN YOUR NETWORK, and a password; it is what tells a node which cluster it belongs to and allows multiple clusters on the same network.&lt;BR /&gt;&lt;BR /&gt;You can copy CLUSTER_AUTHORIZE.DAT over to the other system if you don't remember the password.&lt;BR /&gt;&lt;BR /&gt;If another cluster in your network has the same group number, you'll get thousands of network errors.&lt;BR /&gt;&lt;BR /&gt;The cluster group number and password are initially created when you run CLUSTER_CONFIG.COM, or via MCR SYSMAN:&lt;BR /&gt;   HELP CONFIGURATION SET CLUSTER_AUTHORIZATION&lt;BR /&gt;for the syntax.&lt;BR /&gt;&lt;BR /&gt;Finally, all disks in the cluster, even those seen by only one system, must have unique volume labels.&lt;BR /&gt;&lt;BR /&gt;Now, you can serve the disks on one node over the network to the other node in the cluster.  Obviously that's slow.&lt;BR /&gt;&lt;BR /&gt;You mentioned two Ethernet controllers.  It would be great to connect the two nodes directly through the second controller, even with simply a turnaround (crossover) cable, and then give that path a higher priority than the other controller.&lt;BR /&gt;&lt;BR /&gt;What advantage does clustering these nodes give you?&lt;BR /&gt;&lt;BR /&gt;Bob Comarow</description>
      <pubDate>Sat, 27 Jun 2009 01:18:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-7-3-2/m-p/4448193#M95293</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2009-06-27T01:18:31Z</dc:date>
    </item>
  </channel>
</rss>

