
OpenVMS Cluster 7.3-2

 
Robert Brothers
Frequent Advisor

OpenVMS Cluster 7.3-2

I have 2 DS15's with 2 SN-KZPCA-AA's in each, and 2 storage shelves. Can I share that between the 2 systems and create a cluster or are those controllers not supported to do that?
Robert Gezelter
Honored Contributor

Re: OpenVMS Cluster 7.3-2

Rob,

Cluster communications would actually go over the Ethernet connection between the two systems.

At a minimum, the disks would be visible as served volumes. What are the KZPCAs connected to (precisely)?

What are the actual storage shelves?

- Bob Gezelter, http://www.rlgsc.com
Hoff
Honored Contributor

Re: OpenVMS Cluster 7.3-2

You can cluster these systems via Ethernet with full support. However, per the SPD (and this is likely the intent of your question), you cannot use KZPCA-series SCSI controllers in multi-host, multi-initiator shared-bus SCSI configurations.

http://docs.hp.com/en/12700/SPDClusters.pdf

And if you're not already aware of this detail, no cluster communications occur between hosts via (shared) SCSI bus; SCSI is a cluster storage bus and not a cluster communications bus. Even with multi-host SCSI configurations, you must have a cluster communications bus. An Ethernet network can (and often does) fulfill that clustering communications requirement.
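As an aside (not from this thread), enabling cluster communications over the LAN on OpenVMS typically comes down to a few SYSGEN parameters set through MODPARAMS.DAT and AUTOGEN. The values below are a minimal illustrative sketch for a two-node, one-vote-each configuration, not a site-ready recipe:

```
! SYS$SYSTEM:MODPARAMS.DAT fragment (illustrative; adjust for your site)
VAXCLUSTER = 2        ! always form/join a cluster at boot
NISCS_LOAD_PEA0 = 1   ! load PEDRIVER: cluster communications over the LAN
EXPECTED_VOTES = 2    ! total votes configured across the cluster
VOTES = 1             ! this node contributes one vote
```

In practice you would run CLUSTER_CONFIG.COM (or CLUSTER_CONFIG_LAN.COM) to set these up, then apply them with AUTOGEN, e.g. @SYS$UPDATE:AUTOGEN GETDATA REBOOT.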
Robert Brothers
Frequent Advisor

Re: OpenVMS Cluster 7.3-2

DS-SL13R-AA are the storage shelves. I know I could use some Y and SCSI cables and connect everything together so both systems see it.
Robert Brothers
Frequent Advisor

Re: OpenVMS Cluster 7.3-2

I do have the two on-board 10/100 NICs, and I have the option of using CCMAA-BA Memory Channel adapters for cluster communication. I was curious whether I could share the SCSI bus so both systems would see the drives.
Robert Gezelter
Honored Contributor

Re: OpenVMS Cluster 7.3-2

Rob,

I concur with Hoff; the SPD does not list the KZPCA as supported in a multi-host configuration.

Read the SPD that Hoff referenced in detail.

On the other hand, swapping the controllers is not a particularly complex project.

- Bob Gezelter, http://www.rlgsc.com
Hoff
Honored Contributor

Re: OpenVMS Cluster 7.3-2

You can access all disk drives across both AlphaServer DS15 series OpenVMS Alpha V7.3-2 systems if and when you have clustering enabled; there is no need to have multi-host shared-bus SCSI to have shared disk access.
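The served access described here goes through the MSCP server, which presents each node's locally attached disks to the rest of the cluster over the cluster interconnect. A minimal sketch of the SYSGEN parameters involved (values are illustrative assumptions):

```
! SYS$SYSTEM:MODPARAMS.DAT fragment (illustrative)
MSCP_LOAD = 1        ! load the MSCP disk server at boot
MSCP_SERVE_ALL = 2   ! serve locally attached disks to the cluster
```

With this in place on both DS15s, each node sees the other's shelves as served volumes; the tradeoff versus a shared bus is that access to a served disk is lost if its serving node goes down.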

And you can't have a shared-bus configuration here.

If I were going to pursue this, I'd get GbE NICs and a pair of supported multi-host SCSI controllers.

Memory Channel? That works, and works well for certain cluster communications loads and not others. Not my choice of interconnect, though, save for specific loads. (Do ensure you have current ECOs here, particularly if you head toward MC.) GbE does very well against MC, too.

Verrell Boaen had some detailed (CPU and latency and performance) comparisons of these interconnects over the years, but I don't have a copy handy. Somebody at HP may well have a copy stashed away.

That you even have memory channel widgets around implies you have a fairly extensive and well-stocked spare parts bin around. Which is where I'd look for a multi-host SCSI controller and GbE NICs, if you don't already have same.
Robert Brothers
Frequent Advisor

Re: OpenVMS Cluster 7.3-2

Great, guys, you gave me the info I need. I will grab a couple of GigE cards, call it a day, and leave the SCSI single-host.
Robert Gezelter
Honored Contributor

Re: OpenVMS Cluster 7.3-2

Rob,

Do be careful about your quorum. In a two-node cluster without dual-host-accessible storage, the quorum disk will be directly connected to only one machine, with no potential alternate path.

If the machine without a direct connection fails, the other system will remain up. If the machine hosting the quorum disk fails, the other node will lose quorum and hang as well.

- Bob Gezelter, http://www.rlgsc.com
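For context, a quorum disk is designated with a handful of SYSGEN parameters. The fragment below is an illustrative sketch for a two-node cluster with a one-vote quorum disk; the device name is hypothetical:

```
! SYS$SYSTEM:MODPARAMS.DAT fragment (illustrative; device name is hypothetical)
DISK_QUORUM = "$1$DKA100"  ! quorum disk; counts only via a direct (non-served) path
QDSKVOTES = 1              ! votes contributed by the quorum disk
EXPECTED_VOTES = 3         ! two nodes at 1 vote each, plus the quorum disk
```

As Bob notes, if only one node has a direct path to this disk, its vote disappears along with that node, which defeats the purpose.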
Hoff
Honored Contributor

Re: OpenVMS Cluster 7.3-2

Locating a quorum disk on a non-shared and non-multi-host bus seems an odd choice.

If the OpenVMS host box for a single-path quorum disk is down, then the quorum disk is (also) down.

To contribute votes, a quorum disk is best configured with non-served direct paths from two or more hosts.

Here's a low-end cluster tutorial:

http://labs.hoffmanlabs.com/node/569

And as Bob mentions, you do have some choices about votes and quorum and other such details to make here.
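For reference, OpenVMS derives the quorum value from the votes as QUORUM = (EXPECTED_VOTES + 2) / 2, with integer truncation. A worked sketch of why the quorum disk matters in this two-node case:

```
! Quorum arithmetic (illustrative)
! Two nodes, 1 vote each, no quorum disk:
!   EXPECTED_VOTES = 2  ->  QUORUM = (2 + 2) / 2 = 2
!   One node failing leaves 1 vote < 2: the survivor hangs.
! Two nodes plus a 1-vote quorum disk both nodes can reach directly:
!   EXPECTED_VOTES = 3  ->  QUORUM = (3 + 2) / 2 = 2 (integer division)
!   One node plus the quorum disk still holds 2 votes: the survivor continues.
```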