
OpenVMS Cluster 7.3-2

 
Robert Brothers
Frequent Advisor

OpenVMS Cluster 7.3-2

I have 2 DS15's with 2 SN-KZPCA-AA's in each, and 2 storage shelves. Can I share the storage between the 2 systems and create a cluster, or are those controllers not supported for that?
Robert Gezelter
Honored Contributor

Re: OpenVMS Cluster 7.3-2

Rob,

Cluster communications would actually go over the Ethernet connection between the two systems.

At a minimum, the disks would be visible as served volumes. What are the KZPCAs connected to (precisely)?

What are the actual storage shelves?
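
(Regarding the served volumes mentioned above, a rough sketch of what that involves, assuming you let AUTOGEN apply it: the MSCP server is what makes each node's locally attached disks visible to the other node. The usual entries in SYS$SYSTEM:MODPARAMS.DAT on each node would look something like the following; check SYSGEN help for the exact MSCP_SERVE_ALL bit values.)

MSCP_LOAD = 1        ! load the MSCP disk server
MSCP_SERVE_ALL = 2   ! serve the locally connected disks

$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT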

- Bob Gezelter, http://www.rlgsc.com
Hoff
Honored Contributor

Re: OpenVMS Cluster 7.3-2

You can cluster these systems via Ethernet with full support, though (per the SPD, and per the likely intent of your question) you cannot use KZPCA series SCSI controllers for multi-host, multi-initiator shared-bus SCSI configurations.

http://docs.hp.com/en/12700/SPDClusters.pdf

And if you're not already aware of this detail, no cluster communications occur between hosts via (shared) SCSI bus; SCSI is a cluster storage bus and not a cluster communications bus. Even with multi-host SCSI configurations, you must have a cluster communications bus. An Ethernet network can (and often does) fulfill that clustering communications requirement.
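
(Once the cluster is up, a quick way to confirm that SCS traffic really is riding the Ethernet is the SHOW CLUSTER utility; adding the CIRCUITS class shows the PEA0 (LAN) virtual circuits between the members. A minimal sketch:)

$ SHOW CLUSTER/CONTINUOUS
Command> ADD CIRCUITS
Command> EXIT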
Robert Brothers
Frequent Advisor

Re: OpenVMS Cluster 7.3-2

The storage shelves are DS-SL13R-AA. I know I could use some Y-cables and SCSI cables to connect everything together so both systems see the shelves.
Robert Brothers
Frequent Advisor

Re: OpenVMS Cluster 7.3-2

I do have the 2 onboard 10/100 NICs, and I have the option to use CCMAA-BA's for cluster communication. I was curious whether I could share the SCSI bus so both systems would see the drives.
Robert Gezelter
Honored Contributor

Re: OpenVMS Cluster 7.3-2

Rob,

I concur with Hoff; the SPD does not mention that the KZPCA can be used in a multi-host configuration.

Read the SPD that Hoff referenced in detail.

On the other hand, swapping the controllers is not a particularly complex project.

- Bob Gezelter, http://www.rlgsc.com
Hoff
Honored Contributor

Re: OpenVMS Cluster 7.3-2

You can access all disk drives across both AlphaServer DS15 series OpenVMS Alpha V7.3-2 systems if and when you have clustering enabled; there is no need to have multi-host shared-bus SCSI to have shared disk access.

And you can't have a shared-bus configuration here.

If I were going to pursue this, I'd get GbE NICs and a pair of supported multi-host SCSI controllers.

Memory Channel? It works, and it works well for certain cluster communications loads and not for others. Not my choice of interconnect, though, save for specific loads. (Do ensure you have current ECOs here, particularly if you head toward MC.) GbE does very well against MC, too.

Verrell Boaen had some detailed (CPU and latency and performance) comparisons of these interconnects over the years, but I don't have a copy handy. Somebody at HP may well have a copy stashed away.

That you even have memory channel widgets around implies you have a fairly extensive and well-stocked spare parts bin around. Which is where I'd look for a multi-host SCSI controller and GbE NICs, if you don't already have same.
Robert Brothers
Frequent Advisor

Re: OpenVMS Cluster 7.3-2

Great, guys, you gave me the info I need. I will grab a couple of GigE cards and call it a day, and leave the SCSI single-host.
Robert Gezelter
Honored Contributor

Re: OpenVMS Cluster 7.3-2

Rob,

Do be careful about your quorum. In a two-node cluster without a dual-host-accessible storage unit, the quorum disk will be directly connected to only one machine, with no potential alternate connection.

If the machine without the direct connection fails, the other system will remain up. If the machine with the cluster quorum disk fails, the other node will lose quorum and stall as well.

- Bob Gezelter, http://www.rlgsc.com
Hoff
Honored Contributor

Re: OpenVMS Cluster 7.3-2

Locating a quorum disk on a non-shared and non-multi-host bus seems an odd choice.

If the OpenVMS host box for a single-path quorum disk is down, then the quorum disk is (also) down.

To contribute votes, a quorum disk is best configured with non-served direct paths from two or more hosts.
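
(If multi-host storage does get added later, the quorum disk is named and given votes through SYSGEN parameters; a sketch for MODPARAMS.DAT, with a purely hypothetical device name:)

DISK_QUORUM = "$1$DKA100"   ! hypothetical disk with direct paths from both hosts
QDSKVOTES = 1               ! votes contributed by the quorum disk
EXPECTED_VOTES = 3          ! e.g. 1 vote per node plus the quorum disk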

Here's a low-end cluster tutorial:

http://labs.hoffmanlabs.com/node/569

And as Bob mentions, you do have some choices about votes and quorum and other such details to make here.
comarow
Trusted Contributor

Re: OpenVMS Cluster 7.3-2

First, I wanted to tell you what will make it a cluster over the Ethernet.

The SYSGEN parameter NISCS_LOAD_PEA0 must be set to 1.
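
(For reference, a sketch of the usual MODPARAMS.DAT entries; CLUSTER_CONFIG.COM normally adds these for you, and AUTOGEN applies them:)

VAXCLUSTER = 2        ! always participate as a cluster member
NISCS_LOAD_PEA0 = 1   ! load PEDRIVER for cluster communication over the LAN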

With no shared disks, there is absolutely no point in a quorum disk. If you can get a workstation on the network, it could be the deciding vote.

Otherwise, decide on the most important system and give it one vote, with EXPECTED_VOTES set to 1; that system must then be up. The other would get no votes.

If you add a workstation, each node could get 1 vote, and any 2 votes would keep the cluster happy. But since you are not sharing any data, it is probably best to simplify things: the system that holds the critical data should be the only voting member (a sketch of those settings follows).
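
(A sketch of that in MODPARAMS.DAT, assuming a hypothetical NODEA holds the critical data:)

On NODEA (the voting member):
VOTES = 1
EXPECTED_VOTES = 1

On NODEB (no votes):
VOTES = 0
EXPECTED_VOTES = 1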

Also, a node knows which cluster it is part of via CLUSTER_AUTHORIZE.DAT. It consists of a cluster group number, WHICH MUST BE UNIQUE IN YOUR NETWORK, and a password.

CLUSTER_AUTHORIZE.DAT is what tells which cluster a node belongs to, and it is what allows multiple clusters on the same network.

You can copy CLUSTER_AUTHORIZE.DAT over to the other system if you don't remember the password.

If another cluster in your network has the same group number, you'll get thousands of network errors.

The cluster group number and password are initially set when you run CLUSTER_CONFIG.COM, or with SYSMAN (see HELP CONFIGURATION SET CLUSTER_AUTHORIZATION in SYSMAN for the syntax).
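
(A sketch of the SYSMAN commands, with placeholder group number and password; the change is read from CLUSTER_AUTHORIZE.DAT at the next boot:)

$ MCR SYSMAN
SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION /GROUP_NUMBER=nnn /PASSWORD=xxxx
SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION
SYSMAN> EXIT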

Finally, all disks in the cluster, even those seen by only one system, must have unique volume labels.

Now, you can serve the disks on one node over the network to the other node in the cluster. Obviously that's slow.
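
(With MSCP serving loaded and a nonzero allocation class set, mounting a disk /CLUSTER on the node that owns it makes it available cluster-wide; a sketch with a hypothetical device name and label:)

$ MOUNT/CLUSTER $1$DKA100: DATADISK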

You mentioned 2 Ethernet controllers. It would be great to connect the two nodes directly, either through a switch or simply a crossover (turnaround) cable, and then give that path a higher priority than the other controller.
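
(If memory serves, the PEdriver management priority per LAN device can be adjusted with the SCACP utility on V7.3-2; treat the exact qualifier and the device name below as assumptions and check SCACP's HELP first:)

$ MCR SCACP
SCACP> SET LAN_DEVICE EWB /PRIORITY=2   ! hypothetical dedicated point-to-point NIC
SCACP> SHOW LAN_DEVICE
SCACP> EXIT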

What advantage does clustering these nodes give you?

Bob Comarow