06-26-2009 10:09 AM
OpenVMS Cluster 7.3-2
06-26-2009 10:50 AM
Re: OpenVMS Cluster 7.3-2
Cluster communications would actually go over the Ethernet connection between the two systems.
At a minimum, the disks would be visible as served volumes. What are the KZPCAs connected to (precisely)?
What are the actual storage shelves?
- Bob Gezelter, http://www.rlgsc.com
06-26-2009 10:57 AM
Re: OpenVMS Cluster 7.3-2
http://docs.hp.com/en/12700/SPDClusters.pdf
And if you're not already aware of this detail, no cluster communications occur between hosts via (shared) SCSI bus; SCSI is a cluster storage bus and not a cluster communications bus. Even with multi-host SCSI configurations, you must have a cluster communications bus. An Ethernet network can (and often does) fulfill that clustering communications requirement.
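If you want to see that for yourself, the SCS traffic is visible from DCL. A rough sketch; the CIRCUITS class name is from memory, so check the SHOW CLUSTER help if it complains:

    $ SHOW CLUSTER                ! current members and their status
    $ SHOW CLUSTER/CONTINUOUS     ! then type ADD CIRCUITS at the command line;
                                  ! the circuits listed are LAN (PEA0) paths,
                                  ! never the SCSI bus
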
06-26-2009 10:57 AM
Re: OpenVMS Cluster 7.3-2
06-26-2009 10:59 AM
Re: OpenVMS Cluster 7.3-2
06-26-2009 11:09 AM
Re: OpenVMS Cluster 7.3-2
I concur with Hoff; the SPD does not mention that the KZPCA can be used in a multi-host configuration.
Read the SPD that Hoff referenced in detail.
On the other hand, swapping the controllers is not a particularly complex project.
- Bob Gezelter, http://www.rlgsc.com
06-26-2009 11:42 AM
Re: OpenVMS Cluster 7.3-2
And you can't have a shared-bus configuration here.
If I were going to pursue this, I'd get GbE NICs and a pair of supported multi-host SCSI controllers.
Memory Channel? It works, and it works well for certain cluster communications loads and not for others. Not my choice of interconnect, though, save for specific loads. (Do ensure you have current ECOs here, particularly if you head toward MC.) GbE does very well against MC, too.
Verrell Boaen had some detailed comparisons (CPU, latency, and performance) of these interconnects over the years, but I don't have a copy handy. Somebody at HP may well have a copy stashed away.
That you even have Memory Channel widgets around implies you have a fairly extensive and well-stocked spare-parts bin. That's where I'd look for a multi-host SCSI controller and GbE NICs, if you don't already have them.
06-26-2009 11:45 AM
Re: OpenVMS Cluster 7.3-2
06-26-2009 11:52 AM
Re: OpenVMS Cluster 7.3-2
Do be careful about your quorum. In a two-node cluster without dual-host-accessible storage, the quorum disk will be directly connected to only one machine, with no alternate path from the other.
If the machine without the direct connection fails, the other system will keep running. If the machine holding the quorum disk fails, the other node will lose quorum and stall as well.
- Bob Gezelter, http://www.rlgsc.com
06-26-2009 12:13 PM
Re: OpenVMS Cluster 7.3-2
If the OpenVMS host box for a single-path quorum disk is down, then the quorum disk is (also) down.
To contribute votes, a quorum disk is best configured with non-served direct paths from two or more hosts.
Here's a low-end cluster tutorial:
http://labs.hoffmanlabs.com/node/569
And as Bob mentions, you do have some choices about votes and quorum and other such details to make here.
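For reference, those settings live in SYS$SYSTEM:MODPARAMS.DAT and get applied with AUTOGEN. A sketch for the case where a quorum disk really is directly reachable from both hosts; the device name is made up and the values are only illustrative:

    ! SYS$SYSTEM:MODPARAMS.DAT on each node, then @SYS$UPDATE:AUTOGEN GETDATA REBOOT
    VOTES = 1                     ! each node contributes one vote
    EXPECTED_VOTES = 3            ! two node votes plus the quorum disk
    DISK_QUORUM = "$1$DKA100"     ! quorum disk (hypothetical device name)
    QDSKVOTES = 1                 ! votes the quorum disk contributes

With a disk visible to only one host, as discussed above, those last two lines buy you nothing.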
06-26-2009 06:18 PM
Re: OpenVMS Cluster 7.3-2
It's a cluster over the Ethernet; NISCS_LOAD_PEA0 must be set to 1 so that the LAN cluster communications driver (PEDRIVER) is loaded.
With no shared disks there is absolutely no point in a quorum disk. If you can get a workstation on the network, it could be the deciding vote.
Otherwise, decide which system is the most important, give it one vote with EXPECTED_VOTES set to 1, and accept that that system must be up. The other node would get no votes.
If you add a workstation, each node could get one vote, and any two votes would keep the cluster happy. But since you are not sharing any data, it is probably best to keep things simple and make the node holding the critical data the only voting member.
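A minimal MODPARAMS.DAT sketch of that arrangement, assuming the node with the critical data is the only voting member; treat the values as illustrative:

    ! On the important node (the one that must stay up):
    VAXCLUSTER = 2            ! always form/join a cluster
    NISCS_LOAD_PEA0 = 1       ! load PEDRIVER; cluster traffic over the LAN
    VOTES = 1
    EXPECTED_VOTES = 1
    !
    ! On the other node:
    VAXCLUSTER = 2
    NISCS_LOAD_PEA0 = 1
    VOTES = 0
    EXPECTED_VOTES = 1
    !
    ! Apply on each node with @SYS$UPDATE:AUTOGEN GETDATA REBOOT
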
Also, a node knows which cluster it belongs to from CLUSTER_AUTHORIZE.DAT. That file holds a cluster group number, WHICH MUST BE UNIQUE IN YOUR NETWORK, and a password; it is what tells a node which cluster it is part of and is what allows multiple clusters on the same network.
You can copy CLUSTER_AUTHORIZE.DAT over to the other system if you don't remember the password.
If another cluster in your network has the same group number, you'll get thousands of network errors.
The cluster group number and password are initially created when you run the cluster configuration procedure (CLUSTER_CONFIG.COM); to change them afterwards, use SYSMAN. Type
MCR SYSMAN
HELP CONFIGURATION SET CLUSTER_AUTHORIZATION
for the syntax.
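For the record, the file lives at SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT, and a SYSMAN session to change it looks roughly like this; the group number and password below are made up:

    $ MCR SYSMAN
    SYSMAN> SET ENVIRONMENT/CLUSTER
    SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION/GROUP_NUMBER=4321/PASSWORD=SWORDFISH
    SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION
    SYSMAN> EXIT

The change doesn't take effect until you reboot, and every member must end up with the same group number and password.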
Finally, all disks in the cluster, even those visible to only one system, must have unique volume labels.
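If two disks do turn up with the same label, it's easy enough to fix on the less important one; the device name and label here are made up:

    $ MOUNT/OVERRIDE=IDENTIFICATION DKA200:    ! mount it privately despite the clash
    $ SET VOLUME/LABEL=USERDISK2 DKA200:       ! give it a unique label
    $ DISMOUNT DKA200:

Remember to fix any MOUNT commands in your startup files that still name the old label.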
Now, you can serve the disks on one node over the network to the other node in the cluster. Obviously that's slow.
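Serving is just the MSCP server; roughly this, with illustrative values and a made-up device name:

    ! SYS$SYSTEM:MODPARAMS.DAT on the node that owns the disks:
    MSCP_LOAD = 1             ! load the MSCP disk server
    MSCP_SERVE_ALL = 1        ! serve the locally attached disks
    ALLOCLASS = 1             ! nonzero allocation class, so served disks get
                              ! cluster-wide names of the form $1$DKAnnn
    ! (TMSCP_LOAD = 1 does the same for tapes; give the two nodes different
    ! allocation classes if their local device names would otherwise collide.)

    ! After AUTOGEN and a reboot, from DCL:
    $ SHOW DEVICE/SERVED                    ! what this node is serving
    $ MOUNT/CLUSTER $1$DKA100: USERDISK1    ! cluster-wide mount of a served disk
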
You mentioned two Ethernet controllers. It would be great to connect the two nodes directly, either through a switch or simply with a turnaround (crossover) connector, and then give that link a higher priority than the other controller.
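Priorities are set through SCACP; I'm writing the syntax from memory, so check its HELP before trusting the qualifier, and the device name EWB0 is only an example:

    $ MCR SCACP
    SCACP> SHOW LAN_DEVICE                   ! LAN devices PEDRIVER knows about
    SCACP> SET LAN_DEVICE EWB0/PRIORITY=2    ! prefer the dedicated point-to-point link
    SCACP> EXIT
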
What advantage does clustering these nodes give you?
Bob Comarow