Operating System - OpenVMS

Brian Reiter
Valued Contributor

A question of clustering

Hi Folks,

Currently we support a number of systems running under OpenVMS V7.3-2, with a number of installations around the UK. It is likely that the existing Alpha systems will exist for some time to come, and it is almost certain that these machines will not be upgraded to OpenVMS V8.3 or higher.

Work is also ongoing to port the systems to OpenVMS V8.3-1 on Itanium and to maintain (for some time to come) both flavours of the system. It is expected that a common code base will be maintained (CMS etc.).

So the question becomes how best to maintain the common code base. Can we host the CMS libraries on an NFS share, for example? (I seem to remember reading somewhere that this is not recommended.)

Could we cluster reliably between Alpha V7.3-2 and Itanium V8.3-1? (That configuration is not warranted and, as stated, we're unlikely to migrate.)



Thanks in advance for your help


Brian
Hoff
Honored Contributor

Re: A question of clustering

You're really painted into a corner here, and you have one OK choice and a bunch of bad ones.

The Clustering matrix has V7.3-2 and V8.3 as migration support, which means it mostly works but you can't raise a stink if it fails. (I don't see a V8.3-1H1 SPD off-hand, so I can't tell if that's also migration support.)

CMS won't work on NFS-served disks.

CMS is old and doesn't have an OpenVMS client; a migration to mercurial or SVN or such can be an effort, but can be worth it. I'm not aware of a tool that ports CMS into a newer source code control tool. I don't know if HP plans to add a CMS client.

I'd probably punt on keeping anything on the older boxes here, and would keep your source code and builds and such all on a newer box, and use testing boxes and target practice boxes. Treat the old boxes and old versions and old stuff as if they were embedded, and don't mess with them. Get the boxes set up for remote management, etc.

I'd also start looking at where you want to end up in one and five years: what sorts of upgrades and migrations and ports and such are needed or planned, and how to get there. That you can't get these upgrades deployed points to various flaws in the current scheme; there's a cost to staying on back releases. It's fairly static at first, then it starts getting expensive (when "something" happens), and you're going to be paying that.
Ian Miller.
Honored Contributor

Re: A question of clustering

Cluster your development systems (Alpha V7.3-2 and Itanium V8.3-1H1 - listed as supported for migration purposes), keep the CMS libraries on cluster shared storage, and do the development on V8.3-1H1. Build the application on both systems regularly to make sure you've not broken anything.
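The regular dual-architecture build Ian suggests could be as simple as submitting the same build procedure to a batch queue on each cluster member. A minimal DCL sketch, where the queue names and BUILD_ALL.COM are hypothetical placeholders:

```
$! Hypothetical: submit the same build procedure on both cluster
$! members so cross-architecture breakage shows up early.
$ SUBMIT /QUEUE=ALPHA$BATCH /LOG=BUILD$LOGS: BUILD_ALL.COM
$ SUBMIT /QUEUE=I64$BATCH   /LOG=BUILD$LOGS: BUILD_ALL.COM
```

Run nightly, the two logs give an early warning when a change compiles on one architecture but not the other.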

You will need test and production systems which are not clustered.

V7.3-2 is on long term support and seems to be a popular version to freeze at. The cost of maintaining frozen systems increases over time - sometimes you don't notice until they break.
____________________
Purely Personal Opinion
Robert Gezelter
Honored Contributor

Re: A question of clustering

Brian,

As Hoff noted, there is a difference between "will probably work" and "supported".

I do not have the time to run the tests, but it might be worthwhile to see if CMS can run with files that are transparently accessible using DECnet Transparent File Access by way of RMS. In the past, I have used this to access files on remotely connected systems in other contexts. Since I have not tried this with CMS, YMMV, but it is worth a check.

Since this uses DECnet, not clustering, it would then be possible to maintain the 7.3-2 systems as their own cluster, obviating the cross-version issues.
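For reference, ordinary RMS access through DECnet network file specifications looks like the following (node, device, and directory names here are made-up examples):

```
$! Hypothetical node/device names; plain RMS file access over DECnet.
$ TYPE AXP732::DKA100:[PROJECT.SRC]MODULE.C
$ COPY AXP732::DKA100:[PROJECT.SRC]*.C []
```

Whether CMS itself will accept a network-qualified library specification is the part that needs testing.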

- Bob Gezelter, http://www.rlgsc.com
Volker Halle
Honored Contributor

Re: A question of clustering

Bob,

AXPVMS $ cms set lib i64vms::dsa64:
%CMS-E-NOREF, error referencing I64VMS::DSA64:
-CMS-E-NETNOTALL, network access not allowed
%CMS-W-UNDEFLIB, library is undefined

Volker.
Colin Butcher
Esteemed Contributor

Re: A question of clustering

Why not separate the problem of compile / link / kitbuild etc. on a specific target platform from managing the code base with CMS? Do the CMS work on one central machine (or cluster), yet do all the other work on target platform machines. That avoids the need to closely couple the systems and makes it easy to support multiple projects and multiple targets from a single source library.

It's an easy enough technique which I've used a lot. Typically I have a single CMS server (or development / source cluster) and give access to the extracted sources from the target environments using NFS, or DECdfs, or clustering, or even just by copying the source file set back and forth.

Have a central CMS repository and do all your CMS activity there. When you need to do a compile / link / kitbuild on a specific target, extract all the sources from CMS on the central CMS repository system.

Now all you need is to give network file access to the sources from the target system. You can do that with NFS, or DECdfs, or maybe even just copy the resulting set of files as a zipped backup saveset. Now you can do the compile / link / kitbuild on the target environment using a common set of sources.
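As a rough DCL sketch of that extract-and-ship step (all library, device, and node names here are hypothetical):

```
$! On the central CMS system: fetch the sources and package them.
$ CMS SET LIBRARY CENTRAL$DISK:[CMSLIB.MYPROJ]
$ CMS FETCH *.* "Extract for target build"
$ BACKUP [.EXTRACT...]*.*;* SRC_KIT.BCK /SAVE_SET
$! Ship the saveset to the target build system over DECnet.
$ COPY SRC_KIT.BCK I64BLD::DKA0:[BUILD]
```

On the target, restore the saveset with BACKUP and run the compile / link / kitbuild there.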

You can use a similar mechanism to have write access to the per developer sources from any of the target environments. That way the developers can work on the code base whichever machine they're on, then submit changes back to the central CMS repository by copying files to the appropriate area on the central CMS machine and submitting the updates onto CMS on that central machine. By using something like DECdfs or NFS you retain access to the sources for debugging too.

Sure, it takes a bit of planning and thinking about, plus a bit of ingenious DCL to make life easy for the developers and librarians, but can work very well. It's worked for me in a number of 40+ developer projects with a set of per-project central CMS repositories and several projects with multiple target systems each all running in parallel. All the compile / link / kitbuild (we delivered as VMSINSTAL or PCSI kits) happens on the specific target environments. Lets you deal with different OS versions etc. too so that you can plan ahead for upgrades to the production systems and support a wide range of target machines.

The same concept works quite nicely with other code management products with code libraries on platforms other than VMS and developers editing on whatever their preferred platform is.

Cheers, Colin (http://www.xdelta.co.uk).
Entia non sunt multiplicanda praeter necessitatem (Occam's razor).
Steve Reece_3
Trusted Contributor

Re: A question of clustering

I'd probably look at whether a DECdfs disk is supported with CMS and, if so, look at serving the disks from either environment to the other.

I guess it's a moot point whether you're migrating or not. You expect the environment to be around for a while (not migration) but you are technically migrating from Alpha to Integrity so that might make it migration. HP will probably say that it's not migration since that saves them having to deal with any sticky issues (and, in their shoes, I'd probably do the same!)
Hoff
Honored Contributor

Re: A question of clustering

Colin has a good general scheme, albeit using home-grown CMS and DCL sequences to reinvent what git and mercurial will provide. An "hg clone" is far beyond what CMS can do.
Colin Butcher
Esteemed Contributor

Re: A question of clustering

CMS won't work on a network served disc, be it NFS, DECdfs, CIFS or whatever. Why not? Well, CMS uses things like ACLs and maintains a database with shared read/write access to the records within the database. That access has to be co-ordinated by the operating system.

DECdfs serving, for example, doesn't do shared read / write record level locking. Why not?

Well, it's all to do with co-ordinating access to the records within the file via the lock manager. That works in the context of the single authentication / lock management domain that is a single node or a cluster. It can't work where systems accessing the files are not part of that authentication / lock management domain.

It probably could if someone modified the lock manager (and all the XFC caching and almost everything else in the file system IO path) to work across multiple non-clustered nodes, but I don't see that happening!

It's also one of the underlying reasons why things like NFS and CIFS (and even DECdfs in a way, though it's not mapping UNIX style UIDs/GIDs, just using DECnet proxies) have to map users on one node to users on the 'server' node (or cluster).

Cheers, Colin (http://www.xdelta.co.uk).
Entia non sunt multiplicanda praeter necessitatem (Occam's razor).
Robert Gezelter
Honored Contributor

Re: A question of clustering

Volker, Colin, and Hoff,

I stand corrected. As noted in my post, I had not tried it, but wanted it checked as a possibility.

- Bob Gezelter, http://www.rlgsc.com