Operating System - OpenVMS

A question of clustering

 
Brian Reiter
Valued Contributor

A question of clustering

Hi Folks,

Currently we support a number of systems running OpenVMS V7.3-2, with installations around the UK. The existing Alpha systems are likely to remain in service for some time to come, and it is almost certain that these machines will not be upgraded to OpenVMS V8.3 or higher.

Work is also ongoing to port the systems to OpenVMS V8.3-1 on Itanium and to maintain both flavours of the systems for some time to come. It is expected that a common code base will be maintained (CMS etc.).

So the question becomes how best to maintain the common code base. Can we host the CMS libraries on an NFS share for example? (I seem to remember reading somewhere that this is not recommended)

Could we cluster reliably between Alpha V7.3-2 and Itanium V8.3-1? (That configuration is not warranted, and as stated we're unlikely to migrate.)



Thanks in advance for your help


Brian
15 REPLIES
Hoff
Honored Contributor

Re: A question of clustering

You're really painted into a corner here, and you have one OK choice and a bunch of bad ones.

The clustering support matrix lists V7.3-2 and V8.3 as supported for migration purposes, which means it mostly works but you can't raise a stink if it fails. (I don't see a V8.3-1H1 SPD off-hand, so I can't tell whether that combination is also migration support.)

CMS won't work on NFS-served disks.

CMS is old and doesn't have an OpenVMS client; a migration to mercurial or SVN or such can be an effort, but can be worth it. I'm not aware of a tool that ports CMS into a newer source code control tool. I don't know if HP plans to add a CMS client.

I'd probably punt on keeping anything on the older boxes here. Keep your source code and builds and such all on a newer box, and use the older systems as testing boxes and target-practice boxes. Treat the old boxes, old versions and old stuff as if they were embedded systems, and don't mess with them. Get the boxes set up for remote management, etc.

I'd also start looking at where you want to end up in one year and in five years: what sorts of upgrades, migrations and ports are needed or planned, and how to get there. That you can't get these upgrades deployed points to various flaws in the current scheme. There's a cost to staying on back releases: it's fairly static for a while, then it starts getting expensive (when "something" happens), and you're going to be paying that.
Ian Miller.
Honored Contributor

Re: A question of clustering

Cluster your development systems (Alpha V7.3-2 and Itanium V8.3-1H1 - listed as supported for migration purposes), keep the CMS libraries on cluster shared storage, and do the development on V8.3-1H1. Build the application on both systems regularly to make sure you've not broken anything.
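A regular cross-architecture build along those lines can be one DCL procedure run on each cluster member. A minimal sketch, assuming a shared CMS library at DISK$CMS:[PROJECT.CMSLIB] and a hypothetical per-platform BUILD_ALL.COM (all names made up):

```dcl
$! NIGHTLY_BUILD.COM - sketch only; library, disk and procedure names
$! are hypothetical. Run on both the Alpha and the Itanium member.
$ arch = f$getsyi("ARCH_NAME")             ! "Alpha" or "IA64"
$ cms set library disk$cms:[project.cmslib]
$ set default disk$build:[nightly.'arch']  ! per-architecture work area
$ cms fetch *.* "nightly build on ''arch'"
$ @build_all                               ! compile / link for this platform
$ write sys$output "Nightly build finished on ''arch'"
```

Running the same procedure unchanged on both members is the point: any source change that breaks one architecture shows up the next morning.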

You will need test and production systems which are not clustered.

V7.3-2 is on long term support and seems to be a popular version to freeze at. The cost of maintaining frozen systems increases over time - sometimes you don't notice until they break.
____________________
Purely Personal Opinion
Robert Gezelter
Honored Contributor

Re: A question of clustering

Brian,

As Hoff noted, there is a difference between "will probably work", and "supported".

I do not have the time to run the tests, but it might be worthwhile to see if CMS can run with files that are transparently accessible using DECnet Transparent File Access by way of RMS. In the past, I have used this to access files on remotely connected systems in other contexts. Since I have not tried this with CMS, YMMV, but it is worth a check.

Since this uses DECnet, not clustering, it would then be possible to maintain the 7.3-2 systems as their own cluster, obviating the cross-version issues.

- Bob Gezelter, http://www.rlgsc.com
Volker Halle
Honored Contributor

Re: A question of clustering

Bob,

AXPVMS $ cms set lib i64vms::dsa64:
%CMS-E-NOREF, error referencing I64VMS::DSA64:
-CMS-E-NETNOTALL, network access not allowed
%CMS-W-UNDEFLIB, library is undefined

Volker.
Colin Butcher
Esteemed Contributor

Re: A question of clustering

Why not separate the problem of compile / link / kitbuild etc. on a specific target platform from managing the code base with CMS? Do the CMS work on one central machine (or cluster), yet do all the other work on target platform machines. That avoids the need to closely couple the systems and makes it easy to support multiple projects and multiple targets from a single source library.

It's an easy enough technique which I've used a lot. Typically I have a single CMS server (or development / source cluster) and give access to the extracted sources from the target environments using NFS, or DECdfs, or clustering, or even just by copying the source file set back and forth.

Have a central CMS repository and do all your CMS activity there. When you need to do a compile / link / kitbuild on a specific target, extract all the sources from CMS on the central CMS repository system.

Now all you need is to give network file access to the sources from the target system. You can do that with NFS, or DECdfs, or maybe even just copy the resulting set of files as a zipped backup saveset. Now you can do the compile / link / kitbuild on the target environment using a common set of sources.
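As a rough sketch of that extract-and-package step (disk, library and node names are all hypothetical), the central CMS machine might do something like:

```dcl
$! On the central CMS machine: extract a baselined source set and
$! package it for a target system. All names here are hypothetical.
$ cms set library disk$cms:[central.cmslib]
$ set default disk$work:[extract]
$ cms fetch *.* /generation="RELEASE_V2" "extract for target build"
$! Package the extracted tree as a BACKUP saveset...
$ backup [...]*.*;* disk$work:[kits]target_src.bck/save_set
$! ...and copy it to the target over the network.
$ copy disk$work:[kits]target_src.bck target::disk$build:[incoming]
```

The transfer step is whatever the site allows: DECnet COPY as above, FTP, or NFS/DECdfs access to the extract area.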

You can use a similar mechanism to have write access to the per developer sources from any of the target environments. That way the developers can work on the code base whichever machine they're on, then submit changes back to the central CMS repository by copying files to the appropriate area on the central CMS machine and submitting the updates onto CMS on that central machine. By using something like DECdfs or NFS you retain access to the sources for debugging too.

Sure, it takes a bit of planning and thinking about, plus a bit of ingenious DCL to make life easy for the developers and librarians, but it can work very well. It's worked for me on a number of 40+ developer projects, with a set of per-project central CMS repositories, and on several projects with multiple target systems all running in parallel. All the compile / link / kitbuild work (we delivered VMSINSTAL or PCSI kits) happens on the specific target environments. It also lets you deal with different OS versions, so you can plan ahead for upgrades to the production systems and support a wide range of target machines.

The same concept works quite nicely with other code management products with code libraries on platforms other than VMS and developers editing on whatever their preferred platform is.

Cheers, Colin (http://www.xdelta.co.uk).
Entia non sunt multiplicanda praeter necessitatem (Occam's razor).
Steve Reece_3
Trusted Contributor

Re: A question of clustering

I'd probably look at whether a DECdfs disk is supported with CMS and, if so, look at serving the disks from either environment to the other.

I guess it's a moot point whether you're migrating or not. You expect the environment to be around for a while (so not a migration), but you are technically migrating from Alpha to Integrity, so that might make it one. HP will probably say that it's not migration, since that saves them having to deal with any sticky issues (and, in their shoes, I'd probably do the same!)
Hoff
Honored Contributor

Re: A question of clustering

Colin has a good general scheme, albeit one using home-grown CMS and DCL sequences to reinvent what git and mercurial provide out of the box. An "hg clone" is far beyond what CMS can do.
Colin Butcher
Esteemed Contributor

Re: A question of clustering

CMS won't work on a network-served disc, be it NFS, DECdfs, CIFS or whatever. Why not? Well, CMS uses things like ACLs and maintains a database with shared read/write access to the records within that database. That access has to be co-ordinated by the operating system.

DECdfs serving, for example, doesn't do shared read/write record-level locking. Why not?

Well, it's all to do with co-ordinating access to the records within the file via the lock manager. That works in the context of the single authentication / lock management domain that is a single node or a cluster. It can't work where the systems accessing the files are not part of that authentication / lock management domain.

It probably could if someone modified the lock manager (and all the XFC caching and almost everything else in the file system IO path) to work across multiple non-clustered nodes, but I don't see that happening!

It's also one of the underlying reasons why things like NFS and CIFS (and even DECdfs in a way, though it's not mapping UNIX-style UIDs/GIDs, just using DECnet proxies) have to map users on one node to users on the server node (or cluster).

Cheers, Colin (http://www.xdelta.co.uk).
Entia non sunt multiplicanda praeter necessitatem (Occam's razor).
Robert Gezelter
Honored Contributor

Re: A question of clustering

Volker, Colin, and Hoff,

I stand corrected. As noted in my post, I had not tried it, but wanted it checked as a possibility.

- Bob Gezelter, http://www.rlgsc.com
Mike Kier
Valued Contributor

Re: A question of clustering

Is there any reason the CMS reference copy directory, if used, cannot reside on NFS or DFS storage? I'd think clustering would be preferable all around, but that would at least allow read-only access to the latest generation of each element in the library. It's not great (no groups or classes, and no access to variants or prior generations), but it could be enough for the OP's needs.
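For what it's worth, the reference copy directory is set on the library itself, and CMS then keeps it current as elements are replaced. A sketch (directory names are hypothetical, and the qualifier is from memory, so check HELP CMS):

```dcl
$! Give a library a reference copy directory: CMS then keeps the latest
$! generation of every element there, and that directory could be
$! NFS- or DFS-served read-only to other systems. Names are made up.
$ cms create library disk$cms:[project.cmslib] -
        /reference_copy=disk$cms:[project.refcopy]
$! Or add one to an existing library:
$ cms modify library /reference_copy=disk$cms:[project.refcopy]
```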
Practice Random Acts of VMS Marketing
Brian Reiter
Valued Contributor

Re: A question of clustering

Hi Folks,

Thanks for your help. It looks as though we'll be trialling the cluster solution in the short term, at least to see what the risks actually are (we have support with HP at the moment). The clustering option does seem to offer us the most benefits at the moment (without unduly affecting the way the engineering staff work).

Currently we use Mike's suggestion (CMS reference copy on an NFS share) but this is not suitable for all our systems.

To clarify, we have over 100 DS10s and DS15s out in the field at the moment (spares, live, test etc.), and the cost to the customers of a V7.3-2 to V8.3-1 Alpha upgrade is thought to be too high.

Different customers have different requirements: some may be willing to upgrade the Alphas, some may not, and some are willing to upgrade to Itanium. As the Itanium systems are installed, the pool of DS10s and DS15s we use for spares will gradually increase, mitigating to some extent one of our other main fears (DS10 and DS15 spares). Ideally this should have been done 3 or 4 years ago, but that is the problem when dealing with governmental inertia.

Hopefully we'll have our first Itanium system out in the wild some time in March.

Thanks to you all


cheers

Brian
Steve Reece_3
Trusted Contributor

Re: A question of clustering

Good luck with your migrations and beware of those alignment faults!

Steve
Brian Reiter
Valued Contributor

Re: A question of clustering

Hmm, luckily most of the alignment faults have been dealt with (I hope). Some of the software is close to 20/21 years old and is now onto its third hardware platform.
Steve Reece_3
Trusted Contributor

Re: A question of clustering

Good luck is probably still in order. As I guess you know, alignment faults are far more expensive on Integrity than on Alpha or VAX, despite the problem having been there since VAX days. On Alpha we didn't care much, because there were plenty of CPU cycles and an alignment fault wasn't too expensive.
In a previous job our application was doing hundreds of thousands of internet transactions a day (the peak was just short of half a million when I left). It wasn't just the data that had to be realigned but also the instructions in the internet drivers. I'm just glad I wasn't the programmer! ;o)
YMWV (that's "your mileage WILL vary!")
Steve
Hoff
Honored Contributor

Re: A question of clustering

I like CMS, and used it for many years. Then I started using git and mercurial (Hg) because of some of the projects I was working with. CMS is really limited in some of its capabilities, and you're right in the middle of one of the weaker areas in the product; it's just not good at distributed environments.

If dragging your environment out of CMS and into Hg isn't feasible (whether done incrementally as part of moving to a new version, or otherwise), then a potential workaround is PCSI and installation kits. Treat your software environment as an installation, and use the generations too.

I've driven the PCSI creation right off the source system, too.
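For anyone trying that route, the shape of it is a product description file plus a PRODUCT PACKAGE run on the source system. A minimal sketch, with the producer, product and file names all made up:

```dcl
$! MYAPP.PDF - minimal PCSI product description (sketch; names made up)
product MYCO I64VMS MYAPP V2.1 full ;
    directory [myapp] ;
    file [myapp]myapp.exe ;
    file [myapp]myapp_startup.com ;
end product ;
```

That gets packaged with something along the lines of PRODUCT PACKAGE MYAPP /SOURCE=[pdf_dir] /DESTINATION=[kits] /MATERIAL=[build] (qualifiers from memory, so check HELP PRODUCT), and installed on the targets with PRODUCT INSTALL MYAPP.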