
WEBES install and VMS

 
rman
Advisor

WEBES install and VMS

Ok. We have finally installed WEBES v5.1 on an Itanium system running VMS v8.2-1. It installed, but not in the common directory; instead it installed in the specific area. This customer swings the disk between production and test, so WEBES stops working due to the different roots. Any thoughts on how to get WEBES to install in the common area?
21 REPLIES
comarow
Trusted Contributor

Re: WEBES install and VMS

You can rename the files into the SYS$COMMON directory.

Go to the top-level directory and work your way down to the common directory.
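
A minimal DCL sketch of that idea, assuming WEBES landed under a hypothetical SYS$SPECIFIC:[SVCTOOLS] tree (verify the actual directory names on your system first):

$ ! Move the tree from the node-specific root to the common root;
$ ! both roots live on the same volume, so RENAME works across them
$ ! (BACKUP-then-DELETE is a safer way to move a large tree)
$ RENAME SYS$SPECIFIC:[SVCTOOLS...]*.*;* SYS$COMMON:[SVCTOOLS...]
$ ! Confirm nothing was left behind in the specific root
$ DIRECTORY SYS$SPECIFIC:[SVCTOOLS...]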
rman
Advisor

Re: WEBES install and VMS

Please read my thread in the ISEEWebes area. I believe what you are trying to get me to do will not work because of the way WEBES behaves, but I could be wrong.
Hoff
Honored Contributor

Re: WEBES install and VMS

WEBES installs in the specific area. I never did sort out the specific details, but it looked to be due to file access (locking) conflicts when the various data files are accessed in parallel from multiple nodes. Accordingly, the WEBES developers seem to have worked around this by loading everything into SYS$SPECIFIC:.

What, exactly, do you mean by "swinging the disk" here?

rman
Advisor

Re: WEBES install and VMS

I will have to get a definition of a swing disk from the customer. More or less what they do is: they have a test disk with the new patches already tested, then they swing the disk into production, but the production systems use different roots because they are in the same cluster. Does that make sense?
Hoff
Honored Contributor

Re: WEBES install and VMS

I might guess I know what your customer is up to here, though (if I've guessed correctly) this approach looks mildly hazardous.

There are cases when what's in the particular local root Really Matters, and it looks like the production tests here are missing that case.

Based on empirical observations, I'd also expect this sort of stuff lurking in the node-specific root to become increasingly common, as various of the tools and some other pieces tend to use that area far more heavily than has been traditional. (It's an easy way to avoid dealing with shared locking.)

At its simplest, you might want to install WEBES into both roots, and you might want to test in both roots.

You might also want to review the test coverage, but that's another discussion.
comarow
Trusted Contributor

Re: WEBES install and VMS

Obviously, if it does not use the distributed lock manager, or is root-specific, it will have to stay. My advice was based on the general movement from specific to common directories.

That is sad; as products move away from the distributed lock manager, the cluster advantages that would normally be automatic are lost.
rman
Advisor

Re: WEBES install and VMS

A swing disk is a disk that sits in test and has all the changes applied to it. After that, they make a save set of the SYS$ROOTs. Then they go to the production shadow set and break it, take one of those disks, strip all the root-specific info, lay the save-set roots from the test system onto it, and then they boot.
rman
Advisor

Re: WEBES install and VMS

They are using the lock manager!
Hoff
Honored Contributor

Re: WEBES install and VMS

They are using the lock manager?

"Swing disk" and "lock manager"? Those two constructs don't immediately map as related.

Can I have an antecedent for the "they are using the lock manager!" comment?

Somebody here is seriously confused.

It might well be me, of course.



rman
Advisor

Re: WEBES install and VMS

I might be missing something, but I do not see a correlation between using a swing disk and the lock manager... The swing disk was also used by Intel in the DEC days, where it was called the Golden Disk; this method has been used by other customers with clusters, which I believe requires the lock manager.
Hoff
Honored Contributor

Re: WEBES install and VMS

In a typical HP-supported configuration, patches and such are in the system common root, which is common to all hosts using the disk.

In a high-end production environment, the system environment is tested in a parallel configuration, then switched over to the production environment. I'd not tend to expect to see roots switched here, too, as that introduces more complexity and more risk. I mention this as it appears there is rather more going on here than usual.

{{{{I might be missing something, but I do not see a correlation between using a swing disk and the lock manager...}}}}

I'm trying to sort out specifically why you're referring to the lock manager in your reply. Yes, the two concepts don't appear particularly related.

{{{{The swing disk was also used by Intel in the DEC days, where it was called the Golden Disk;}}}}

I am aware of folks that have used disks as big network packets. DISMOUNT from one cluster, and MOUNT on another. Seen with DSA-series dual-path RA disks, and also feasible with FC SAN disks. Requiring care regardless, lest disk corruptions ensue.
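
In DCL terms, that pattern is roughly (device name and volume label hypothetical):

$ DISMOUNT/CLUSTER $1$DGA100:         ! on the source cluster
$ MOUNT/SYSTEM $1$DGA100: SWINGDISK   ! on the target cluster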

{{{{this method has been used by other customers with clusters, which I believe requires the lock manager.}}}}

If anything, sites that are locating and relocating a disk between disparate clusters are trying to keep both of the lock managers from coordinating (or confusing) the disk, and are specifically looking to avoid getting tangled up with WEBES (and node names and other such?) while switching roots, too. There's rather more going on here than I'm aware of, apparently.

You might want to take this installation enhancement request up with HP more directly, as they're the folks that have created WEBES and the current installation kit. In the interim, I'd look to avoid swapping roots while swapping (system) disks among clusters.
EWL
Occasional Advisor

Re: WEBES install and VMS

I'll explain the swing disk, since Ramon is asking on our behalf. Because of the nature of our business and the goal of 99.99% uptime, all OS and layered products are patched and tested in a test-only cluster. Once they have been determined to be 'stable', a copy of the patched disk is made and only the sysroots (SYS$SPECIFIC) are removed. SYS$COMMON is left intact. The SYS$SPECIFIC stuff is removed because the clusters are located in different areas and obviously have their own network and license stuff.

The copy of the patched disk is copied to the production system on a spare disk, and the production sysroots are backed up and restored to the new disk. Once everything has been checked out, the systems are shut down, the new patched disk is made the SYS$SYSDEVICE, and the systems are booted.

We have used this technique for 10+ years and, until WEBES, have had very little issue. WEBES is not following the normal OpenVMS methodology for layered products.
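
In rough DCL terms, the root backup-and-restore step is something like the following sketch (device names, root number, and save-set name are all hypothetical):

$ ! On production: save one production-specific root (repeat per root),
$ ! excluding the common tree that is aliased inside each root
$ BACKUP $1$DGA100:[SYS0...]*.*;*/EXCLUDE=[SYS0.SYSCOMMON...]*.*;* -
      DKA200:[KITS]SYS0_ROOT.BCK/SAVE_SET
$ ! Restore that root onto the freshly copied, patched disk
$ BACKUP DKA200:[KITS]SYS0_ROOT.BCK/SAVE_SET $1$DGA200:[SYS0...]*.*;*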
Hoff
Honored Contributor

Re: WEBES install and VMS

Thanks for the post here. It has greatly clarified the process and the problem here.

Your disk-structure processing and system-disk deployment scheme (which would probably be considered unsupported) is incompatible with products that use the local root (WEBES, in this case, but there are other packages around that install in SYS$SPECIFIC); your model conflicts with the way various recent applications work.

You have my sympathies here.

This is not a fun spot to be stuck.

These tools seem increasingly likely to reside in the local root specifically because the code has not been customized to operate in traditional cluster environments; the code is not operating with shared semantics and such on its data files.

Either your design and deployment scheme has to change to reflect this, or HP has to change how these tools install and operate -- this particular tool, and any future versions of these and of any other tools that might follow this local-root model, would have to change.

Nothing you'll receive here in ITRC is likely to have any bearing on relief for your environment. You will want to discuss this directly with HP, with HP support, and with HP business management; with the folks that have responsibility for OpenVMS, WEBES, and the other tools. (AFAIK, the particular HP folks involved in this don't usually post here, and may or may not read this thread.)

There are potential ways to re-engineer the existing deployment model for tasks such as licensing and network addresses and such, should your response from HP point to re-engineering your process rather than WEBES and other tools being cluster-integrated.

Those sorts of potential changes to your model, bringing it closer to how HP tends to approach its cluster system disks, are fodder for other discussions. Basically, either a form of factory-installed software or a form of on-line tailoring are the approaches that come immediately to mind here. This can be done using various tools, whether DVD or other such. But I'll leave your existing support staff to sort that out, after your chat with HP.

Stephen Hoffman
HoffmanLabs LLC
EWL
Occasional Advisor

Re: WEBES install and VMS

Thanks Stephen.
Ramon is one of our HP peeps and has listened to us moan and groan. We've tried unsuccessfully for over 18 months to get WEBES to work in an OpenVMS environment in a way that makes sense. WEBES was not developed or programmed on OpenVMS but was ported from some other platform. The install hardcodes node names, devices, and sysroots, instead of following the normal OpenVMS conventions. We've pushed all the way up to engineering and were told "No, WEBES doesn't follow the OpenVMS standards." That being said, it leaves us on i64 with limited ways to read error logs, which in a mission-critical environment is unacceptable. Seems like HP is ignoring OpenVMS in favor of one of those four-letter-word OSs. Sorry, I'll get off the soapbox.

Ramon's hope, and ours, was that someone had run into a similar issue with WEBES and had some workaround, or some hope.

We're even open to installing WEBES off the system disk, but doing that still leaves pieces all over the system disk that just cannot be migrated.

If someone has an easier methodology for migrating patches between production and test that limits the amount of downtime in production, we'd love to hear from you.
rman
Advisor

Re: WEBES install and VMS

Thanks, all, for the input. I am still pushing engineering for some kind of solution. I will keep this thread updated.
Hoff
Honored Contributor

Re: WEBES install and VMS

Potential discussion topics here may not be appropriate for inclusion in the ITRC forums.

If you and/or the HP representatives here would be interested in discussing these matters off-line -- whether from the perspective of updates needed within WEBES and related tools, or of your replication and deployment processing -- feel free to contact me off-line. I'm quite familiar with the subject domains and related issues involved here.

Stephen Hoffman
HoffmanLabs LLC

Re: WEBES install and VMS

The issue is NOT the swing disk. This is a tested procedure used by the old Digital, Intel and many other sites under different names.

There are TWO problems with WEBES.

1. It uses a 281-block .COM file for installation instead of PCSI (or even VMSINSTAL). This in itself is WRONG...

2. It builds what IT calls "SPECIFIC" directories using the NODE NAMES in the COMMON directory tree. Again, wrong. Then, because these so-called "SPECIFIC" directories are in the COMMON tree, they put a STARTUP FILE in the REAL SYS$SPECIFIC directory for each node. Again, wrong...

The system that follows has SYS5 as its specific root:

"SYS$SPECIFIC" = "$1$DGA5169:[SYS5.]" (LNM$SYSTEM_TABLE)


WEBES uses:

"SVCTOOLS_SPECIFIC" = "_$1$DGA5169:[SYS4.SYSCOMMON.HP.NODES.TERMIN.SVCTOOLS.]"

I just want the specific stuff to be put down the REAL specific directories for each node. Then make a common startup file and put it in the SYS$STARTUP common tree...
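
In other words, something like the following (a sketch only; whether WEBES would honor a manually redefined SVCTOOLS_SPECIFIC is untested):

$ ! Hypothetical: point the WEBES "specific" tree at the real per-node root
$ DEFINE/SYSTEM/EXECUTIVE_MODE/TRANSLATION_ATTRIBUTES=(CONCEALED,TERMINAL) -
      SVCTOOLS_SPECIFIC $1$DGA5169:[SYS5.SVCTOOLS.]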

OpenVMS is NOT Windows... Oh, and fix the installation file (make it PCSI).
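
For comparison, a PCSI-packaged layered product installs with a single verb; hypothetically, a proper WEBES kit would look like:

$ PRODUCT INSTALL WEBES /SOURCE=disk:[kit-directory]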
Hoff
Honored Contributor

Re: WEBES install and VMS

But wait, there's more...

Yes, those are certainly issues here, albeit these two are resolvable in isolation. There tend to be others. The file access patterns further down within these tools tend to be the more central matter here; I'm aware of SYS$SPECIFIC installations that have file access collisions.

Ramon Mandry and the other HP folks here can work this out (or out-source the work to partner firms experienced in cluster-common operations -- HoffmanLabs or such), the disk deployment pattern here can change (the scheme here is risky in my estimation), or both. I'd tend to expect WEBES won't be the last of these packages, as part of my calculations.

And I'd tend to go toward both goals in parallel; fixing WEBES, and altering the disk deployment patterns.

Re: WEBES install and VMS

The reality of the "swing disk" is that it's simply set up like a six-node cluster with three nodes removed for production... not really very difficult.

BUT again, I'm not as concerned with the deployment of the product as I am with the SYS$SPECIFIC and SYS$COMMON issues, especially the 281-block .COM file.
Hoff
Honored Contributor

Re: WEBES install and VMS

There are multiple issues within this thread.

I can think of at least six separate issues here in this thread that would (do) concern me.

Various of these issues are unlikely to be resolved in an ITRC forum thread. Some can (do) appear to fall somewhere between ill-suited and entirely impermissible for further discussions here in ITRC, too.
comarow
Trusted Contributor

Re: WEBES install and VMS

I think the very issues that are being avoided are the critical ones. WEBES is an important type of product for VMS, and it should work like VMS. The swing disk is certainly normal, but one-of-a-kind command procedures that break all VMS conventions should be avoided. In fact, it is this that takes away the great advantages.

The question is: does anyone care, or does it continue to be a Rube Goldberg contraption? It can and should be done like other VMS products.