
Joseph Huber_1
Honored Contributor

Re: Delaying DCL Commands

David,
what would a LIB$SUBMIT("options&parameters") offer over
LIB$SPAWN("SUBMIT options&parameters"),
except speed (it would not have to create a subprocess)?
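
For comparison, roughly what the status quo looks like from a 3GL - a minimal C sketch, assuming a command procedure named MYPROC.COM, with LIB$SPAWN's optional arguments simply omitted so that input, output, flags etc. all default:

#include <descrip.h>
#include <lib$routines.h>
#include <stdio.h>

int main(void)
{
    /* The DCL command the spawned subprocess will execute. */
    $DESCRIPTOR(cmd, "SUBMIT/NOPRINT MYPROC.COM");

    /* Creates (and waits for) a subprocess running DCL,
       just to execute this one SUBMIT command. */
    unsigned int status = lib$spawn(&cmd);
    if (!(status & 1))
        fprintf(stderr, "spawn failed, status %u\n", status);
    return status;
}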
http://www.mpp.mpg.de/~huber
Hoff
Honored Contributor

Re: Delaying DCL Commands

There are two threads here, a generic approach to the batch sequencing discussion, and a parallel thread specific to the cleanup and the shutdown processing.

Existing job managers include a cron port, the kronos tool, and commercial process management tools. Process control and sequencing, and operator notification and control, are all comparatively weak.

Regardless, new routines such as lib$submit and lib$copy, and, yes, a job manager, would all be nice.
Hein van den Heuvel
Honored Contributor

Re: Delaying DCL Commands

Let's hear back from Dev Shah first?!

Joseph,

David picked up on an item I slipped in.

Indeed, the purpose of a LIB$SUBMIT would be:
1) performance / resource-usage reduction by avoiding the SPAWN

But also:
2) flexibility, as SPAWN requires a 'normal' full process to be usable
3) reliability - fewer moving parts
4) less clutter in Accounting

Hein
David Jones_21
Trusted Contributor

Re: Delaying DCL Commands

The main advantage of LIB$SUBMIT would be that it is easier to get information back, such as the queue name and entry number.

Hein, to parse the user command, you'd want to use CLI$DCL_PARSE, so you'd still need a 'full' process with DCL.
I'm looking for marbles all day long.
John Gillings
Honored Contributor

Re: Delaying DCL Commands

The trouble with the proposed LIB$SUBMIT is that it would need a rather complex syntax to specify all the possible options. By the time you had a workable routine, it would be as complex as $SNDJBC, so what's the gain? Item lists may seem daunting, but they're really not that difficult.
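
For a sense of scale, here is a minimal sketch of the $SNDJBC route in C (the file and queue names are placeholders); note the SJC$_ENTRY_NUMBER_OUTPUT item, which is also how you get the entry number back:

#include <iledef.h>
#include <iosbdef.h>
#include <sjcdef.h>
#include <starlet.h>
#include <stdio.h>

int main(void)
{
    unsigned int entry = 0;
    IOSB iosb;

    /* Item list: the file to submit, the target queue, and an
       output item that returns the assigned entry number. */
    ILE3 items[] = {
        { 10, SJC$_FILE_SPECIFICATION, "MYPROC.COM", 0 },
        {  9, SJC$_QUEUE,              "SYS$BATCH",  0 },
        { sizeof entry, SJC$_ENTRY_NUMBER_OUTPUT, &entry, 0 },
        { 0, 0, 0, 0 }                      /* terminator */
    };

    unsigned int status =
        sys$sndjbcw(0, SJC$_ENTER_FILE, 0, items, &iosb, 0, 0);
    if (status & 1)
        status = iosb.iosb$w_status;  /* queue manager status */
    if (status & 1)
        printf("submitted as entry %u\n", entry);
    else
        fprintf(stderr, "submit failed, status %u\n", status);
    return status;
}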

Parsing a DCL SUBMIT command would have to be done using the command tables, to prevent maintenance and divergence issues - SUBMIT is a non-trivial command. But that means you'd be restricted to processes that have a CLI. It also means the code required to build item lists is replaced by (possibly equally complex) code constructing a command string.

>The main advantage to LIB$SUBMIT would be
>it is easier to get information back,
>such as the queue name and entry number.

How? With an item list?

>1) performance / resource usage reduction
>by avoiding the SPAWN

I could see this argument on a VAX750, when SPAWN really was expensive. But are you really going to be doing that many SUBMITs? These days you'd be lucky to win back the time it takes to code a more complex mechanism just to avoid a SPAWN.

A simpler LIB$SUBMIT might have value, but most languages, including DCL, already have a simplistic way to SUBMIT a file to the default batch queue. Use the RMS "DISPOSE=SUBMIT" option.

In DCL it looks like this:

$ OPEN/READ cmd "MYPROC.COM"
$ CLOSE/DISPOSITION=SUBMIT cmd

You can even submit a job to a queue on a remote system using DECnet. See the docs for your favourite language for the exact syntax.
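
From a 3GL, the same disposition is, if memory serves, the SCF ("submit command file on close") bit in the FAB's FOP field - a rough C sketch, again assuming an existing MYPROC.COM:

#include <rms.h>
#include <starlet.h>
#include <stdio.h>

int main(void)
{
    /* FAB describing the existing command procedure. */
    struct FAB fab = cc$rms_fab;
    fab.fab$l_fna = "MYPROC.COM";
    fab.fab$b_fns = 10;
    fab.fab$b_fac = FAB$M_GET;
    /* FAB$M_SCF is RMS's DISPOSE=SUBMIT: the file is handed
       to the default batch queue when it is closed. */
    fab.fab$l_fop = FAB$M_SCF;

    unsigned int status = sys$open(&fab);
    if (status & 1)
        status = sys$close(&fab);   /* the close queues the job */
    if (!(status & 1))
        fprintf(stderr, "submit-on-close failed, status %u\n", status);
    return status;
}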

There are lots of other higher priority things that scarce VMS engineering resources could be spent on.
A crucible of informative mistakes
Dev Shah_1
New Member

Re: Delaying DCL Commands

Before I begin, I want to apologize for not responding sooner. I have been on vacation the past few days, and it seems that update notifications are not being sent to my email properly; otherwise I would have responded sooner.

Secondly, I want to thank all of you for sharing your knowledge with me. I've read through all your wonderful suggestions, and have some discussion points of my own.


1. I understand that cleaning files at startup has many advantages. However, the material in question is highly sensitive (user profiles & user log files) and CANNOT be left on the disks after shutdown for multiple security reasons.

2. I agree with David that a LIB$SUBMIT would make it much easier to get information back, which is why the SYNCHRONIZE command (THANK YOU HEIN) is wonderful for submitting jobs on the host node. However, it does not (or I haven't figured out how to make it) work with SPAWN. The code uses a "$ SPAWN/NOWAIT SUBMIT/REMOTE" to submit the job to remote nodes, which I can't use SYNCHRONIZE with... somebody please correct me if I'm wrong.

3. I really like the SYSMAN idea. However, I'm a little confused by people's responses. Bob said that SYSMAN would be completely synchronous, in which case, why do I still need a queue with a job limit of 1? Also, I haven't looked at the code for SYSMAN, but in the case of shutdown, is the "make sure you shut me down last" part apparent? Also, the "do delete *.txt" command seems a little too simple for what I'm trying to do. Does SYSMAN allow "do delete.com"?

Note: I still have to talk to the people above my paygrade to see whether or not we can even submit a batch job on a different queue than SYS$BATCH.

Also, to John Gillings... using RSH/SSH is brilliant. I haven't tried it yet, but I will be taking a look at the code. I was just impressed with the out-of-the-box thinking.

Once again, thank you all for your help; I will be paying closer attention to the thread now. So, please don't go away. :)

- Dev
GuentherF
Trusted Contributor

Re: Delaying DCL Commands

I might be missing something. What about submitting a.com, which does the cleanup, and having a.com submit b.com (or use SYSMAN) on the node-specific execution queues?

/Guenther
Hoff
Honored Contributor

Re: Delaying DCL Commands

It can be inferred that other critical details have been omitted here.

Why? If the data here is sufficiently sensitive that a wait through to a reboot is a design consideration, then that data likely needs to be expunged just as soon as it is no longer required by the application. In the application. Not at shutdown.

And data that is sufficiently sensitive to exposure (also) needs to be encrypted. And application and configuration management also becomes a factor.

And it is likely you know this, which then implies there are application-specific considerations here that are leading you to, bluntly, retrofit hacks to paper over security flaws.

Yet we're looking at after-the-fact DCL hackery.

So.

What (other) details are being omitted? Are the disks encrypted? Are the applications periodically audited and checked for integrity? Does the site have access to and have the tools necessary to make changes to the applications, or is this vendor-provided closed-source software?

This really smells like a nasty problem that some auditor noticed but didn't fully understand, and like there's a rather more fundamental design bug lurking here.