Operating System - OpenVMS

Dev Shah_1
New Member

Delaying DCL Commands

First, let me apologize, I spent time searching for this question, but could not find it, so I'm starting a new thread.

I have a program that I want to do the following things: Execute File_A.com, Wait till File_A.com is finished, and then Execute File_B.com

Currently the program does a SPAWN/NOWAIT SUBMIT of File_B.com to the remote nodes and then a SUBMIT of File_B.com to itself. This sends the shutdown command to all remote nodes and then gives the host node the same command.

I want to submit File_A.com (which deletes the temporary folders, and takes anywhere from 1-2 minutes) to the remote nodes and wait for it to finish deleting before submitting File_B.com.

I've tried using SPAWN/WAIT and different forms of SUBMIT without success. Basically, it submits File_A.com to run, but submits File_B.com at the same time and just shuts the computer down before anything gets deleted. I'm new to DCL.

Is there any way I can easily code this without hard-coding a WAIT 00:02:00 into the code?

I want to thank everyone in advance for their help.

- Dev
Hein van den Heuvel
Honored Contributor

Re: Delaying DCL Commands


Check out the $ SYNCHRONIZE command.
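For example, something along these lines on the host node (a minimal sketch; recent VMS versions define the local symbol $ENTRY when SUBMIT completes):

$ SUBMIT/NOPRINT FILE_A.COM            ! SUBMIT defines $ENTRY with the entry number
$ SYNCHRONIZE/ENTRY='$ENTRY'           ! waits here until the FILE_A job has completed
$ SUBMIT/NOPRINT FILE_B.COM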

Also, you may want to call the job controller directly to submit a job, but admittedly a LIB$SPAWN call is much easier.

Hmmm... it would perhaps be nice to have a LIB$SUBMIT kind of call which takes a SUBMIT or PRINT string as one would use at DCL level and parse it into a corresponding SYS$SNDJBC call.

Does anyone have anything handy? Freeware?


Hein
Hoff
Honored Contributor

Re: Delaying DCL Commands

Implement your application clean-up sequence within the SYSHUTDWN.COM procedure and use SYSMAN to send a SHUTDOWN to all hosts?

Better still, implement your application clean-up sequence in SYSTARTUP_VMS.COM procedure, and then use SYSMAN to send the SHUTDOWN command to all hosts.

Graham Burley
Frequent Advisor

Re: Delaying DCL Commands

Either create a C.COM that runs A.COM & B.COM and submit that, or submit the jobs to a queue with a JOB_LIMIT of 1 ?
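For the first option, C.COM needs to be nothing more than (a rough sketch, using the file names from the original post):

$! C.COM - run the cleanup, then the shutdown, strictly in order
$ @FILE_A.COM
$ @FILE_B.COM

and you then submit C.COM instead of the two jobs separately.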

Steve Reece_3
Trusted Contributor

Re: Delaying DCL Commands

Hi Dev,

Welcome to DCL.

What I'd do in this case is code what I want into a command procedure which would call SYSMAN.

SYSMAN is able to do things on remote nodes. It requires a password for the remote nodes if they're not in your present login environment (i.e. it shouldn't ask in a cluster, but if you're connecting to another node that's not clustered with your present one then it will ask for a password.)

You'd need to do something like this:

$ MC SYSMAN
set environment/node=(nodea,nodeb,nodec,noded)
do delete disk$user:[mydirectory]temp*.txt;*
shutdown node/min=0/reboot_check/invoke/noauto
$ exit

If I've remembered my syntax right, the /noauto should prevent the node from automatically rebooting - i.e. it will shut down, not reboot.
/min=0 says, "do it now"
/reboot_check tells the systems to check that they have the basic files that VMS expects in order to reboot. It doesn't check for user files, so it doesn't guarantee that the system will start up correctly; it's just a best-effort check that VMS itself will start.

Alternatively, submit jobs to a batch queue with a job limit of one so that the shutdown job will have to wait behind the delete job.
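For example (the queue name here is made up; substitute whatever fits your configuration):

$ INITIALIZE/QUEUE/BATCH/JOB_LIMIT=1/ON=NODEA:: NODEA_CLEANUP
$ START/QUEUE NODEA_CLEANUP
$ SUBMIT/QUEUE=NODEA_CLEANUP FILE_A.COM
$ SUBMIT/QUEUE=NODEA_CLEANUP FILE_B.COM

With a job limit of one, the FILE_B job cannot start until the FILE_A job has finished.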
Robert Gezelter
Honored Contributor

Re: Delaying DCL Commands

Dev,

I too would use SYSMAN's DO command, following a SET ENVIRONMENT.

A batch queue on each node would leave open a variety of race conditions if the other node were shut down or shutting down; the request would simply disappear in those cases. SYSMAN, by contrast, would be entirely synchronous.

As a safety, the clean-out step should also be included in the application's STARTUP processing on system reboot.

- Bob Gezelter, http://www.rlgsc.com
Hoff
Honored Contributor

Re: Delaying DCL Commands

Application cleanup during shutdown catches a few cases; those that involve a clean shutdown.

Cleanup at startup time catches all the cases.

If you have restrictive startup time constraints, then look to move the scratch directory (directories?) and cache directories to the side via RENAME and create and start with a new scratch directory at startup, and then clean up the older scratch directories once the application is running and has some cycles available for the task.
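A rough sketch of that approach (the directory names here are made up):

$! At application startup: move the old scratch area aside and start clean
$ RENAME DISK$APP:[APP]SCRATCH.DIR DISK$APP:[APP]SCRATCH_OLD.DIR
$ CREATE/DIRECTORY DISK$APP:[APP.SCRATCH]
$! Later, once the application is running and has cycles to spare:
$ DELETE DISK$APP:[APP.SCRATCH_OLD...]*.*;*
$ SET SECURITY/PROTECTION=(OWNER:RWED) DISK$APP:[APP]SCRATCH_OLD.DIR
$ DELETE DISK$APP:[APP]SCRATCH_OLD.DIR;1

(Directory files are normally protected against deletion, hence the SET SECURITY; any nested subdirectories have to be emptied and deleted bottom-up first.)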

Better yet, perform periodic cleanup of the caches or scratch files; some OpenVMS application environments operate continuously for days or weeks and can (depending on the application) build up cruft or file versions at run-time.

But shutdown is not particularly effective.

I'd use the site-specific SYSTARTUP_VMS.COM procedure or the application startup procedure (mumble_STARTUP.COM, usually), or directly integrate the file management tasks into the application.
John Gillings
Honored Contributor

Re: Delaying DCL Commands

Dev,

You're correct to avoid the simplistic "wait 2:00" method. It's bound to fail at some time.

As others have suggested, batch jobs and SYNCHRONIZE or SYSMAN are definite possibilities. You can also use RSH or SSH, which are synchronous.
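With TCP/IP Services RSH and suitable proxies in place, a sketch of the idea would be (node and file names are illustrative, and the exact quoting may vary):

$ RSH NODEA "@DISK$USER:[MYDIR]FILE_A.COM"
$ RSH NODEA "@DISK$USER:[MYDIR]FILE_B.COM"

Each RSH returns only when the remote command completes, which gives you the ordering for free.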

Another option is to use the lock manager. If you're V8.3 or higher, you can use a shared indexed file and READ/WAIT to implement a dead man lock.

So your procedure A.COM locks the record, and when it's finished it releases the record. B.COM can READ/WAIT the same record. This scales nicely to multiple nodes, using the node name as part of the record key.
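In outline it looks something like this (the lock file name and key layout are made up; the file just needs one record per node):

$! A.COM - take this node's record lock, clean up, then release it
$ node = F$GETSYI("NODENAME")
$ OPEN/READ/WRITE/SHARE=WRITE lck DISK$USER:[MYDIR]NODELOCK.IDX
$ READ/KEY="''node'" lck rec
$! ... the cleanup work goes here ...
$ CLOSE lck
$
$! B.COM - block until A.COM has released this node's record
$ node = F$GETSYI("NODENAME")
$ OPEN/READ/WRITE/SHARE=WRITE lck DISK$USER:[MYDIR]NODELOCK.IDX
$ READ/WAIT/KEY="''node'" lck rec
$ CLOSE lck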

I've included a procedure that automatically distributes cleanup and shutdown across a cluster, using RMS locks. Note that it assumes TCPIP proxies across the cluster to use RSH. Because RSH is synchronous, it's not actually necessary to use the locks. If you want to make the jobs on each node execute in parallel, modify the procedure to run an asynch process on the remote node, then you will need the locking.
A crucible of informative mistakes
John McL
Trusted Contributor

Re: Delaying DCL Commands

Why are you telling other nodes in the cluster to delete files when you could probably directly do that from the node you are on?

If you do that then it's just a cluster shutdown from the node where you are.

Further, what's your reason for deleting these files prior to shutdown?

There could be several different reasons, but I suggest that deleting during startup (perhaps via a batch job) could be an easier option, and deleting them after startup might also give you the chance to review them before they are deleted.
David Jones_21
Trusted Contributor

Re: Delaying DCL Commands

"Hmmm... it would perhaps be nice to have a LIB$SUBMIT kind of call which takes a SUBMIT or PRINT string as one would use at DCL level and parse it into a corresponding SYS$SNDJBC call.

Does anyone have anything handy? Freeware?"

23 years ago, before queues supported ACLs, I wrote a substitute print command that effectively did that. It's pretty straightforward to map the job/file attribute qualifiers to SJC item codes, but you have to process wildcard file specifications and the selection criteria (/since, /before, etc.) qualifiers yourself.
I'm looking for marbles all day long.
Joseph Huber_1
Honored Contributor

Re: Delaying DCL Commands

David,
what would a LIB$SUBMIT("options&parameters") offer over
LIB$SPAWN("SUBMIT options&parameters"),
except speed (it would not have to create a sub-process) ?
http://www.mpp.mpg.de/~huber
Hoff
Honored Contributor

Re: Delaying DCL Commands

There are two threads here, a generic approach to the batch sequencing discussion, and a parallel thread specific to the cleanup and the shutdown processing.

Existing job managers include a cron port, the kronos tool, and commercial process management tools. Process control and sequencing, and operator notification and control, are all comparatively weak.

Regardless, new routines such as lib$submit and lib$copy, and, yes, a job manager, would all be nice.
Hein van den Heuvel
Honored Contributor

Re: Delaying DCL Commands

Let's hear back from Dev Shah first ?!

Joseph,

David picked up on an item I slipped in.

Indeed, the purpose of a LIB$SUBMIT would be
1) performance / resource usage reduction by avoiding the SPAWN

But also
2) flexibility, as a SPAWN requires a 'normal' full process in order to be used
3) reliability - fewer moving parts
4) removing clutter from Accounting.

Hein
David Jones_21
Trusted Contributor

Re: Delaying DCL Commands

The main advantage to LIB$SUBMIT would be it is easier to get information back, such as the queue name and entry number.

Hein, to parse the user command, you'd want to use CLI$DCL_PARSE, so you'd still need a 'full' process with DCL.
I'm looking for marbles all day long.
John Gillings
Honored Contributor

Re: Delaying DCL Commands

The trouble with the proposed LIB$SUBMIT is it would need a rather complex syntax to specify all the possible options. By the time you had a workable routine, it would be as complex as $SNDJBC, so what's the gain? Item lists may seem daunting, but they're really not that difficult.

Parsing a DCL SUBMIT command would have to be done using command tables, to prevent maintenance and divergence issues - SUBMIT is a non-trivial command. But that means you'd have the restriction of having to have a CLI. It also means the code required to build item lists is replaced by (possibly equally complex) code constructing a command string.

>The main advantage to LIB$SUBMIT would be
>it is easier to get information back,
>such as the queue name and entry number.

How? With an item list?

>1) performance / resource usage reduction
>by avoiding the SPAWN

I could see this argument on a VAX750 when SPAWN really was expensive. But are you really going to be doing that many SUBMITs? These days you'd be lucky to win back the time it takes to code a more complex mechanism just to avoid a SPAWN.

A simpler LIB$SUBMIT might have value, but most languages, including DCL, already have a simplistic way to SUBMIT a file to the default batch queue. Use the RMS "DISPOSE=SUBMIT" option.

In DCL it looks like this:

$ OPEN/READ cmd "MYPROC.COM"
$ CLOSE/DISPOSITION=SUBMIT cmd

You can even submit a job to a queue on a remote system using DECnet. See the docs for your favourite language for the exact syntax.
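For the remote case, the DCL sketch would be along these lines (node name illustrative; the remote FAL does the actual submission):

$ OPEN/READ cmd REMNOD::DISK$USER:[MYDIR]MYPROC.COM
$ CLOSE/DISPOSITION=SUBMIT cmd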

There are lots of other higher priority things that scarce VMS engineering resources could be spent on.
A crucible of informative mistakes
Dev Shah_1
New Member

Re: Delaying DCL Commands

Before I begin, I want to apologize for not responding sooner. I have been on vacation the past few days and it seems that update notifications are not being sent to my email properly, otherwise I would have responded sooner.

Secondly, I want to thank all of you for sharing your knowledge with me. I've read through all your wonderful suggestions, and had some discussion points of my own.


1. I understand that cleaning files at startup has many advantages. However, the material in question is highly sensitive (user profiles & user log files) and CANNOT be left on the disks after shutdown for multiple security reasons.

2. I agree with David that a LIB$SUBMIT would make it way easier to get information back, which is why the SYNCHRONIZE command (THANK YOU HEIN) is wonderful for submitting jobs on the host node. However, it does not (or I haven't figured out how to make it) work with SPAWN. The code uses a "$ SPAWN/NOWAIT SUBMIT/REMOTE" to submit the job to the remote nodes, which I can't use SYNCHRONIZE with... somebody please correct me if I'm wrong.

3. I really like the SYSMAN idea. However, I'm a little confused by people's responses. Bob said that SYSMAN would be completely synchronous, in which case, why do I still need a queue with a job limit of 1? Also, I haven't looked at the code for SYSMAN, but in the case of shutdown, is the "make sure you shut me down last" part apparent? Also, the "do delete *.txt" command seems a little too simple for what I'm trying to do. Does SYSMAN allow "do @delete.com"?

Note: I still have to talk to the people above my pay grade to see whether or not we can even submit a batch job on a different queue than SYS$BATCH.

Also, to John Gillings... using RSH/SSH is brilliant. I haven't looked at the code yet, but I will be. I was just impressed with the out-of-the-box thinking.

Once again, thank you all for your help. I will be paying closer attention to the thread now, so please don't go away. :)

- Dev
GuentherF
Trusted Contributor

Re: Delaying DCL Commands

I might be missing something. What about submitting A.COM, which does the cleanup, and having A.COM itself submit B.COM (or run SYSMAN) to the node-specific execution queues?
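Roughly, the tail end of A.COM would then look like this (the queue name is illustrative):

$! last lines of A.COM, once the cleanup has finished
$ SUBMIT/NOPRINT/QUEUE=NODEA_BATCH B.COM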

/Guenther
Hoff
Honored Contributor

Re: Delaying DCL Commands

It can be inferred that other critical details have been omitted here.

Why? If the data here is sufficiently sensitive that a wait through to a reboot is a design consideration, then that data likely needs to be expunged just as soon as it is no longer required by the application. In the application. Not at shutdown.

And data that is sufficiently sensitive to exposure (also) needs to be encrypted. And application and configuration management also becomes a factor.

And it is likely you know this, which then implies there are application-specific considerations here that are leading you to, bluntly, retrofit hacks to paper over security flaws.

Yet we're looking at after-the-fact DCL hackery.

So.

What (other) details are being omitted? Are the disks encrypted? Are the applications periodically audited and checked for integrity? Does the site have access to and have the tools necessary to make changes to the applications, or is this vendor-provided closed-source software?

This really smells like a nasty problem that some auditor noticed and didn't fully understand; that there's a rather more fundamental design bug lurking here.