12-11-2009 07:47 AM
Delaying DCL Commands
I have a program that I want to do the following: execute File_A.com, wait until File_A.com has finished, and then execute File_B.com.
Currently the program does a SPAWN/NOWAIT SUBMIT of File_B.com to the remote nodes and then a SUBMIT of File_B.com to itself. This sends a command to all remote nodes to shut down and then gives the host node the same command.
I want to execute File_A.com (which deletes the temporary folders, and takes anywhere from one to two minutes) on the remote nodes and wait for it to finish deleting before submitting File_B.com.
I've tried SPAWN/WAIT and different forms of SUBMIT without success. Basically, File_A.com is submitted to run, but File_B.com is submitted at the same time and just shuts the computer down before anything gets deleted. I'm new to DCL.
Is there any way I can code this without hard-coding a WAIT 00:02:00 into the procedure?
I want to thank everyone in advance for their help.
- Dev
12-11-2009 08:29 AM
Re: Delaying DCL Commands
Check out the $ SYNCHRONIZE command.
Also, you may want to call the job controller directly to submit a job, but admittedly a LIB$SPAWN call is much easier.
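A minimal sketch of the SYNCHRONIZE approach at DCL level (the file names are assumptions; SUBMIT defines the local symbol $ENTRY with the entry number of the job just queued):
$ SUBMIT/NOPRINT FILE_A.COM    ! queues the cleanup job; defines $ENTRY
$ SYNCHRONIZE/ENTRY='$ENTRY'   ! blocks until that batch entry completes
$ @FILE_B.COM                  ! now safe to run
SYNCHRONIZE returns only once the entry finishes, so no fixed WAIT is needed.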
Hmmm... it would perhaps be nice to have a LIB$SUBMIT kind of call which takes a SUBMIT or PRINT string, as one would use at DCL level, and parses it into a corresponding SYS$SNDJBC call.
Does anyone have anything handy? Freeware?
Hein
12-11-2009 08:52 AM
Re: Delaying DCL Commands
Better still, implement your application clean-up sequence in the SYSTARTUP_VMS.COM procedure, and then use SYSMAN to send the SHUTDOWN command to all hosts.
12-11-2009 11:09 AM
Re: Delaying DCL Commands
12-12-2009 12:41 AM
Re: Delaying DCL Commands
Welcome to DCL.
What I'd do in this case is code what I want into a command procedure which would call SYSMAN.
SYSMAN is able to do things on remote nodes. It requires a password for the remote nodes if they're not in your present login environment (i.e. it shouldn't ask in a cluster, but if you're connecting to another node that's not clustered with your present one then it will ask for a password.)
You'd need to do something like this:
$ MC SYSMAN
SYSMAN> set environment/node=(nodea,nodeb,nodec,noded)
SYSMAN> do delete disk$user:[mydirectory]temp*.txt;*
SYSMAN> shutdown node/min=0/reboot_check/invoke/noauto
SYSMAN> exit
If I've remembered my syntax right, the /NOAUTO should prevent the node from automatically rebooting, i.e. it will shut down rather than reboot.
/MIN=0 says, "do it now".
/REBOOT_CHECK tells the systems to check that they have the basic files VMS needs to reboot. It doesn't check for user files, so it doesn't guarantee that the system will start up correctly; it's just a best-effort check that VMS itself will start.
Alternatively, submit jobs to a batch queue with a job limit of one so that the shutdown job will have to wait behind the delete job.
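For that last suggestion, a sketch (the queue name CLEANUP$BATCH is an assumption):
$ INITIALIZE/QUEUE/BATCH/JOB_LIMIT=1/START CLEANUP$BATCH
$ SUBMIT/QUEUE=CLEANUP$BATCH FILE_A.COM   ! runs first
$ SUBMIT/QUEUE=CLEANUP$BATCH FILE_B.COM   ! waits behind FILE_A.COM
With a job limit of one the queue runs a single job at a time, so the shutdown job cannot start until the delete job completes.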
12-12-2009 06:40 AM
Re: Delaying DCL Commands
I too would use SYSMAN's DO command, following a SET ENVIRONMENT.
A batch queue on each node would leave open a variety of race conditions if the other node were shut down or shutting down; the request could simply disappear in those cases. SYSMAN, by contrast, is entirely synchronous.
As a safety, the clean-out step should also be included in the application's STARTUP processing on system reboot.
- Bob Gezelter, http://www.rlgsc.com
12-12-2009 08:12 AM
Re: Delaying DCL Commands
Cleanup at startup time catches all the cases.
If you have restrictive startup time constraints, then look to move the scratch directory (directories?) and cache directories to the side via RENAME and create and start with a new scratch directory at startup, and then clean up the older scratch directories once the application is running and has some cycles available for the task.
Better yet, perform periodic cleanup of the caches or scratch files while the application runs, as some OpenVMS application environments operate continuously for days or weeks and can (depending on the application) build up cruft or file versions at run time.
Cleanup at shutdown, though, is not particularly effective.
I'd use the site-specific SYSTARTUP_VMS.COM procedure or the application startup procedure (mumble_STARTUP.COM, usually), or directly integrate the file management tasks into the application.
12-13-2009 02:25 PM
Re: Delaying DCL Commands
You're correct to avoid the simplistic "WAIT 2:00" method. It's bound to fail at some point.
As others have suggested, batch jobs with SYNCHRONIZE, or SYSMAN, are definite possibilities. You can also use RSH or SSH, which are synchronous.
Another option is the lock manager. If you're on V8.3 or higher, you can use a shared indexed file and READ/WAIT to implement a dead man lock.
So your procedure A.COM locks the record and releases it when it's finished; B.COM can READ/WAIT the same record. Using the node name as part of the record key, this scales nicely to multiple nodes.
I've included a procedure that automatically distributes cleanup and shutdown across a cluster, using RMS locks. Note that it assumes TCPIP proxies across the cluster to use RSH. Because RSH is synchronous, it's not actually necessary to use the locks. If you want to make the jobs on each node execute in parallel, modify the procedure to run an asynch process on the remote node, then you will need the locking.
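A rough DCL sketch of the dead man lock idea (the lock file name, key, and procedure names here are hypothetical, and it assumes the V8.3+ READ/WAIT support described above):
$! A.COM: hold the record lock while cleaning up
$ node = F$GETSYI("NODENAME")
$ OPEN/READ/WRITE/SHARE=WRITE LCK CLUSTER_LOCKS.DAT
$ READ/KEY='node' LCK REC        ! acquires the record lock
$ @FILE_A                        ! cleanup runs while the lock is held
$ CLOSE LCK                      ! releases the lock
$!
$! B.COM: wait for A.COM to release the record
$ node = F$GETSYI("NODENAME")
$ OPEN/READ/WRITE/SHARE=WRITE LCK CLUSTER_LOCKS.DAT
$ READ/WAIT/KEY='node' LCK REC   ! blocks until the record is unlocked
$ CLOSE LCK
$ @FILE_B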
12-13-2009 05:57 PM
Re: Delaying DCL Commands
If you do that then it's just a cluster shutdown from the node where you are.
Further, what's your reason for deleting these files prior to shutdown?
There could be several different reasons, but I suggest that deleting during startup (perhaps via a batch job) could be an easier option; deleting the files after startup might also give you the chance to review them before they are deleted.
12-14-2009 05:06 AM
Re: Delaying DCL Commands
>Does anyone have anything handy? Freeware?
23 years ago, before queues supported ACLs, I wrote a substitute print command that effectively did that. It's pretty straightforward to map the job/file attribute qualifiers to SJC item codes, but you have to process wildcard file specifications and the selection-criteria qualifiers (/SINCE, /BEFORE, etc.) yourself.
12-14-2009 05:22 AM
Re: Delaying DCL Commands
What would a LIB$SUBMIT("options&parameters") offer over LIB$SPAWN("SUBMIT options&parameters"), except speed (it would not have to create a subprocess)?
12-14-2009 06:03 AM
Re: Delaying DCL Commands
Existing job managers include a cron port and the kronos tool, plus various commercial process-management tools. Process control and sequencing, and operator notification and control, are all comparatively weak areas.
Regardless, new routines such as LIB$SUBMIT and LIB$COPY, and yes, a job manager, would all be nice.
12-14-2009 06:53 AM
Re: Delaying DCL Commands
Joseph,
David picked up on an item I slipped in.
Indeed, the purpose of a LIB$SUBMIT would be:
1) performance: resource-usage reduction by avoiding the SPAWN
But also:
2) flexibility: a SPAWN requires a 'normal' full process in order to be used
3) reliability: fewer moving parts
4) removing clutter from accounting
Hein
12-14-2009 07:44 AM
Re: Delaying DCL Commands
Hein, to parse the user command, you'd want to use CLI$DCL_PARSE, so you'd still need a 'full' process with DCL.
12-14-2009 02:04 PM
Re: Delaying DCL Commands
Parsing a DCL SUBMIT command would have to be done using command tables, to prevent maintenance and divergence issues - SUBMIT is a non-trivial command. But that means you'd have the restriction of having to have a CLI. It also means the code required to build item lists is replaced by (possibly equally complex) code constructing a command string.
>The main advantage to LIB$SUBMIT would be
>it is easier to get information back,
>such as the queue name and entry number.
How? With an item list?
>1) performance / resource usage reduction
>by avoiding the SPAWN
I could see this argument on a VAX-11/750, when SPAWN really was expensive. But are you really going to be doing that many SUBMITs? These days you'd be lucky to win back the time it takes you to code a more complex mechanism just to avoid a SPAWN.
A simpler LIB$SUBMIT might have value, but most languages, including DCL, already have a simplistic way to SUBMIT a file to the default batch queue. Use the RMS "DISPOSE=SUBMIT" option.
In DCL it looks like this:
$ OPEN/READ cmd "MYPROC.COM"
$ CLOSE/DISPOSITION=SUBMIT cmd
You can even submit a job to a queue on a remote system using DECnet. See the docs for your favourite language for the exact syntax.
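For the remote case, the same trick might look like this (the node name REMNOD and suitable proxy access are assumptions):
$ OPEN/READ CMD REMNOD::MYPROC.COM
$ CLOSE/DISPOSITION=SUBMIT CMD   ! submits MYPROC.COM on node REMNOD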
There are lots of other higher priority things that scarce VMS engineering resources could be spent on.
12-17-2009 10:50 AM
Re: Delaying DCL Commands
I want to thank all of you for sharing your knowledge with me. I've read through all your wonderful suggestions, and I have some discussion points of my own.
1. I understand that cleaning files at startup has many advantages. However, the material in question is highly sensitive (user profiles and user log files) and CANNOT be left on the disks after shutdown, for multiple security reasons.
2. I agree with David that LIB$SUBMIT makes it way easier to get information back, which is why the SYNCHRONIZE command (THANK YOU, HEIN) is wonderful for submitting jobs on the host node. However, it does not work (or I haven't figured out how to make it work) with SPAWN. The code uses a "$ SPAWN/NOWAIT SUBMIT/REMOTE" to submit the job to remote nodes, which I can't use SYNCHRONIZE with... somebody please correct me if I'm wrong.
3. I really like the SYSMAN idea. However, I'm a little confused by people's responses. Bob said that SYSMAN would be completely synchronous; in that case, why do I still need a job limit of one? Also, I haven't looked at the code for SYSMAN, but in the case of shutdown, is the "make sure you shut me down last" behavior apparent? Also, the "do delete *.txt" command seems a little too simple for what I'm trying to do. Does SYSMAN allow "do delete.com"?
Note: I still have to talk to the people above my pay grade to see whether or not we can even submit a batch job on a queue other than SYS$BATCH.
Also, to John Gillings: using RSH/SSH is brilliant. I haven't looked at the code yet, but I will. I was just impressed with the out-of-the-box thinking.
Once again, thank you all for your help; I will be paying closer attention to the thread now. So please don't go away. :)
- Dev
12-18-2009 07:49 PM
Re: Delaying DCL Commands
/Guenther
12-19-2009 04:57 AM
Re: Delaying DCL Commands
Why? If the data here is sufficiently sensitive that a wait through to a reboot is a design consideration, then that data likely needs to be expunged just as soon as it is no longer required by the application. In the application. Not at shutdown.
And data that is sufficiently sensitive to exposure (also) needs to be encrypted. And application and configuration management also becomes a factor.
And it is likely you know this, which then implies there are application-specific considerations here that are leading you to, bluntly, retrofit hacks to paper over security flaws.
Yet we're looking at after-the-fact DCL hackery.
So.
What (other) details are being omitted? Are the disks encrypted? Are the applications periodically audited and checked for integrity? Does the site have access to and have the tools necessary to make changes to the applications, or is this vendor-provided closed-source software?
This really smells like a nasty problem that some auditor noticed and didn't fully understand; that there's a rather more fundamental design bug lurking here.