Rene Mendez_4
Super Advisor

atjobs batch in cluster with ServiceGuard

Hello,

Configuration:
HP-UX 11.11
ServiceGuard 11.16

Node 1: lima
Node 2: apple

On Node 1 the application schedules its batch jobs with at as user informix, so /var/spool/cron/atjobs contains many files, for example:

1178523000.a
1178528400.a

I need to move these at jobs to Node 2.

I copied the files into /var/spool/cron/atjobs on Node 2, but the jobs are not found there.

Regards
Rene

5 REPLIES
Rasheed Tamton
Honored Contributor

Re: atjobs batch in cluster with ServiceGuard

Hello,

Did you check the /var/adm/cron/at.allow and at.deny files?

man at
man queuedefs
man cron
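
For example, if /var/adm/cron/at.allow exists, only users listed in it may use at. A quick check (assuming the jobs belong to user informix, as in the original post) could be:

ls -l /var/adm/cron/at.allow /var/adm/cron/at.deny
grep -x informix /var/adm/cron/at.allow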

Regards.
Emil Velez
Honored Contributor

Re: atjobs batch in cluster with ServiceGuard

Why not just put the at and cron jobs on both systems, and add some script logic so that a job only runs if the package is running on the local node, using information from cmviewcl?
Tor-Arne Nostdal
Trusted Contributor

Re: atjobs batch in cluster with ServiceGuard

Hi Rene,
If you look into the at job files you will see that variables may be exported in them, which is a good reason NOT to simply copy these jobs:

more /var/spool/cron/atjobs/*.a
-------------
We solved this issue as in Emil's reply.

The at/cron jobs check whether the package is running on "this host" before they execute the job.

See the example below (with a bit of extra coding to make it more readable):

#!/bin/sh
PACKAGE="mcsgPROD"
RUN_NODE="$(/usr/sbin/cmviewcl -p $PACKAGE | tail -1 | awk '{print $5}')"
THIS_HOST="$(hostname)"

if [ "$RUN_NODE" = "$THIS_HOST" ]; then
    : # execute my job here
else
    # exit since this is not the active node
    exit 0
fi

-------------
You can also place the jobs you want to run on a filesystem that belongs to the package.
Then the crontab entry only executes them if the file exists.
Example: run a check every working day at 06:30 if the check script is available (placed on a ServiceGuard filesystem):

30 06 * * 1,2,3,4,5 [[ -r /my/MCSG/fs/check_script ]] && /my/MCSG/fs/check_script
-------------
If you still want to copy the jobs, you must ensure that the two machines have an identical setup: users (uid/gid), software, paths, etc. You must also perform the copy in a way that maintains the file ownership and permissions.
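
As a rough sketch of that last point (the exact commands and ownership are assumptions, not something verified on this cluster), the copy could be done with rcp -p so that modes and timestamps survive, followed by fixing the owner on the target; the node name apple and the spool path are taken from the post above. Note that, as Rudy describes further down, copying the spool files alone may still not be enough and the jobs may need to be resubmitted with at(1).

# run as root on lima; adjust the owner to match what ls -l shows on the source node
cd /var/spool/cron/atjobs
rcp -p *.a apple:/var/spool/cron/atjobs/
remsh apple "chown informix /var/spool/cron/atjobs/*.a"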

/Tor-Arne
I'm trying to become President of the state I'm in...
Tor-Arne Nostdal
Trusted Contributor

Re: atjobs batch in cluster with ServiceGuard

Just a small remark to my post:
There should be no line break in the crontab entry!

I have shortened it to show what I mean.
30 06 * * 1 [[ -r check_script ]] && check_script

/2r
I'm trying to become President of the state I'm in...
Rudy Williams
Regular Advisor

Re: atjobs batch in cluster with ServiceGuard

Rene--

I was working on an SG cluster that required the at jobs to be copied over upon a failure. I found that copying the job files in the queue was not enough; I had to determine the scheduled time of each job and reschedule it on the new node.

A great deal of logic is needed to write the scripts that do this. Upon a failure you need to copy the at jobs to a shared filesystem so that the adoptive node can schedule them, and you also need to determine at what time each job should be scheduled. A sketch of the staging step is shown below.
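
As a rough sketch of that staging step (paths are examples only; it relies on the at spool file name encoding the scheduled run time as seconds since the epoch, e.g. 1178523000.a from the post above), something along these lines could run when the package halts on the failing node:

#!/bin/sh
# Hypothetical sketch: stage pending at jobs onto the package's shared
# filesystem so that the adoptive node can reschedule them with at(1).
STAGE_DIR=/my/MCSG/fs/atjobs_staged   # example path on the package filesystem

mkdir -p "$STAGE_DIR"
for job in /var/spool/cron/atjobs/*.a
do
    [ -f "$job" ] || continue
    # The file name (minus the queue suffix) is the scheduled run time
    # in seconds since the epoch.
    epoch=`basename "$job" .a`
    cp -p "$job" "$STAGE_DIR/"
    echo "$epoch $job" >> "$STAGE_DIR/schedule.txt"
done

On the adoptive node you would then convert each recorded epoch back into a time specification and resubmit the corresponding job body with at(1); how to do that conversion depends on what tools are available on the system, so it is not shown here.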

Now when you fail back, you should make sure that any jobs are rescheduled on the primary node too. You might also find it a good idea to write logic into the jobs so that they fail early if a specific directory (on the package's shared filesystem) is not available.

Rudy