Community Home > Servers and Operating Systems > Operating Systems > Operating System - OpenVMS > Re: job is executing in node2 in cluster.
01-01-2009 08:42 PM
job is executing in node2 in cluster.
I'm managing a VMS cluster with 2 nodes (say node1 and node2). I defined a scheduler job that has been running every day for the last 2 months; it collects/generates the user profile listing into a file named node1.txt and sends that file to a remote server via SCP as node1.txt.
Suddenly this job started generating the file as node2.txt instead of node1.txt, and sending it as node2.txt.
No changes have been made to the system.
OS: OpenVMS V8.2
01-01-2009 09:51 PM
Re: job is executing in node2 in cluster.
Do you mean a batch job? As in SUBMIT?
> Suddenly this job [...]
So, if I understand this, you have a command procedure, which you're hiding, you run it in some unknown way, and you want someone to tell you why it does what it does?
Good luck. My psychic powers are too weak for me to guess what's happening, based on no useful information.
> No changes have been made to the system.
Even knowing nothing, I'd guess that _something_ has changed, or else you wouldn't be here asking this question.
01-01-2009 10:46 PM
Re: job is executing in node2 in cluster.
1. We are using the CA Scheduler product to schedule the jobs.
2. We are using a generic command procedure to collect the user profile listing, with node = f$getsyi("NODENAME") in the command procedure.
3. Until yesterday the job was creating the file node1.txt, but now the output file shows up as node2.txt.
My question:
===========
If the job is running on node1, why is the output file node2.txt being created? Both nodes are in the cluster.
If you need more information, please let me know. Thanks.
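Presumably the procedure builds the output file name from that lexical. A minimal sketch of the pattern described above (the listing step and remote-copy step are hypothetical placeholders, not the actual procedure):

```dcl
$! Sketch only: build the output file name from the executing node's name.
$ node = F$GETSYI("NODENAME")            ! returns e.g. "NODE1" or "NODE2"
$ file = F$EDIT(node,"LOWERCASE") + ".txt"
$! ... generate the user profile listing into 'file' here ...
$! ... then copy 'file' to the remote server via SCP ...
```

If the file comes out as node2.txt, F$GETSYI was evaluated on node2; the lexical reports the node actually executing the procedure, not the node the job was submitted from.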
01-02-2009 12:07 AM
Re: job is executing in node2 in cluster.
I have never used CA Scheduler, but with VMS queues it is possible to create generic queues, which can then feed jobs to any execution queue specified for the generic queue. Most likely, CA Scheduler has the same capability to schedule a job on any node. For many things it does not matter which node a job runs on, so for availability, scheduling a job on a generic queue makes it more likely that a machine will be available to run it.
If you have accounting enabled, you can see where the job actually ran.
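As a sketch, the ACCOUNTING utility can select batch-process records so you can see where and when the job ran (the /SINCE value here is illustrative; note that accounting files are typically per-node, so you may need to check on both nodes):

```dcl
$! List recent batch-process termination records in full detail.
$ ACCOUNTING /TYPE=BATCH /SINCE=YESTERDAY /FULL
```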
If you want the job to run on a specific node, you will have to read the CA scheduler documentation to see how that can be done.
Just because it happened to pick node2 to run on is no guarantee that it will always pick that node. Perhaps the load has changed and the scheduler decided it was best to run the job on node1 the first two months, but now it thinks node2 is better.
Jon
01-02-2009 12:10 AM
Re: job is executing in node2 in cluster.
That should work correctly. (I'll assume that the file name is created using this "node" variable.)
> If the job is running on node1, why is the output file node2.txt being created? Both nodes are in the cluster.
_If_ the job were running on node1, then I'd expect "node" to be "node1". If it comes out as "node2", then I'd tend to believe that the job is really running on node2. Do you have any good reason to believe that it's really running on node1?
Have you looked at what's happening, and where, while the job is running?
I know nothing about CA Scheduler, but it might be smart enough to run a job on any node in the cluster. Perhaps it looks for the node with the most free time, and node1 is now busier than it was before.
Step 1: Find out where the job actually runs.
01-02-2009 01:35 AM
Re: job is executing in node2 in cluster.
The message coming back is: JOB completed from NODE2.
But again, why is the job running from NODE2? It should run from NODE1, as it did before.
Note: this is a production server; we can't restart the scheduler.
01-02-2009 02:37 AM
Re: job is executing in node2 in cluster.
It may sound like semantics, but the question is: Where did the job run? It is not: Where did it report that it ran?
Check the actual log file and accounting logs to determine where the job actually executed.
One may ask: How could this happen? The simple answer is that I have seen various jobs which take their node name as a parameter (which is fixed at the time of submission) rather than determining it using the DCL lexical function F$GETSYI or the analogous system service or RTL calls. This leads to incorrect file names and misleading messages.
Since the actual command procedure is also a mystery (at this time), there is no way to know if the message reporting execution on NODE2 is correct or incorrect.
- Bob Gezelter, http://www.rlgsc.com
01-02-2009 04:06 AM
Re: job is executing in node2 in cluster.
The info I am missing is a full specification of the queue.
In a cluster environment, it is usual to specify that a queue may execute on multiple (by default: ALL) nodes.
At a certain moment in time it executes on a certain node, but for various reasons the queue manager may fail queues over to another node.
This MIGHT have happened here also.
Please provide the output of a
$ SHOW QUEUE/FULL
and we might rule this out, or call this a reasonable explanation.
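For context, this is what a generic-queue setup can look like; the queue and node names below are hypothetical illustrations, not the poster's configuration:

```dcl
$! Two node-specific batch execution queues, plus one generic queue
$! that can feed either of them (all names are made up for illustration).
$ INITIALIZE /QUEUE /BATCH /ON=NODE1:: NODE1_BATCH
$ INITIALIZE /QUEUE /BATCH /ON=NODE2:: NODE2_BATCH
$ INITIALIZE /QUEUE /GENERIC=(NODE1_BATCH,NODE2_BATCH) CLUSTER_BATCH
$! A job submitted to the generic queue may execute on either node:
$ SUBMIT /QUEUE=CLUSTER_BATCH COLLECT_PROFILE.COM
```

SHOW QUEUE/FULL on the queues involved would reveal whether such a generic or multi-node arrangement is in play.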
Proost.
Have one on me.
jpe
01-02-2009 06:55 AM
Re: job is executing in node2 in cluster.
> It should run from NODE1, as it did before.
Why, exactly, should it run on any particular node? When did you tell it where to run?
Don't tell me how it "should work"; tell the fellow (or program) who runs the job.
01-02-2009 07:39 AM
Re: job is executing in node2 in cluster.
If you have a homogeneous cluster, it should not matter which node a job runs on. If the job is required to run on a specific node in the cluster, you may need to specify that requirement in the scheduler job definition, e.g. by including something like /NODE=xxx.
Volker.
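For plain VMS batch queues (as opposed to whatever CA Scheduler supports), the equivalent of pinning a job to a node is submitting directly to a node-specific execution queue; the queue and procedure names here are hypothetical:

```dcl
$! Pin the job to node1 by targeting its execution queue directly
$! (NODE1_BATCH and COLLECT_PROFILE.COM are hypothetical names).
$ SUBMIT /QUEUE=NODE1_BATCH COLLECT_PROFILE.COM
```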