
queues+clustering

 
SOLVED
nipun_2
Regular Advisor

queues+clustering

Hello all,
I recently added a third node (an EV68-based DS25 Alpha station) to a common-environment cluster running 7.3-1. (For all of you who were answering my previous questions about the blue screen: the new machine needed graphics drivers.)

So currently we have:
DS25 - main node
XP1000 - satellite node
EV68 - satellite node (new)

I can now clearly see DECwindows and the software applications on the new EV68 node. However, when I run evaluations (and other such processing-heavy applications) on the EV68, it does not use its own CPU but those of the other two nodes.

I noticed this when I looked at the output of the MONITOR CLUSTER command.

Can anyone please guide me on how to check the current queuing setup and how to add my node to it? Please mention the relevant *.COM files, and their paths if possible.

Thanks in advance
Nipun
28 REPLIES 28
Uwe Zessin
Honored Contributor

Re: queues+clustering

Hello Nipun,
it's great to hear that you got your system working.

To get an overview over all queues:
$ show queue *

If you're interested in batch queues only:
$ show queue /batch *

For details, add the /FULL qualifier; if you would like to see all jobs from all users, add the /ALL qualifier.
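For example, combining the two to inspect every batch queue with all its settings and every job in it, regardless of owner:

$ show queue /batch /full /all *

The /FULL output shows, among other things, the job limit and the /ON node of each queue, which is exactly what you need to see which node your jobs are landing on.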

I usually use a queue name that has the nodename as part of the queue name, e.g.:
$ initialize/queue/batch/job_limit=2 athena_batch /on=ATHENA::

Then, as part of the system startup (I put it in SYS$MANAGER:SYSTARTUP_VMS.COM):
$ start /queue athena_batch

Jobs can easily be sent to this queue with:
$ submit /queue=athena_batch job.com
Wim Van den Wyngaert
Honored Contributor

Re: queues+clustering

Also check if the failover of the queue manager is OK. If not, a node failure may stop all queue activity.

$ sh que/man/fu
should show all 3 servers in the /on part.
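For illustration (assuming your node names match the machine names mentioned above), healthy output would look roughly like:

$ show queue/manager/full
Master file: SYS$COMMON:[SYSEXE]QMAN$MASTER.DAT;
Queue manager SYS$QUEUE_MANAGER, running, on DS25::
  /ON=(DS25,XP1000,EV68)

If one of the three nodes is missing from the /ON list, the queue manager cannot fail over to it.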

To correct :
$ start/que/man/on=(n1,n2,n3)
or
$ start/que/man/on=(*)
but if you have workstations or quorum nodes in the cluster, the first form is better.

Wim
Joseph Huber_1
Honored Contributor
Solution

Re: queues+clustering



In general you should have, on every cluster manager, at least one dedicated queue that has /(AUTOSTART_)ON=mynode. This way you can have whatever programs you like running on that node.
You need such a queue, e.g., for jobs fired from SYSTARTUP_VMS.COM.
My practice then is to have a /GENERIC queue which includes all of these node-specific queues:
$ init/queue/batch/generic=(nodea_batch,nodeb_batch,nodec_batch) sys$batch

Then jobs submitted to the generic queue will run on any node selected by the queue manager.
This generic queue can of course have any name other than sys$batch: it is just convenient to have a sys$batch queue for submit without a /QUEUE=specific.

On the other hand, you may choose to have jobs execute by default on the node where they are SUBMITted; in that case, define a /SYSTEM logical name SYS$BATCH in SYSTARTUP_VMS.COM pointing to the node-specific queue:
$ define/system sys$batch 'f$getsyi("NODENAME")'_BATCH
http://www.mpp.mpg.de/~huber
Ian Miller.
Honored Contributor

Re: queues+clustering

I basically do what Joseph does: a queue called nodename_BATCH, with SYS$BATCH defined on each system to point to it. Often there are other special-purpose queues, e.g. queues with a job limit of 1 to co-ordinate jobs, and queues with different working-set limits or base priorities and so on.
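As a sketch of such special-purpose queues (the queue and node names here are invented):

$ initialize /queue /batch /job_limit=1 athena_serial /on=ATHENA::
$ initialize /queue /batch /job_limit=4 /wsextent=20000 /base_priority=2 athena_lowpri /on=ATHENA::
$ start /queue athena_serial
$ start /queue athena_lowpri

The /JOB_LIMIT=1 queue serializes jobs that must never overlap; /WSEXTENT and /BASE_PRIORITY keep low-priority batch work from competing with interactive users.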
____________________
Purely Personal Opinion
Joseph Huber_1
Honored Contributor

Re: queues+clustering

To correct a typo in my previous response:

on every cluster manager one dedicated
should of course read as:
on every cluster MEMBER one dedicated


http://www.mpp.mpg.de/~huber
Wim Van den Wyngaert
Honored Contributor

Re: queues+clustering

I would say cluster SERVER, with workstations and quorum nodes excluded.
Wim
Joseph Huber_1
Honored Contributor

Re: queues+clustering

Wim, well, it's just personal taste:
I have a dedicated queue on every member node in the cluster. At system startup I submit some utility/package startups as batch jobs to make system startup shorter, especially on a workstation, where I don't want to wait any longer than necessary before I can log in.
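A sketch of that pattern (the command-file name is hypothetical), placed in SYSTARTUP_VMS.COM after the node's own queue has been started:

$ submit /queue='f$getsyi("NODENAME")'_BATCH sys$manager:package_startup.com

The startup procedure continues immediately while the package initialization runs in batch on the booting node.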
http://www.mpp.mpg.de/~huber
Wim Van den Wyngaert
Honored Contributor

Re: queues+clustering

Joseph: ignore my comment. I read too fast.
Wim
Jan van den Ende
Honored Contributor

Re: queues+clustering

Nipun,

Maybe it is just a matter of taste, but personally I do not like the idea of SYS$BATCH as the name or alias for a clusterwide queue.
And here is why: during bootstrap it is just too convenient to KNOW that any job submitted without an explicit queue specification will run ON THE BOOTING NODE.
Otherwise you will have to check (and re-check at every new release and new software product) that there is NOT any default submit command!

That is the reason I 100% agree with Ian: on each node a /SYSTEM logical name SYS$BATCH for the queue that is bound to that node.

OTOH, the concept of dedicated queues can (maybe; should) be applied generously.

We run multiple applications in the cluster, and each application has at least one queue. Each queue is owned by the application's Resource Identifier, and the respective application managers have management control over their queues.
A very pleasant way to delegate a lot of the standard work to those who have the functional application knowledge, so only the really technical issues still reach us.
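A sketch of that setup (the identifier and queue names are invented; READ, SUBMIT, MANAGE and DELETE are the access types of the queue security class):

$ initialize /queue /batch /on=NODEA:: appx_batch
$ set security /class=queue /acl=(identifier=APPX_ADMIN, access=read+submit+manage+delete) appx_batch

Holders of the APPX_ADMIN identifier can then manage jobs on their own queue without needing full operator privilege.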

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.