queues+clustering
04-20-2005 07:19 AM
I recently added a 3rd node (an EV68-based AlphaStation DS25) to a common-environment cluster running OpenVMS 7.3-1. (For all of you who were answering my previous question about the blue screen: it needed graphics drivers for the new machine.)
So currently we have:
DS25 - main node
XP1000 - satellite node
EV68 - satellite node (new)
Now I can clearly see DECwindows and the software applications on the new EV68 node. However, when I run evaluations (or other such high-processing applications) on the EV68, they do not use its own CPU but the CPUs of the other two nodes.
I saw that when I looked at the MONITOR CLUSTER command.
Can anyone please guide me on how I can check the current queuing setup and how I should add my node to it? Please mention the *.COM files, as well as their paths, if possible.
Thanks in advance
Nipun
04-20-2005 07:56 AM
Re: queues+clustering
It's great to hear that you got your system working.
To get an overview over all queues:
$ show queue *
If you're interested in batch queues only:
$ show queue /batch *
For details, add the /FULL qualifier, and if you'd like to see all jobs from all users, use the /ALL qualifier.
I usually use a queue name that has the nodename as part of the queue name, e.g.:
$ initialize/queue/batch/job_limit=2 athena_batch /on=ATHENA::
Then, as part of the system startup (I put it in SYS$MANAGER:SYSTARTUP_VMS.COM):
$ start /queue athena_batch
Jobs can easily be sent to this queue with:
$ submit /queue=athena_batch job.com
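To double-check where the queue executes (using the ATHENA example names from above), you can then look at:
$ show queue athena_batch /full
The /ON= field in the /FULL output should name the node, and the queue should show as started rather than stopped.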
04-20-2005 06:19 PM
Re: queues+clustering
$ sh que/man/fu
should show all 3 servers in the /on part.
To correct :
$ start/que/man/on=(n1,n2,n3)
or
$ start/que/man/on=(*)
but if you have satellite or quorum stations the first form is better.
Wim
04-20-2005 08:12 PM
Solution
In general you should have on every cluster manager at least one dedicated queue which has /(AUTOSTART_)ON=mynode. This way you can have whatever programs running on this node.
You need such a queue, e.g. for jobs fired from SYSTARTUP_VMS.COM.
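As a sketch of the autostart variant (MYNODE and MYNODE_BATCH are placeholder names):
$ initialize /queue /batch /job_limit=2 /autostart_on=MYNODE:: mynode_batch
$ start /queue mynode_batch
$ enable autostart /queues   ! typically done once per node at startup
The queue then becomes active as soon as autostart is enabled on the node, and if you list several nodes in /AUTOSTART_ON it can fail over to the next one.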
My practice then is to have a /GENERIC queue, which includes all of these node-specific queues:
$ init/queue/batch/generic=(nodea_batch,nodeb_batch,nodec_batch) sys$batch
Then jobs submitted to the generic queue will run on whichever node the queue manager selects.
This generic queue can of course have any name other than SYS$BATCH; it is just convenient to have a SYS$BATCH queue so you can SUBMIT without an explicit /QUEUE= qualifier.
On the other hand, you may choose to have jobs execute by default on the node where they are SUBMITted; in that case, define a /SYSTEM logical name SYS$BATCH in SYSTARTUP_VMS.COM pointing to the node-specific queue:
$ define/system sys$batch 'f$getsyi("NODENAME")'_BATCH
04-20-2005 08:23 PM
Re: queues+clustering
Purely Personal Opinion
04-20-2005 08:51 PM
Re: queues+clustering
on every cluster manager one dedicated
should of course read as:
on every cluster MEMBER one dedicated
04-20-2005 09:18 PM
Re: queues+clustering
I have a dedicated queue on every member node in the cluster: at system startup I submit some of the utility/package startups as a batch job to make system startup shorter. Especially on a workstation, you don't have to wait as long before you can log in.
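A minimal sketch of that pattern in SYS$MANAGER:SYSTARTUP_VMS.COM (POST_STARTUP.COM is a hypothetical name for the deferred startup work):
$ submit /queue='f$getsyi("NODENAME")'_BATCH /noprint sys$manager:post_startup.com
The boot procedure then completes, and logins become possible, while the batch job finishes the utility and package startups in the background.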
04-21-2005 07:43 AM
Re: queues+clustering
Maybe it's just a matter of taste, but personally I do not like the idea of SYS$BATCH as the name or alias of a clusterwide queue.
And here is why: during bootstrap it is just too convenient to KNOW that any job submitted without an explicit queue specification will run ON THE BOOTING NODE.
Otherwise you will have to check (and re-check at every new release and every new software product) that there is NOT any default SUBMIT command!
That is the reason I 100% agree with Ian: on each node a /SYSTEM logical name SYS$BATCH for the queue that is bound to that node.
OTOH, the concept of dedicated queues can (maybe; should) be applied generously.
We run multiple applications in the cluster, and each application has at least one queue. The queue is owned by the application's resource identifier, and the respective application managers have management control over their queues.
A very pleasant way to delegate a lot of standard work to those who have the functional application knowledge, so now we get only the really technical issues.
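As an illustration only (the node, identifier, and queue names are made up, PAYROLL is assumed to be a UIC-format identifier, and the protection mask uses the queue access types R, S, D, and M):
$ initialize /queue /batch /on=NODEA:: -
/protection=(s:m,o:d,g:r,w:s) -
/owner_uic=[PAYROLL] nodea_payroll_batch
Here holders of the owning identifier get DELETE access to jobs while MANAGE stays with the system; the exact split between ACLs, /OWNER_UIC, and /PROTECTION is a site choice.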
Proost.
Have one on me.
jpe