<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic queues+clustering in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897254#M68161</link>
    <description>Hello all,&lt;BR /&gt;I recently added a 3rd node (EV68 - DS25 Alpha station) to a common-environment cluster running 7.3-1. (For all of you who were answering my previous questions about the blue screen: it needed graphics drivers for the new machine.)&lt;BR /&gt;&lt;BR /&gt;So currently we have:&lt;BR /&gt;DS25 - main node&lt;BR /&gt;XP1000 - satellite node&lt;BR /&gt;EV68 - satellite node (new)&lt;BR /&gt;&lt;BR /&gt;I can clearly see the DECwindows and software applications on the new node EV68. However, when I run evaluations (and other such high-processing applications) on EV68, it does not use its own CPU but that of the other two nodes. I saw this when I looked at the MONITOR CLUSTER command.&lt;BR /&gt;&lt;BR /&gt;Can anyone please guide me as to how I can check the current queuing setup and how I should add my node to it? Please mention the *.com files as well as their paths if possible.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;Nipun</description>
    <pubDate>Wed, 20 Apr 2005 14:19:55 GMT</pubDate>
    <dc:creator>nipun_2</dc:creator>
    <dc:date>2005-04-20T14:19:55Z</dc:date>
    <item>
      <title>queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897254#M68161</link>
      <description>Hello all,&lt;BR /&gt;I recently added a 3rd node (EV68 - DS25 Alpha station) to a common-environment cluster running 7.3-1. (For all of you who were answering my previous questions about the blue screen: it needed graphics drivers for the new machine.)&lt;BR /&gt;&lt;BR /&gt;So currently we have:&lt;BR /&gt;DS25 - main node&lt;BR /&gt;XP1000 - satellite node&lt;BR /&gt;EV68 - satellite node (new)&lt;BR /&gt;&lt;BR /&gt;I can clearly see the DECwindows and software applications on the new node EV68. However, when I run evaluations (and other such high-processing applications) on EV68, it does not use its own CPU but that of the other two nodes. I saw this when I looked at the MONITOR CLUSTER command.&lt;BR /&gt;&lt;BR /&gt;Can anyone please guide me as to how I can check the current queuing setup and how I should add my node to it? Please mention the *.com files as well as their paths if possible.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;Nipun</description>
      <pubDate>Wed, 20 Apr 2005 14:19:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897254#M68161</guid>
      <dc:creator>nipun_2</dc:creator>
      <dc:date>2005-04-20T14:19:55Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897255#M68162</link>
      <description>Hello Nipun,&lt;BR /&gt;it's great to hear that you got your system working.&lt;BR /&gt;&lt;BR /&gt;To get an overview of all queues:&lt;BR /&gt;$ show queue *&lt;BR /&gt;&lt;BR /&gt;If you're interested in batch queues only:&lt;BR /&gt;$ show queue /batch *&lt;BR /&gt;&lt;BR /&gt;For details, add the /FULL qualifier, and if you'd like to see all jobs from all users, use the /ALL qualifier.&lt;BR /&gt;&lt;BR /&gt;I usually use a queue name that has the nodename as part of the queue name, e.g.:&lt;BR /&gt;$ initialize/queue/batch/job_limit=2 athena_batch /on=ATHENA::&lt;BR /&gt;&lt;BR /&gt;Then, as part of the system startup (I put it in SYS$MANAGER:SYSTARTUP_VMS.COM):&lt;BR /&gt;$ start /queue athena_batch&lt;BR /&gt;&lt;BR /&gt;Jobs can easily be sent to this queue with:&lt;BR /&gt;$ submit /queue=athena_batch job.com</description>
      <pubDate>Wed, 20 Apr 2005 14:56:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897255#M68162</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-20T14:56:39Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897256#M68163</link>
      <description>Also check that failover of the queue manager is OK. If not, a node failure may stop all queue activity.&lt;BR /&gt;&lt;BR /&gt;$ sh que/man/fu&lt;BR /&gt;should show all 3 nodes in the /ON part.&lt;BR /&gt;&lt;BR /&gt;To correct:&lt;BR /&gt;$ start/que/man/on=(n1,n2,n3)&lt;BR /&gt;or&lt;BR /&gt;$ start/que/man/on=(*)&lt;BR /&gt;but if you have cluster stations or quorum stations, the first format is better.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 21 Apr 2005 01:19:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897256#M68163</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-21T01:19:09Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897257#M68164</link>
      <description>&lt;BR /&gt;&lt;BR /&gt;In general You should have at least on every cluster manager one dedicated queue, which has /(AUTOSTART_)ON=mynode. This way You can have whatever programs running on this node.&lt;BR /&gt;You need such a queue e.g. for jobs fired from sys$startup_vms.&lt;BR /&gt;My practice then is to have a /GENERIC queue, which includes all of these node-specific queues:&lt;BR /&gt;$ init/queue/batch/generic=(nodea_batch,nodeb_batch,nodec_batch) sys$batch&lt;BR /&gt;&lt;BR /&gt;Then jobs submitted to the generic queue will run on any node selected by the queue manager.&lt;BR /&gt;This generic queue can of course have any name other than sys$batch: it is just convenient to have a sys$batch queue for submit without a /QUEUE=specific.&lt;BR /&gt;&lt;BR /&gt;On the other hand You may choose to have jobs by default executing on the node where it is SUBMITted, then define a /system logical SYS$BATCH pointing to the node specific queue in sys$startup_vms:&lt;BR /&gt;$ define/system sys$batch 'f$getsyi("NODENAME")'_BATCH&lt;BR /&gt;</description>
      <pubDate>Thu, 21 Apr 2005 03:12:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897257#M68164</guid>
      <dc:creator>Joseph Huber_1</dc:creator>
      <dc:date>2005-04-21T03:12:05Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897258#M68165</link>
      <description>I basically do what Joseph does - a queue called nodename_BATCH, and define SYS$BATCH on each system to point to it. Often there are other specific-purpose queues, e.g. queues with a job limit of 1 to co-ordinate jobs, and queues with different working-set limits or base priorities, and so on.</description>
      <pubDate>Thu, 21 Apr 2005 03:23:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897258#M68165</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-04-21T03:23:38Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897259#M68166</link>
      <description>To correct a typo in my previous response:&lt;BR /&gt;&lt;BR /&gt;on every cluster manager one dedicated&lt;BR /&gt;should of course read as:&lt;BR /&gt;on every cluster MEMBER one dedicated</description>
      <pubDate>Thu, 21 Apr 2005 03:51:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897259#M68166</guid>
      <dc:creator>Joseph Huber_1</dc:creator>
      <dc:date>2005-04-21T03:51:40Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897260#M68167</link>
      <description>I would say cluster SERVER. Stations and quorum stuff excluded.</description>
      <pubDate>Thu, 21 Apr 2005 03:58:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897260#M68167</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-21T03:58:00Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897261#M68168</link>
      <description>Wim, well, it's just personal taste:&lt;BR /&gt;I have a dedicated queue on every member node in the cluster: at system startup I submit some utility/package startups as batch jobs to make system startup shorter. Especially on a workstation: I don't have to wait as long until I can log in.</description>
      <pubDate>Thu, 21 Apr 2005 04:18:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897261#M68168</guid>
      <dc:creator>Joseph Huber_1</dc:creator>
      <dc:date>2005-04-21T04:18:41Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897262#M68169</link>
      <description>Jos: ignore my comment. I read too fast.</description>
      <pubDate>Thu, 21 Apr 2005 04:20:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897262#M68169</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-21T04:20:58Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897263#M68170</link>
      <description>Nipun,&lt;BR /&gt;&lt;BR /&gt;maybe it is just a matter of taste, but personally I do not like the idea of SYS$BATCH as the name or an alias for a clusterwide queue.&lt;BR /&gt;And here is why: during bootstrap it is just too convenient to KNOW that any job submitted without an explicit queue specification will run ON THE BOOTING NODE.&lt;BR /&gt;Otherwise you will have to check (and re-check at every new release and new software product) that there is NOT any default submit command!&lt;BR /&gt;&lt;BR /&gt;That is the reason I 100% agree with Ian: on each node a /SYSTEM logical name SYS$BATCH for the queue that is bound to that node.&lt;BR /&gt;&lt;BR /&gt;OTOH, the concept of dedicated queues can (maybe: should) be applied generously.&lt;BR /&gt;&lt;BR /&gt;We run multiple applications in the cluster, and each application has at least one queue. The queue is owned by the application's Resource Identifier, and the respective application managers have management control over their queues.&lt;BR /&gt;A very pleasant way to delegate a lot of standard work to those that have the functional application knowledge, so now we get only the really technical issues.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe&lt;BR /&gt;</description>
      <pubDate>Thu, 21 Apr 2005 14:43:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897263#M68170</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-04-21T14:43:48Z</dc:date>
    </item>
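    <!--
      Editor's note: a minimal DCL sketch of the per-node SYS$BATCH setup Ian and
      Jan describe above. The queue name pattern, job limit, and the existence
      check are illustrative assumptions, not taken from the original posts.

      $ ! In SYS$MANAGER:SYSTARTUP_VMS.COM on each cluster member:
      $ node = f$getsyi("NODENAME")
      $ ! Create the node-specific queue only if it does not exist yet
      $ if f$getqui("DISPLAY_QUEUE","QUEUE_NAME",node + "_BATCH") .eqs. "" then -
            initialize /queue /batch /job_limit=2 /on='node':: 'node'_BATCH
      $ start /queue 'node'_BATCH
      $ ! Make this node's queue the default target for SUBMIT
      $ define /system SYS$BATCH 'node'_BATCH
    -->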
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897264#M68171</link>
      <description>During boot, I create/adjust 2 queues.&lt;BR /&gt;&lt;BR /&gt;The first is called node$startup and has a job limit of 99. It is NOT an autostart queue and can be used during the boot to start things. The queue is emptied at the beginning of the boot.&lt;BR /&gt;&lt;BR /&gt;The second one is node$batch, with a job limit of 3, to which I map sys$batch. It is an autostart queue, and everything submitted to it is only started when the boot is complete, i.e. the last thing I do during startup is enable autostart. It is not emptied during boot.&lt;BR /&gt;&lt;BR /&gt;If an application requires timely execution of a job, it must create its own queue. If not, it can use sys$batch, but the queue may be busy and the job delayed.&lt;BR /&gt;&lt;BR /&gt;The advantage is that all application queues are autostart queues and only start doing things after the boot, thus not delaying the boot. Also, since the jobs start only after the boot, their environment should be present.&lt;BR /&gt;&lt;BR /&gt;And all my queues have /retain=error. I don't understand why it's not the default.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 25 Apr 2005 01:21:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897264#M68171</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-25T01:21:16Z</dc:date>
    </item>
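    <!--
      Editor's note: a hypothetical DCL sketch of the two-queue boot arrangement
      Wim describes above (names and job limits follow his description; the
      exact qualifiers are an assumption).

      $ node = f$getsyi("NODENAME")
      $ ! Early-boot queue: not autostart, so it can run jobs during startup
      $ initialize /queue /batch /job_limit=99 /on='node':: 'node'$STARTUP
      $ start /queue 'node'$STARTUP
      $ ! Default queue: autostart, with error retention
      $ initialize /queue /batch /job_limit=3 /retain=error -
            /autostart_on=('node'::) 'node'$BATCH
      $ define /system SYS$BATCH 'node'$BATCH
      $ ! Very last step of startup: let autostart queues begin executing jobs
      $ enable autostart /queues
    -->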
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897265#M68172</link>
      <description>A self-resubmitting job that ends with an error status every time can fill up the queue file.</description>
      <pubDate>Mon, 25 Apr 2005 01:30:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897265#M68172</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-25T01:30:50Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897266#M68173</link>
      <description>Uwe,&lt;BR /&gt;&lt;BR /&gt;Yes, but why are they present? We also had jobs that terminated in error, but we corrected them all. Only in exceptional cases may or must they terminate in error. And we monitor error entries, of course.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 25 Apr 2005 01:40:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897266#M68173</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-25T01:40:25Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897267#M68174</link>
      <description>What works well for you does not work well for somebody else, and engineering had to make a decision. Of course, mine is only a guess; it is quite possible that today nobody can tell on what basis the decision was made many years ago.</description>
      <pubDate>Mon, 25 Apr 2005 01:47:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897267#M68174</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-25T01:47:07Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897268#M68175</link>
      <description>Nipun,&lt;BR /&gt;&lt;BR /&gt;Possibly I'm missing the beginning of the discussion, but I don't understand what you mean by 'main node' and 'satellite node'.&lt;BR /&gt;Do you not have a CI/SCSI/SAN cluster, but rather the LAVC idea of a cluster?&lt;BR /&gt;In other words: do you have a (boot) node while the other nodes are satellites?&lt;BR /&gt;&lt;BR /&gt;I know this is not the major reason for setting things up differently, but the QUEUE/MANAGER can't be clusterwide if you have a LAVC sort of cluster.&lt;BR /&gt;&lt;BR /&gt;To be honest: I think the solution (whatever you choose) is given in one of the previous posts.&lt;BR /&gt;&lt;BR /&gt;AvR&lt;BR /&gt;</description>
      <pubDate>Mon, 25 Apr 2005 01:53:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897268#M68175</guid>
      <dc:creator>Anton van Ruitenbeek</dc:creator>
      <dc:date>2005-04-25T01:53:56Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897269#M68176</link>
      <description>Anton,&lt;BR /&gt;who/what manual told You that in a LAVC there can't be cluster-wide queue management?&lt;BR /&gt;Must be on a different VMS planet ...</description>
      <pubDate>Mon, 25 Apr 2005 02:15:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897269#M68176</guid>
      <dc:creator>Joseph Huber_1</dc:creator>
      <dc:date>2005-04-25T02:15:22Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897270#M68177</link>
      <description>"Anton, who/what manual told You ... Must be on a different VMS planet ..."&lt;BR /&gt;Maybe he meant that it is not useful to start the queue manager on satellite nodes when the queue files reside on the boot node...&lt;BR /&gt;&lt;BR /&gt;regards Kalle</description>
      <pubDate>Mon, 25 Apr 2005 03:22:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897270#M68177</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2005-04-25T03:22:22Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897271#M68178</link>
      <description>Joseph,&lt;BR /&gt;&lt;BR /&gt;I was on planet Earth.&lt;BR /&gt;&lt;BR /&gt;Thanks Karl.&lt;BR /&gt;&lt;BR /&gt;I meant that if you really have satellites (in the context of LAVC), these satellites don't have any disks locally (only for paging/swapping). For NI clusters there is a difference. So if the queue manager is running on all the nodes and the boot node(s) are gone (and with them the system disk(s) and, if it exists, the cluster common data disk), the queue manager maybe will work, but not as expected.&lt;BR /&gt;And as a good VMS manager, if something will work but cannot be guaranteed: IT IS NOT WORKING (properly....)&lt;BR /&gt;&lt;BR /&gt;AvR</description>
      <pubDate>Mon, 25 Apr 2005 03:35:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897271#M68178</guid>
      <dc:creator>Anton van Ruitenbeek</dc:creator>
      <dc:date>2005-04-25T03:35:46Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897272#M68179</link>
      <description>Anton wrote:&lt;BR /&gt;&lt;BR /&gt;if something will work but cannot be guaranteed: IT IS NOT WORKING (properly....)&lt;BR /&gt;&lt;BR /&gt;Can someone make a nice tune for this, and turn it into a mantra that needs to be sung 3 times by _EVERY_ IT worker AND MANAGER before each working day?&lt;BR /&gt;.. even reading it once will probably be enlightening for most managers..&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe&lt;BR /&gt;</description>
      <pubDate>Mon, 25 Apr 2005 03:49:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897272#M68179</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-04-25T03:49:31Z</dc:date>
    </item>
    <item>
      <title>Re: queues+clustering</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897273#M68180</link>
      <description>I think that argument is flawed.&lt;BR /&gt;&lt;BR /&gt;If your boot node is down, then your satellite nodes can't do any work anyway while they stall.&lt;BR /&gt;&lt;BR /&gt;So you can't use any satellites with a single boot node, because it is not *guaranteed* that they will be working 'properly' all the time ;-)</description>
      <pubDate>Mon, 25 Apr 2005 03:56:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/queues-clustering/m-p/4897273#M68180</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-25T03:56:46Z</dc:date>
    </item>
  </channel>
</rss>

