<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: OpenVMS cluster print queue setup in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174295#M62129</link>
    <description>Haven't seen this mentioned yet, but for AUTOSTART queues to work you need to execute the following command on any node that uses queues with that feature:&lt;BR /&gt;&lt;BR /&gt;$ enable autostart /queues</description>
    <pubDate>Mon, 08 Mar 2004 05:13:59 GMT</pubDate>
    <dc:creator>Uwe Zessin</dc:creator>
    <dc:date>2004-03-08T05:13:59Z</dc:date>
    <item>
      <title>OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174284#M62118</link>
      <description>OK - I know this is not a very popular topic, but I have some questions about cluster print queue setup.&lt;BR /&gt;&lt;BR /&gt;I have a two-node cluster running VMS 7.2-2 with one system disk.  No LPD, no DQS, just plain terminal printing via LAT and Multinet.  There is only one queue manager.&lt;BR /&gt;&lt;BR /&gt;System1 runs the queue manager.&lt;BR /&gt;&lt;BR /&gt;Both system1 and system2 have queues defined in their startup.  The queue manager running on System1 is set to fail over to System2 in the event System1 goes down.&lt;BR /&gt;&lt;BR /&gt;I would like to set all my queues to autostart and fail over to the opposite system from the one they are defined on.  I know that I would have to re-init each queue with /AUTOSTART_ON=(system1::,system2::), then START/QUEUE each queue in my startup, and then issue $ ENABLE AUTOSTART/QUEUES on system1 and system2.&lt;BR /&gt;&lt;BR /&gt;However, my question is three-fold:&lt;BR /&gt;&lt;BR /&gt;1)  If system1 crashes, the queue manager will move to system2; however, will the autostart queues defined on system1 automatically move over to system2?  The VMS documentation says that on a system shutdown a $ DISABLE AUTOSTART/QUEUES is issued to move the queues over to the surviving node if they are defined for failover.  However, there is no mention of what happens on a crash...&lt;BR /&gt;&lt;BR /&gt;2)  If the queues do indeed move over on a shutdown and/or a crash, once system1 is back up and stable, how do you move the queues that failed over back to the original system?&lt;BR /&gt;&lt;BR /&gt;3)  Also, if system1 crashed and the queue manager moved to system2, when system1 comes back on-line will it attempt to start the queue manager?  What about system2?&lt;BR /&gt;&lt;BR /&gt;Sorry for the long note... Thanks so much for your feedback.&lt;BR /&gt;&lt;BR /&gt;MC</description>
      <pubDate>Mon, 26 Jan 2004 15:13:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174284#M62118</guid>
      <dc:creator>M C_1</dc:creator>
      <dc:date>2004-01-26T15:13:40Z</dc:date>
    </item>
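    <!-- The re-init sequence described in the question might look like the following sketch. The queue name and LTA device names are illustrative; only the qualifiers come from the thread:

```
$! Stop and re-create a terminal queue as an autostart queue that can fail
$! over between the two nodes (the first node listed is the preferred one):
$ STOP/QUEUE/RESET SYS1$LPQ1
$ INITIALIZE/QUEUE/START SYS1$LPQ1 -
      /AUTOSTART_ON=(SYSTEM1::LTA101:,SYSTEM2::LTA101:)
$! Then, once per boot on each node, activate autostart queues on that node:
$ ENABLE AUTOSTART/QUEUES
```
    -->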
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174285#M62119</link>
      <description>Hi MC,&lt;BR /&gt;&lt;BR /&gt;I can't answer question 1 about queue failover.  I think they do fail over, but it has been so long since I tested that, I don't remember.&lt;BR /&gt;&lt;BR /&gt;About failing queues back: no, they don't move back to SYSTEM1 when the startup performs the ENABLE AUTOSTART/QUEUES.  The way I move them is a STOP/QUEUE/RESET queue-name followed by a START/QUEUE queue-name.&lt;BR /&gt;&lt;BR /&gt;For the third question, it depends.  How do you start the queue manager in your SYSTARTUP_VMS.COM?  If you do something like START/QUEUE/MANAGER/ON=(SYSTEM1,SYSTEM2,*), then it should move back to SYSTEM1 when it boots.  Otherwise, I believe it stays where it is.&lt;BR /&gt;&lt;BR /&gt;Hope that helps.&lt;BR /&gt;&lt;BR /&gt;Dave Harrold</description>
      <pubDate>Mon, 26 Jan 2004 17:39:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174285#M62119</guid>
      <dc:creator>David Harrold</dc:creator>
      <dc:date>2004-01-26T17:39:54Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174286#M62120</link>
      <description>Hello MC.&lt;BR /&gt;&lt;BR /&gt;The answer to 1): &lt;BR /&gt;IF you have set up your queues to be allowed to run on both (all) systems, they DO fail over correctly.&lt;BR /&gt;(Only jobs in progress might fail, depending on what is already in the printer's buffer and what still needs sending. If you set up your queues /RETAIN=ERROR you will recognise those immediately, and you can re-issue them.) Pending jobs are not affected.&lt;BR /&gt;&lt;BR /&gt;If you set up your queues /ON=(node1,node2,...), then after STOP/QUEUE &amp;amp; START/QUEUE they run on the first available node in the list that has autostart enabled. If you set them /ON=*, then they will run on the node where you issued the command.&lt;BR /&gt;&lt;BR /&gt;More important:&lt;BR /&gt;Do you have a really urgent reason to have them fail back when the failed node reboots?&lt;BR /&gt;The load caused by print queues is negligible; the effort to fail them back by hand is not. I understand that your configuration DOES allow failover (most important issue: all disks from which files might be printed must be mounted on all nodes where the queue might be running).&lt;BR /&gt;So essentially: why care WHERE the queue is running, as long as it IS running?&lt;BR /&gt;&lt;BR /&gt;As far as I can judge from here (meaning: I might be wrong, but I don't think so), you are trying to solve a problem that has already been solved much more fundamentally by the VMS cluster engineers.&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Tue, 27 Jan 2004 04:17:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174286#M62120</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-01-27T04:17:21Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174287#M62121</link>
      <description>Hello&lt;BR /&gt;&lt;BR /&gt;You could consider using generic and execution queues. For example, initialize the execution queues like this:&lt;BR /&gt;&lt;BR /&gt;$ init/queue system1$q1/on=system1::lta100:&lt;BR /&gt;$ init/queue system2$q1/on=system2::lta100:&lt;BR /&gt;&lt;BR /&gt;and the generic queue:&lt;BR /&gt;&lt;BR /&gt;$ init/queue/generic=(system1$q1,system2$q1) q1&lt;BR /&gt;&lt;BR /&gt;Print to the generic queue q1. Jobs then get transferred to either of the execution queues for processing. If, say, system1 crashes, print jobs still get processed on system2$q1. When system1 reboots, print jobs will then be processed by both execution queues. No need to move queues "by hand".&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;ML&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 27 Jan 2004 05:33:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174287#M62121</guid>
      <dc:creator>Mac Lilley</dc:creator>
      <dc:date>2004-01-27T05:33:08Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174288#M62122</link>
      <description>David - START/QUEUE/MANAGER/ON=(SYSTEM1,SYSTEM2,*) was issued way back when.  According to the docs, it does not need to be reissued in startup unless a STOP/QUEUE/MANAGER/CLUSTER is issued.&lt;BR /&gt;&lt;BR /&gt;Jan - Currently when I define queues, I set them up for one system only (i.e. /ON=system1::xyz).  I would like to re-init all queues with /AUTOSTART_ON=(sys1,sys2) so that they will fail over on a system shutdown or crash.&lt;BR /&gt;&lt;BR /&gt;The reason I want them to fail back once the other node is rebooted is that we have between 600 and 650 queues.  We are very print intensive - lots of printing going on every minute, hour, day...  My biggest concern is that printing continues to execute in the event we lose a system for a period of time.  My second biggest concern is balancing the print load between the two systems.&lt;BR /&gt;&lt;BR /&gt;Mac - I considered this option, but because of the sheer number of queues that we have, this could be a bit complicated.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;MC&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 27 Jan 2004 09:01:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174288#M62122</guid>
      <dc:creator>M C_1</dc:creator>
      <dc:date>2004-01-27T09:01:35Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174289#M62123</link>
      <description>Well M C,&lt;BR /&gt;&lt;BR /&gt;with us, 800+ printer queues effectively all running on the same node present no problem at all. We print some 5000 pages daily, with peaks over 15000. Of course I don't know HOW intensive "intensively used" is, but you have to get WAY over that before you should begin worrying.&lt;BR /&gt;&lt;BR /&gt;I hope this gives you some peace of mind.&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Tue, 27 Jan 2004 10:14:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174289#M62123</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-01-27T10:14:34Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174290#M62124</link>
      <description>I think the queues stay where they are unless you stop and start them to move them back. You define them /AUTOSTART_ON=(node1::,node2::)&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 27 Jan 2004 12:11:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174290#M62124</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-01-27T12:11:09Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174291#M62125</link>
      <description>Jan,&lt;BR /&gt; This does give me some peace of mind.  I mean, we have lots of printers, and I am not really sure how much paper we eat each day, but it's a bunch.&lt;BR /&gt;&lt;BR /&gt;With respect to my question, I wanted this information to help analyze the redundancy, availability, and failover of our cluster.&lt;BR /&gt;&lt;BR /&gt;The documentation is not 100% clear as to what failover means for these queues (crash vs. shutdown).&lt;BR /&gt;&lt;BR /&gt;Thanks for all the input.&lt;BR /&gt;&lt;BR /&gt;MC</description>
      <pubDate>Tue, 27 Jan 2004 15:16:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174291#M62125</guid>
      <dc:creator>M C_1</dc:creator>
      <dc:date>2004-01-27T15:16:04Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174292#M62126</link>
      <description>Quick question regarding this old topic...&lt;BR /&gt;&lt;BR /&gt;When you configure autostart on the cluster (in my case a 2-node cluster), does each system have to execute a START/QUEUE on all queues for failover to be set up properly?&lt;BR /&gt;&lt;BR /&gt;For example: our current setup during startup is to define and configure the LAT and Multinet ports, then start the individual queues on their respective nodes.&lt;BR /&gt;&lt;BR /&gt;What I intend to do is re-configure all queues to be autostart.  In the startup, define and configure all ports on both systems, then enable autostart on both nodes after the queue manager starts.  Then START/QUEUE each of the queues.&lt;BR /&gt;&lt;BR /&gt;However, when both nodes boot, do both nodes need to execute the START/QUEUE portion?&lt;BR /&gt;&lt;BR /&gt;Also, on a system crash the queues fail over to the other node.  When the problem node comes back up, does it really need to execute the START/QUEUE part? &lt;BR /&gt;&lt;BR /&gt;Can anyone out there share their config preferences?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;MC</description>
      <pubDate>Tue, 02 Mar 2004 17:32:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174292#M62126</guid>
      <dc:creator>M C_1</dc:creator>
      <dc:date>2004-03-02T17:32:04Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174293#M62127</link>
      <description>Q&amp;gt; However, when both nodes boot, do both nodes need to execute the START/QUEUE portion?&lt;BR /&gt;&lt;BR /&gt;A&amp;gt; No. As long as the queue database is intact, none of the nodes has to do any queue setup. Only ENABLE AUTOSTART/QUEUES is needed.&lt;BR /&gt;&lt;BR /&gt;Q&amp;gt; Also, on a system crash the queues fail over to the other node. When the problem node comes back up, does it really need to execute the START/QUEUE part?&lt;BR /&gt;A&amp;gt; Not when they are set up as autostart. Only the ENABLE AUTOSTART/QUEUES is needed. Of course, you must make sure they are not stopped by a STOP/QUEUE.&lt;BR /&gt;&lt;BR /&gt;Also: if you happen to have multiple system disks, make sure you spool to a shared disk.</description>
      <pubDate>Wed, 03 Mar 2004 02:43:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174293#M62127</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-03-03T02:43:42Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174294#M62128</link>
      <description>MC,&lt;BR /&gt;&lt;BR /&gt;if you specify /AUTOSTART_ON, then there are two ways to specify your nodes. One is to simply specify "*" for the node name: that makes ALL nodes eligible for execution, and your queue will keep running on a node until it is forced to fail over, when it will run on another node, until etc...&lt;BR /&gt;The other way is to specify /AUTOSTART_ON=(node_1,node_2,...). Now the queue will try to run on the first node specified, etc.&lt;BR /&gt;So if you want to spread the load (if various nodes are available), then specify a portion as (node_1,node_2,...), a portion as (node_2,node_1,...), etc...&lt;BR /&gt;If a node fails, its queues fail over. When it comes back (more precisely, when it enables autostart), then those queues that have that node in their /AUTOSTART_ON list ahead of the current execution node will fail over (= fail back).&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Mon, 08 Mar 2004 04:38:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174294#M62128</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-03-08T04:38:58Z</dc:date>
    </item>
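    <!-- Jan's load-spreading scheme, sketched for the two-node case; the queue names and LTA devices are hypothetical, and note that Wim's test further down in the thread suggests the automatic failback Jan describes does not actually occur:

```
$! Half the queues prefer SYSTEM1, the other half prefer SYSTEM2; each
$! queue can fail over to the other node if its preferred node goes down.
$ INITIALIZE/QUEUE/START Q_A /AUTOSTART_ON=(SYSTEM1::LTA100:,SYSTEM2::LTA100:)
$ INITIALIZE/QUEUE/START Q_B /AUTOSTART_ON=(SYSTEM2::LTA200:,SYSTEM1::LTA200:)
```
    -->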
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174295#M62129</link>
      <description>Haven't seen this mentioned yet, but for AUTOSTART queues to work you need to execute the following command on any node that uses queues with that feature:&lt;BR /&gt;&lt;BR /&gt;$ enable autostart /queues</description>
      <pubDate>Mon, 08 Mar 2004 05:13:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174295#M62129</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-03-08T05:13:59Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174296#M62130</link>
      <description>Jan,&lt;BR /&gt;&lt;BR /&gt;Just did a test. The failback has to be done by hand. (sorry for the bad format, they should make this form 80 char).&lt;BR /&gt;&lt;BR /&gt;I am on node sbetv1.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;@wim.lis&lt;BR /&gt;&amp;gt;ty wim.lis&lt;BR /&gt;$  INITIALIZE /QUEUE wim -&lt;BR /&gt;       /BATCH -&lt;BR /&gt;       /START -&lt;BR /&gt;       /AUTOSTART_ON = (sbetv1::,sbetv2::)&lt;BR /&gt;&amp;gt;sh que wim/fu&lt;BR /&gt;Batch queue WIM, idle, on SBETV1::&lt;BR /&gt;  /AUTOSTART_ON=(SBETV1::,SBETV2::) /BASE_PRIORITY=4 /JOB_LIMIT=1&lt;BR /&gt;  /OWNER=[SYSMGR,SYSTEM] /PROTECTION=(S:M,O:D,G:R,W:S)&lt;BR /&gt;&amp;gt;disab auto/q&lt;BR /&gt;&amp;gt;sh que wim/fu&lt;BR /&gt;Batch queue WIM, idle, on SBETV2::&lt;BR /&gt;  /AUTOSTART_ON=(SBETV1::,SBETV2::) /BASE_PRIORITY=4 /JOB_LIMIT=1&lt;BR /&gt;  /OWNER=[SYSMGR,SYSTEM] /PROTECTION=(S:M,O:D,G:R,W:S)&lt;BR /&gt;&amp;gt;enab auto/q&lt;BR /&gt;&amp;gt;sh que wim/fu&lt;BR /&gt;Batch queue WIM, idle, on SBETV2::&lt;BR /&gt;  /AUTOSTART_ON=(SBETV1::,SBETV2::) /BASE_PRIORITY=4 /JOB_LIMIT=1&lt;BR /&gt;  /OWNER=[SYSMGR,SYSTEM] /PROTECTION=(S:M,O:D,G:R,W:S)&lt;BR /&gt;</description>
      <pubDate>Mon, 08 Mar 2004 05:49:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174296#M62130</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-03-08T05:49:26Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174297#M62131</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;I never had to use it (on the contrary, we try to be as homogeneous as possible), but that is why I remembered the help text:&lt;BR /&gt;&lt;BR /&gt;"&lt;BR /&gt;INIT/QUE/AUTO&lt;BR /&gt;&lt;BR /&gt;... you can specify more than one node ... in the preferred order in which nodes should claim the queue.&lt;BR /&gt;"&lt;BR /&gt;So, either my understanding of English is not exact enough, or the help text is not clear enough, or the text is not (any more?) correct.&lt;BR /&gt;&lt;BR /&gt;Sorry for any confusion I may have caused...&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Mon, 08 Mar 2004 09:50:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174297#M62131</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-03-08T09:50:08Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174298#M62132</link>
      <description>Jan,&lt;BR /&gt;&lt;BR /&gt;No problem. But if you were right, the command would have to close the queue gracefully (/NEXT) and then move it to the other node.&lt;BR /&gt;Since /NEXT can take ages ...&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 08 Mar 2004 11:02:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174298#M62132</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-03-08T11:02:15Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174299#M62133</link>
      <description>Jan,&lt;BR /&gt;    I agree with Wim: the autostart queues will not fail back to the problem node after it's back up.  And that is not necessarily a bad thing.  &lt;BR /&gt;&lt;BR /&gt;The reason I revived this topic was that I was unsure how the queue manager worked with autostart queues.  I understand how to set them up; I just was not clear on how VMS started them at boot.  I confirmed what Wim said in an earlier reply: autostart queues never stop unless someone or something issues a STOP/QUEUE (even on shutdown).  Therefore, there is no need to start queues on the cluster during boot, only an ENABLE AUTOSTART/QUEUES in startup.&lt;BR /&gt;&lt;BR /&gt;Thanks to everyone for their input.  It has been very helpful.&lt;BR /&gt;&lt;BR /&gt;MC &lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 09 Mar 2004 09:30:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174299#M62133</guid>
      <dc:creator>M C_1</dc:creator>
      <dc:date>2004-03-09T09:30:36Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS cluster print queue setup</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174300#M62134</link>
      <description>MC,&lt;BR /&gt;&lt;BR /&gt;If you want to do load balancing, schedule a job each day. For each queue, do a STOP/QUEUE/NEXT, wait until the queue is stopped, do a STOP/QUEUE/RESET (and a STOP/ID of the TCP symbionts, because the symbiont does not always stop when doing STOP/QUEUE) and a START/QUEUE.&lt;BR /&gt;&lt;BR /&gt;This will&lt;BR /&gt;1) start the queue on the first node listed in the autostart list&lt;BR /&gt;2) reset the queue, which will eliminate 80% of your printer problems (e.g. symbiont bugs, quota problems, etc.)&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 10 Mar 2004 02:48:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-cluster-print-queue-setup/m-p/3174300#M62134</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-03-10T02:48:45Z</dc:date>
    </item>
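    <!-- Wim's daily rebalancing job could be sketched as a DCL command procedure like this. The queue list and wait interval are illustrative, and the STOP/ID of stuck TCP symbiont processes he mentions is omitted for brevity:

```
$! For each autostart queue: drain it, reset it, and restart it so that it
$! claims the first node in its /AUTOSTART_ON list again.
$ QLIST = "SYS1$LPQ1,SYS1$LPQ2"
$ I = 0
$LOOP:
$ Q = F$ELEMENT(I, ",", QLIST)
$ IF Q .EQS. "," THEN EXIT          ! past the end of the list
$ STOP/QUEUE/NEXT 'Q'               ! finish the current job, accept no more
$WAIT_STOPPED:
$ IF F$GETQUI("DISPLAY_QUEUE", "QUEUE_STOPPED", Q) .EQS. "FALSE"
$ THEN
$   WAIT 00:00:30                   ! poll until the queue has drained
$   GOTO WAIT_STOPPED
$ ENDIF
$ STOP/QUEUE/RESET 'Q'              ! clear any stuck symbiont state
$ START/QUEUE 'Q'                   ! restarts on the preferred node
$ I = I + 1
$ GOTO LOOP
```
    -->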
  </channel>
</rss>

