Operating System - OpenVMS

M C_1
Advisor

OpenVMS cluster print queue setup

OK - I know this is not a very popular topic, but I have some questions about cluster print queue setup...

I have a two-node cluster running VMS 7.2-2 with one system disk. No LPD, no DQS; just plain terminal printing via LAT and Multinet. There is only one queue manager.

System1 runs the queue manager

Both system1 and system2 have queues defined in their startup. The queue manager running on System1 is set to failover onto System2 in the event System1 goes down.

I would like to set all my queues to autostart and fail over to the opposite system from the one they are defined on. I know that I would have to re-init each queue with /AUTOSTART_ON=(system1::,system2::), then in my startup do a START/QUEUE on each queue, and then a $ ENABLE AUTOSTART/QUEUES on system1 and system2.
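For one queue, that re-init might look something like this (the queue name SYS1$LPQ1 and LAT port LTA101: are invented for illustration; only SYSTEM1/SYSTEM2 come from the post):

$! Example only: autostart terminal queue on a LAT port
$ INITIALIZE /QUEUE /DEVICE=PRINTER /PROCESSOR=LATSYM SYS1$LPQ1 -
    /AUTOSTART_ON = (SYSTEM1::LTA101:, SYSTEM2::LTA101:)
$ START /QUEUE SYS1$LPQ1
$! Then, in each node's startup:
$ ENABLE AUTOSTART /QUEUES

The queue only begins processing on a node once that node has enabled autostart.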

However, my question is threefold:

1) If system1 crashes, the queue manager will move to run on system2; however, will the autostart queues defined on system1 automatically move over to system2? The VMS documentation says that on a system shutdown a $ DISABLE AUTOSTART/QUEUES is issued to move the queues over to the surviving node if they are defined for failover. However, there is no mention of what happens on a crash...

2) If indeed the queues do move over on a shutdown and/or a crash, once system1 is back up and stable, how do you move the queues that failed over back to the original system?

3) Also, if system1 crashed and the queue manager moved to system2, when system1 comes back on-line will it attempt to start the queue manager? What about system2?

Sorry for the long note... Thanks so much for your feedback..

MC
It is what it is!
David Harrold
Advisor

Re: OpenVMS cluster print queue setup

Hi MC,

I can't answer question 1 about the queue failover. I think they do, but it has been so long since I tested that, I don't remember.

About failing queues back: no, they don't move back to SYSTEM1 when the startup performs the ENABLE AUTOSTART/QUEUES. The way I move them is a STOP/QUEUE/RESET followed by a START/QUEUE.

For the third question, it depends. How do you start the queue manager in your SYSTARTUP_VMS.COM? If you do something like START/QUEUE/MANAGER/ON=(SYSTEM1,SYSTEM2,*), then it should move back to SYSTEM1 when it boots. Otherwise I believe it stays where it is.
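Sketched out, what Dave describes would be something like this (the queue name is an example, not from the thread):

$! Fail one queue back by hand:
$ STOP /QUEUE /RESET SYS1$LPQ1
$ START /QUEUE SYS1$LPQ1
$! Queue manager with a preferred-node list; "*" matches any remaining node:
$ START /QUEUE /MANAGER /ON=(SYSTEM1, SYSTEM2, *)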

Hope that helps.

Dave Harrold
Jan van den Ende
Honored Contributor

Re: OpenVMS cluster print queue setup

Hello MC.

The answer to 1):
IF you have set up your queues to be allowed to run on both (all) systems, they DO fail over correctly.
(Only jobs-in-progress might fail, depending on what is already in the printers' buffer and what still needs sending. If you set up your queue /RETAIN=ERROR you will recognise those immediately, and you can re-issue them.) Jobs pending are not affected.

If you set up your queues to be /ON=(node1,node2,...), then after STOP/QUEUE & START/QUEUE they run on the first available node in the list that has autostart enabled. If you set them /ON=*, then they will run on the node where you issued the command.

More important:
Do you have a really urgent reason to have them fail back when the failed node reboots?
The load caused by print queues is negligible; the effort to fail them back by hand is not. I understand that your configuration DOES allow failover (most important issue: all disks from which files might be printed must be mounted on all nodes where the queue might be running).
So essentially: Why care WHERE the queue is running, as long as it IS running.

As far as I can judge from here (meaning: I might be wrong, but I don't think so) you are trying to solve a problem that has already been solved much more fundamentally by the VMS cluster engineers.

hth,

Jan
Don't rust yours pelled jacker to fine doll missed aches.
Mac Lilley
Frequent Advisor

Re: OpenVMS cluster print queue setup

Hello

You could consider using generic and execution queues. Initialize the execution queues like this for example

$ init/queue system1$q1/on=system1::lta100:
$ init/queue system2$q1/on=system2::lta100:

and the generic queue

$ init/queue/generic=(system1$q1,system2$q1) q1

Print to the generic queue q1. Jobs then get transferred to either of the execution queues for processing. If, say, system1 crashes, print jobs still get processed on system2$q1. When system1 reboots, print jobs will then be processed by both execution queues. No need to move queues "by hand".

HTH

ML

M C_1
Advisor

Re: OpenVMS cluster print queue setup

David - START/QUEUE/MANAGER/ON=(SYSTEM1,SYSTEM2,*) was issued way back when. According to the docs, it does not need to be reissued in startup unless a STOP/QUEUE/MANAGER/CLUSTER is issued.

Jan - Currently when I define queues, I set them up for one system only (i.e. /ON=system1::xyz). I would like to re-init all queues with /AUTOSTART_ON=(sys1,sys2) so that they will fail over on a system shutdown or crash.

The reason I want them to fail back once the other node is rebooted is because we have between 600 and 650 queues. We are very print intensive. Lots of printing going on every minute, hour, day... My biggest concern is that printing continues to execute in the event we lose a system for a period of time. Second biggest concern: balancing the print load between the two systems.

Mac - I considered this option, but because of the sheer number of queues that we have this could be a bit complicated..

Thanks
MC

It is what it is!
Jan van den Ende
Honored Contributor

Re: OpenVMS cluster print queue setup

Well M C,

With us, 800+ printer queues effectively all running on the same node present no problem at all. We print some 5000 pages daily, with peaks over 15000. Of course I don't know HOW intensive "intensively used" is, but you have to get WAY over that before you should begin worrying.

I hope this gives you some peace of mind.

Jan
Don't rust yours pelled jacker to fine doll missed aches.
Ian Miller.
Honored Contributor

Re: OpenVMS cluster print queue setup

I think the queues stay where they are unless you stop and start them to move them back. You define them with /AUTOSTART_ON=(node1::,node2::).

____________________
Purely Personal Opinion
M C_1
Advisor

Re: OpenVMS cluster print queue setup

Jan,
This does give me some peace of mind. I mean, we have lots of printers, and I am not really sure how much paper we eat each day, but it's a bunch.

With respect to my question, I wanted this information to help analyze redundancy, availability, and failover of our cluster.

The documentation is not 100% clear as to what failover means for these queues (crash vs. shutdown).

Thanks for all the input.

MC
It is what it is!
M C_1
Advisor

Re: OpenVMS cluster print queue setup

Quick question regarding this old topic...

When you configure autostart on the cluster (in my case a 2 node cluster) does each system have to execute a start/queue on all queues for failover to be setup properly?

For example: our current setup during startup includes defining and configuring the LAT and Multinet ports, then starting individual queues on their respective nodes.

What I intend to do is re-configure all queues to be autostart. In the startup, define and configure all ports on both systems, then enable autostart on both nodes after the queue manager starts. Then START/QUEUE each of the queues.

However, when both nodes boot do both nodes need to execute the start/queue portion?

Also on a system crash, the queues fail over to the other node. When the problem node comes back up does it really need to execute the start/queue part?

Can anyone out there share their config preferences?

Thanks,

MC
It is what it is!
Wim Van den Wyngaert
Honored Contributor
Solution

Re: OpenVMS cluster print queue setup

Q>However, when both nodes boot do both nodes need to execute the start/queue portion?

A> No. As long as the queue database is intact, none of the nodes has to do any queue setup. Only ENABLE AUTOSTART/QUEUES is needed.

Q>Also on a system crash, the queues fail over to the other node. When the problem node comes back up does it really need to execute the start/que part?
A> Not when they are set up as autostart. Only the ENABLE AUTOSTART/QUEUES is needed. Of course, you must make sure they are not stopped by a STOP/QUEUE.

Also: if you happen to have multiple system disks, make sure you spool to a shared disk.
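So the per-node queue startup can be as small as this (assuming all queues were already initialized as autostart queues; /ON_NODE shown for completeness):

$! Per-node startup fragment -- the queue database remembers the queues,
$! so no START/QUEUE is needed on reboot:
$ ENABLE AUTOSTART /QUEUES
$! It can also be issued on behalf of another cluster node:
$ ENABLE AUTOSTART /QUEUES /ON_NODE=SYSTEM2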
Wim
Jan van den Ende
Honored Contributor

Re: OpenVMS cluster print queue setup

MC,

if you specify /AUTOSTART_ON, then there are two ways to specify your nodes. One is to simply specify "*" for your node name. That makes ALL nodes eligible for execution, and your queue will keep running on a node until it is forced to fail over, when it will run on another node, until etc...
The other way is to specify /AUTOSTART_ON=(node_1,node_2,...). Now the queue will try to run on the first node specified, etc.
So if you want to spread the load (if various nodes are available), specify a portion as (node_1,node_2,...), a portion as (node_2,node_1,...), etc.
If a node fails, its queues fail over. When it comes back (more precisely, when it enables autostart), those queues that have that node ahead of the current execution node in their /AUTOSTART_ON list will fail over (= fail back).
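A sketch of the split Jan suggests (queue and port names are invented; note that Wim's test further down in the thread questions the automatic fail-back):

$! Half the queues list SYSTEM1 first ...
$ INITIALIZE /QUEUE /DEVICE=PRINTER /PROCESSOR=LATSYM LPQ_A -
    /AUTOSTART_ON = (SYSTEM1::LTA101:, SYSTEM2::LTA101:)
$! ... and the other half list SYSTEM2 first:
$ INITIALIZE /QUEUE /DEVICE=PRINTER /PROCESSOR=LATSYM LPQ_B -
    /AUTOSTART_ON = (SYSTEM2::LTA102:, SYSTEM1::LTA102:)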

hth,

Jan
Don't rust yours pelled jacker to fine doll missed aches.
Uwe Zessin
Honored Contributor

Re: OpenVMS cluster print queue setup

Haven't seen this mentioned yet, but for AUTOSTART queues to work you need to execute the following command on any node that uses queues with that feature:

$ enable autostart /queues
Wim Van den Wyngaert
Honored Contributor

Re: OpenVMS cluster print queue setup

Jan,

Just did a test. The failback has to be done by hand. (sorry for the bad format, they should make this form 80 char).

I am on node sbetv1.

>@wim.lis
>ty wim.lis
$ INITIALIZE /QUEUE wim -
/BATCH -
/START -
/AUTOSTART_ON = (sbetv1::,sbetv2::)
>sh que wim/fu
Batch queue WIM, idle, on SBETV1::
/AUTOSTART_ON=(SBETV1::,SBETV2::) /BASE_PRIORITY=4 /JOB_LIMIT=1
/OWNER=[SYSMGR,SYSTEM] /PROTECTION=(S:M,O:D,G:R,W:S)
>disab auto/q
>sh que wim/fu
Batch queue WIM, idle, on SBETV2::
/AUTOSTART_ON=(SBETV1::,SBETV2::) /BASE_PRIORITY=4 /JOB_LIMIT=1
/OWNER=[SYSMGR,SYSTEM] /PROTECTION=(S:M,O:D,G:R,W:S)
>enab auto/q
>sh que wim/fu
Batch queue WIM, idle, on SBETV2::
/AUTOSTART_ON=(SBETV1::,SBETV2::) /BASE_PRIORITY=4 /JOB_LIMIT=1
/OWNER=[SYSMGR,SYSTEM] /PROTECTION=(S:M,O:D,G:R,W:S)
Wim
Jan van den Ende
Honored Contributor

Re: OpenVMS cluster print queue setup

Wim,

I never had to use it (on the contrary, we try to be as homogeneous as possible), but that is why I remembered the help text:

"
INIT/QUE/AUTO

... you can specify more than one node ... in the preferred order in which nodes should claim the queue.
"
So, either my understanding of English is not exact enough, or the help text is not clear enough, or the text is not (any more?) correct.

Sorry for any confusion I may have caused....

Jan
Don't rust yours pelled jacker to fine doll missed aches.
Wim Van den Wyngaert
Honored Contributor

Re: OpenVMS cluster print queue setup

Jan,

No problem. But if you were right, the command would have to close the queue gracefully (/NEXT) and then move it to the other node.
Since /NEXT can take ages ...

Wim
M C_1
Advisor

Re: OpenVMS cluster print queue setup

Jan,
I agree with Wim: the autostart queues will not fail back to the problem node after it's back up. And that is not necessarily a bad thing.

The reason I revived this topic was because I was unsure of how the queue manager worked with autostart queues. I understand how to set them up; I just was not clear on how VMS started them on boot-up. I confirmed what Wim said in an earlier reply: autostart queues never stop unless someone or something issues a STOP/QUEUE (even on shutdown). Therefore, there is no need to start queues on the cluster during boot; only an ENABLE AUTOSTART/QUEUES in startup.

Thanks to everyone for their input. It has been very helpful.

MC

It is what it is!
Wim Van den Wyngaert
Honored Contributor

Re: OpenVMS cluster print queue setup

MC,

If you want to do load balancing, schedule a job each day. For each queue, do a STOP/QUEUE/NEXT, wait until the queue is stopped, do a STOP/QUEUE/RESET (and a STOP/ID of the TCP symbionts, because the symbiont does not always stop when doing a STOP/QUEUE), and a START/QUEUE.

This will
1) start the queue on the first node listed in the autostart list
2) reset the queue and will eliminate 80% of your printer problems (e.g. symbiont bugs, quota problems, etc)
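A rough DCL sketch of that daily job for a single queue (the queue name and wait interval are examples; the F$GETQUI loop is one way to wait for the stop):

$ STOP /QUEUE /NEXT LPQ_A              ! finish the current job, then stop
$WAIT_LOOP:
$ IF F$GETQUI("DISPLAY_QUEUE", "QUEUE_STOPPED", "LPQ_A") .NES. "TRUE"
$ THEN
$     WAIT 00:00:10                    ! poll every 10 seconds
$     GOTO WAIT_LOOP
$ ENDIF
$ STOP /QUEUE /RESET LPQ_A             ! fully reset the queue and symbiont
$ START /QUEUE LPQ_A                   ! restarts on the first node in the autostart list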

Wim