OpenVMS cluster print queue setup
03-07-2004 08:38 PM
Re: OpenVMS cluster print queue setup
If you specify /AUTOSTART, there are two ways to specify your nodes. The first is simply to specify "*" for the node name: that makes ALL nodes eligible for execution, and your queue will keep running on one node until it is forced to fail over, after which it runs on another node, and so on.
The other way is to specify /AUTOSTART_ON=(node_1,node_2,...). The queue will then try to run on the first node specified, then the second, and so on.
So if you want to spread the load (when several nodes are available), specify one portion of your queues as (node_1,node_2,...), another portion as (node_2,node_1,...), etc.
If a node fails, its queues fail over. When it comes back (more precisely, when it enables autostart), the queues that have that node ahead of the current execution node in their /AUTOSTART list will fail over again ( = fail back).
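For example, the two forms look like this (queue and node names are made up for illustration; this is a sketch, not a tested setup):
$ ! Form 1: any autostart-enabled node in the cluster may claim the queue
$ INITIALIZE /QUEUE /BATCH /START /AUTOSTART_ON=(*) MY_QUEUE
$ ! Form 2: preferred-node list, tries NODE_A first, then NODE_B
$ INITIALIZE /QUEUE /BATCH /START /AUTOSTART_ON=(NODE_A::,NODE_B::) MY_QUEUE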
hth,
Jan
03-07-2004 09:13 PM
Re: OpenVMS cluster print queue setup
$ enable autostart /queues
03-07-2004 09:49 PM
Re: OpenVMS cluster print queue setup
Just did a test: the failback has to be done by hand. (Sorry for the bad format; they should make this form 80 characters wide.)
I am on node sbetv1.
>@wim.lis
>ty wim.lis
$ INITIALIZE /QUEUE wim -
/BATCH -
/START -
/AUTOSTART_ON = (sbetv1::,sbetv2::)
>sh que wim/fu
Batch queue WIM, idle, on SBETV1::
/AUTOSTART_ON=(SBETV1::,SBETV2::) /BASE_PRIORITY=4 /JOB_LIMIT=1
/OWNER=[SYSMGR,SYSTEM] /PROTECTION=(S:M,O:D,G:R,W:S)
>disab auto/q
>sh que wim/fu
Batch queue WIM, idle, on SBETV2::
/AUTOSTART_ON=(SBETV1::,SBETV2::) /BASE_PRIORITY=4 /JOB_LIMIT=1
/OWNER=[SYSMGR,SYSTEM] /PROTECTION=(S:M,O:D,G:R,W:S)
>enab auto/q
>sh que wim/fu
Batch queue WIM, idle, on SBETV2::
/AUTOSTART_ON=(SBETV1::,SBETV2::) /BASE_PRIORITY=4 /JOB_LIMIT=1
/OWNER=[SYSMGR,SYSTEM] /PROTECTION=(S:M,O:D,G:R,W:S)
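To do that failback by hand, one way (assuming the queue is idle; names as in the test above) is to stop the queue with a reset and restart it, so it is claimed again by the first available node in its /AUTOSTART_ON list:
$ STOP /QUEUE /RESET WIM
$ START /QUEUE WIM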
03-08-2004 01:50 AM
Re: OpenVMS cluster print queue setup
I never had to use it (on the contrary, we try to be as homogeneous as possible), but that is how I remembered the help text:
"
INIT/QUE/AUTO
... you can specify more than one node ... in the preferred order in which nodes should claim the queue.
"
So either my understanding of English is not exact enough, or the help text is not clear enough, or the text is no longer correct.
Sorry for any confusion I may have caused....
Jan
03-08-2004 03:02 AM
Re: OpenVMS cluster print queue setup
No problem. But if you were right, the command would have to close the queue gracefully (as with /NEXT) and then move it to the other node.
And /NEXT can take ages ...
Wim
03-09-2004 01:30 AM
Re: OpenVMS cluster print queue setup
I agree with Wim: the autostart queues will not fail back to the problem node after it's back up. And that is not necessarily a bad thing.
The reason I revived this topic was that I was unsure of how the queue manager worked with autostart queues. I understand how to set them up; I just was not clear on how VMS started them at boot. I confirmed what Wim said in an earlier reply: autostart queues never stop unless someone or something issues STOP/QUEUE (even on shutdown). Therefore, there is no need to start the queues on the cluster during boot; only an ENABLE AUTOSTART/QUEUES in the startup procedure.
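So the per-node, per-boot part (e.g. in SYSTARTUP_VMS.COM) can be as small as this sketch; the queue definitions themselves live in the shared queue database and need no re-creation at boot:
$ ! Let this node claim its autostart queues after boot
$ ENABLE AUTOSTART /QUEUES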
Thanks to everyone for their input. It has been very helpful.
MC
03-09-2004 06:48 PM
Re: OpenVMS cluster print queue setup
If you want to do load balancing, schedule a job each day. For each queue, do a STOP/QUEUE/NEXT, wait until the queue is stopped, do a STOP/QUEUE/RESET (plus a STOP/ID of the TCP symbionts, because the symbiont does not always stop on STOP/QUEUE), and then a START/QUEUE.
This will
1) start the queue on the first node listed in the autostart list
2) reset the queue, which eliminates 80% of your printer problems (e.g. symbiont bugs, quota problems, etc.)
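A sketch of such a daily job for one queue (the queue name is illustrative, and the STOP/ID step is left as a comment because it needs the lingering symbiont's PID):
$ STOP /QUEUE /NEXT MY_PRINT_QUEUE
$ WAIT_LOOP:
$   IF .NOT. F$GETQUI("DISPLAY_QUEUE","QUEUE_STOPPED","MY_PRINT_QUEUE") -
        THEN WAIT 00:00:10
$   IF .NOT. F$GETQUI("DISPLAY_QUEUE","QUEUE_STOPPED","MY_PRINT_QUEUE") -
        THEN GOTO WAIT_LOOP
$ STOP /QUEUE /RESET MY_PRINT_QUEUE
$ ! STOP/ID=<symbiont_pid> here if the TCP symbiont process lingers
$ START /QUEUE MY_PRINT_QUEUE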
Wim