<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Shared interlocked queues in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758107#M41212</link>
    <description>Hi Brian,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; Is this actually possible or did I misread/misunderstand what the manual&lt;BR /&gt;&amp;gt;&amp;gt; was saying.&lt;BR /&gt;Using interlocked queues, you are ensuring co-ordinated access to the queues&lt;BR /&gt;by multiple processes. You would achieve synchronization between multiple&lt;BR /&gt;processes accessing the queue at the same time.&lt;BR /&gt;&lt;BR /&gt;Check these links -&lt;BR /&gt;* Interlocks, Queues and Reentrancy? &lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/wizard/wiz_6643.html" target="_blank"&gt;http://h71000.www7.hp.com/wizard/wiz_6643.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;* LIB$INSQHI&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/82final/5932/5932pro_031.html" target="_blank"&gt;http://h71000.www7.hp.com/doc/82final/5932/5932pro_031.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;* LIB$INSQHI programs&lt;BR /&gt;&lt;A href="http://www.eight-cubed.com/examples/framework.php?file=lib_que.c" target="_blank"&gt;http://www.eight-cubed.com/examples/framework.php?file=lib_que.c&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Murali</description>
    <pubDate>Fri, 25 Feb 2011 09:16:35 GMT</pubDate>
    <dc:creator>P Muralidhar Kini</dc:creator>
    <dc:date>2011-02-25T09:16:35Z</dc:date>
    <item>
      <title>Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758106#M41211</link>
      <description>Good Morning Folks,&lt;BR /&gt;&lt;BR /&gt;I've been looking at the various routines for interlocked queues (LIB$INSQHI, LIB$REMQHI etc.) and according to the manuals they can be used for interprocess communication. Is this actually possible, or did I misread/misunderstand what the manual was saying?&lt;BR /&gt;&lt;BR /&gt;cheers&lt;BR /&gt;&lt;BR /&gt;Brian</description>
      <pubDate>Fri, 25 Feb 2011 08:23:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758106#M41211</guid>
      <dc:creator>Brian Reiter</dc:creator>
      <dc:date>2011-02-25T08:23:21Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758107#M41212</link>
      <description>Hi Brian,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; Is this actually possible or did I misread/misunderstand what the manual&lt;BR /&gt;&amp;gt;&amp;gt; was saying.&lt;BR /&gt;Using interlocked queues, you are ensuring co-ordinated access to the queues&lt;BR /&gt;by multiple processes. You would achieve synchronization between multiple&lt;BR /&gt;processes accessing the queue at the same time.&lt;BR /&gt;&lt;BR /&gt;Check these links -&lt;BR /&gt;* Interlocks, Queues and Reentrancy? &lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/wizard/wiz_6643.html" target="_blank"&gt;http://h71000.www7.hp.com/wizard/wiz_6643.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;* LIB$INSQHI&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/82final/5932/5932pro_031.html" target="_blank"&gt;http://h71000.www7.hp.com/doc/82final/5932/5932pro_031.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;* LIB$INSQHI programs&lt;BR /&gt;&lt;A href="http://www.eight-cubed.com/examples/framework.php?file=lib_que.c" target="_blank"&gt;http://www.eight-cubed.com/examples/framework.php?file=lib_que.c&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Murali</description>
      <pubDate>Fri, 25 Feb 2011 09:16:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758107#M41212</guid>
      <dc:creator>P Muralidhar Kini</dc:creator>
      <dc:date>2011-02-25T09:16:35Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758108#M41213</link>
      <description>Having a global section with a shared queue manipulated by the INSQHI/REMQHI routines is certainly a way of doing interprocess communication.&lt;BR /&gt;&lt;BR /&gt;See also the HP OpenVMS Programming Concepts Manual&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/82final/5841/5841pro.html" target="_blank"&gt;http://h71000.www7.hp.com/doc/82final/5841/5841pro.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 25 Feb 2011 10:25:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758108#M41213</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2011-02-25T10:25:01Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758109#M41214</link>
      <description>&lt;P&gt;These interlocked queues are re-entrant and operate within shared memory global sections... A code example of queue calls here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/wizard/wiz_6984.html" target="_blank"&gt;http://h71000.www7.hp.com/wizard/wiz_6984.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;It's common to see these primitives used while maintaining work and free queues across ASTs and mainlines and across processes within global sections, for instance.&lt;BR /&gt;&lt;BR /&gt;If you're using C, there are compiler built-ins for interlocked operations that might be of interest here. (That avoids the RTL call.)&lt;BR /&gt;&lt;BR /&gt;Shared memory does not provide event notification:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/wizard/wiz_2637.html" target="_blank"&gt;http://h71000.www7.hp.com/wizard/wiz_2637.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;A common scheme has a free queue and a work queue, and ASTs flying around to field I/O into or out of buffers in a process section or in shared memory (there's not a significant difference between process and global memory), allowing you to operate without additional interlocks. It's common to see this mixed with a $hiber and $wake scheme.&lt;BR /&gt;&lt;BR /&gt;I've posted 64-bit section example C code here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://labs.hoffmanlabs.com/node/1413" target="_blank"&gt;http://labs.hoffmanlabs.com/node/1413&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Using 64-bit address space keeps (big) sections out of the very limited P0 address space on OpenVMS Alpha and OpenVMS I64.&lt;BR /&gt;&lt;BR /&gt;If you're going to roll your own shared memory, this /ASTs and interlocked queues/ scheme would be a common solution to asynchronous requirements, and not the use of attention ASTs and such. This is intended to grab the data and enqueue it (or dequeue the data and transmit), rather than the extra effort and complication of an attention AST. This applies to design questions such as this recent example:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h30499.www3.hp.com/t5/Languages-and-Scripting/terminal-QIO-test-for-read/m-p/4758206#M8142" target="_blank"&gt;http://h30499.www3.hp.com/t5/Languages-and-Scripting/terminal-QIO-test-for-read/m-p/4758206#M8142&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;In the current era, I'd tend to look to a higher-level and preferably network-capable library or interface for resolving these and related requirements. Preferably portable, too. The memory- and cache-level interfaces are fairly fussy around alignment and atomicity and bugs tend to be obscure, there's the lack of notifications mentioned, and the single-host nature of this interface. (The interlocked calls help with this, but you're still rolling your own communications protocol.)&lt;BR /&gt;&lt;BR /&gt;Anyway, the power just failed. Again. Posting this from batteries. So this is a little terse.&lt;/P&gt;</description>
      <pubDate>Thu, 25 Aug 2011 21:10:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758109#M41214</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2011-08-25T21:10:55Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758110#M41215</link>
      <description>&lt;BR /&gt;Hi Brian, &lt;BR /&gt;&lt;BR /&gt;As the prior replies indicate, the interlocked queue tools are just a small (but critical) component of an interprocess communication method using shared memory. Potentially fast, but limiting!&lt;BR /&gt;&lt;BR /&gt;What problem are you really trying to solve?&lt;BR /&gt;&lt;BR /&gt;Before you code anything, please be sure to check out the totally under-recognized but very powerful OpenVMS "Intra-Cluster Communication" tools:&lt;BR /&gt;&lt;BR /&gt;"Intra-cluster communication (ICC), available through ICC system services, forms an application program interface (API) for process-to-process communications. For large data transfers, intra-cluster communication is the highest performance OpenVMS application communication mechanism, better than standard network transports and mailboxes."&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/73final/5841/5841pro_008.html" target="_blank"&gt;http://h71000.www7.hp.com/doc/73final/5841/5841pro_008.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Better still, step back and articulate your need ... for speed and functionality.&lt;BR /&gt;&lt;BR /&gt;- How many messages/second? &lt;BR /&gt;- How many MB/second?&lt;BR /&gt;- Within the world? Sockets!&lt;BR /&gt;- Within a cluster? ICC? RMS shared file records?&lt;BR /&gt;- Strictly within the node? Global section, mailbox, RMS&lt;BR /&gt;- Within a NUMA domain? Global section&lt;BR /&gt;&lt;BR /&gt;Good luck!&lt;BR /&gt;&lt;BR /&gt;Hein van den Heuvel ( at gmail )&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Fri, 25 Feb 2011 15:18:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758110#M41215</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2011-02-25T15:18:59Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758111#M41216</link>
      <description>Brian,&lt;BR /&gt;&lt;BR /&gt;  Adding my 2c, yes it's possible to use interlocked queues for interprocess communication, but very fiddly. It's a bit like saying "This is a brick, you can use them to build a house". &lt;BR /&gt;&lt;BR /&gt;  The statement is true, but it omits to mention all the other work required to achieve the goal.&lt;BR /&gt;&lt;BR /&gt;  The basic principle is the queue is in shared memory. Part of the queue element structure is used as an interlock, the mechanics of which are hidden by the INSQHI and REMQHI routines (which are themselves really just jackets around the corresponding VAX machine language instructions - on Alpha and Integrity implemented using lower level primitives).&lt;BR /&gt;&lt;BR /&gt;  As Hein has suggested, you're probably better off using something higher up the process synchronisation food chain. Memory based mechanisms are limited to processes that can see the same memory, which puts some fairly severe constraints on scaling.&lt;BR /&gt;&lt;BR /&gt;  If you're happy to use VMS specific code, I'll second Hein's recommendation for ICC - an underrated and underused feature for building cluster wide applications. It's fast and cluster transparent, but you have to accept that it's seriously non-portable, and a bit tricky to get your head around initially.&lt;BR /&gt;&lt;BR /&gt;  I'd suggest designing a process synchronisation layer that presents the most appropriate API for your application. Hide the details of how you implement it from your application code; that way you can change the mechanism, if necessary, without affecting the application logic.</description>
      <pubDate>Sun, 27 Feb 2011 21:21:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758111#M41216</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2011-02-27T21:21:00Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758112#M41217</link>
      <description>Brian, what exactly are you trying to do?  Are you trying to co-ordinate access to volatile data, or trying to communicate between processes?&lt;BR /&gt;&lt;BR /&gt;Are you running on a cluster or on a standalone machine?  The latter can optionally use some simple interprocess comms that apply to a single machine rather than a cluster.&lt;BR /&gt;&lt;BR /&gt;I'm guessing that you are on a standalone machine because $INSQHI etc. don't operate across a cluster.&lt;BR /&gt;&lt;BR /&gt;Also how much data, if any, is involved? Co-ordinating access to a small amount of data can be done via some methods that aren't available to larger amounts of data.&lt;BR /&gt;&lt;BR /&gt;Please tell us more about what you are trying to do so that an appropriate course of action can be suggested.&lt;BR /&gt;</description>
      <pubDate>Sun, 27 Feb 2011 22:11:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758112#M41217</guid>
      <dc:creator>John McL</dc:creator>
      <dc:date>2011-02-27T22:11:15Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758113#M41218</link>
      <description>Brian,&lt;BR /&gt;&lt;BR /&gt;Yes, INSQHI and REMQHI do provide the tools to implement shared queues. However, for many years (actually decades) I have been recommending that they be approached with extreme caution. &lt;BR /&gt;&lt;BR /&gt;Anytime that two programs are sharing an address space, there is a serious potential for subtle and painful problems.&lt;BR /&gt;&lt;BR /&gt;As Hein and the two Johns have noted, ICC and other methods fit a wide-variety of needs. Is there really the justification for the potential hazards?&lt;BR /&gt;&lt;BR /&gt;My personal favorite is often DECnet logical links, even within a single system. They are relatively fast (even on older hardware), and they provide a pre-packaged set of mechanisms to deal with related task terminations and other events. If the needed efficiency can be obtained, there is no reason to increase complexity.&lt;BR /&gt;&lt;BR /&gt;In any event, when I implement systems, I hid that level deep under several layers of abstraction. Thus, if there is a need to change the underlying implementation, it can be done without code re-work at higher levels.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Mon, 28 Feb 2011 07:31:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758113#M41218</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2011-02-28T07:31:42Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758114#M41219</link>
      <description>Do not use ICC.  Do not use DECnet.   &lt;BR /&gt;&lt;BR /&gt;Not until after you look carefully at your options and alternatives.  &lt;BR /&gt;&lt;BR /&gt;...At higher-level middleware libraries (commercial and open source). &lt;BR /&gt;&lt;BR /&gt;...At a language with higher-level command and control and communications support than the socket- or channel-like primitives. &lt;BR /&gt;&lt;BR /&gt;...And at network communications with IP and sockets.&lt;BR /&gt;&lt;BR /&gt;Then look at ICC and DECnet.&lt;BR /&gt;&lt;BR /&gt;Why?  The ICC and DECnet interfaces are not portable, and these (and IP sockets) are all low-level network interfaces.  Primitives.  Networking tosses all manner of odd timing values and random disconnections into a design. &lt;BR /&gt;&lt;BR /&gt;Longer-term, you'll need to rewrite or replicate the logic from most or all of the ICC and DECnet pieces you create here, if you want to add other hosts into your environment, or as part of a port.  &lt;BR /&gt;&lt;BR /&gt;Entirely your call and your project and your budget, of course.  But don't walk into the creation of your own low-level parallel processing and networking code thinking "don't worry, be happy" thoughts.  It is entirely possible to do this, sure.  But if there's a race or a deadlock or a cache handling error or other bug in your design, you'll almost certainly get to find it.  And some of these bugs can be really nasty to find.  &lt;BR /&gt;&lt;BR /&gt;And if you do start down this roll-your-own course, integrate debugging and tracing from the outset.&lt;BR /&gt;&lt;BR /&gt;Details including directory services and resource location also come into play, particularly if you're creating an arbitrary and abstract library.  (Think DECdns, DNS or DNS-SD, for instance.)&lt;BR /&gt;&lt;BR /&gt;I've written this middleware.  Abstracting communications and using DECnet and shared memory for same- and multi-host communications.  It is entirely possible.  
Realize that this effort can easily grow past a small project.  And definitely build in tracing and debugging.</description>
      <pubDate>Mon, 28 Feb 2011 13:23:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758114#M41219</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2011-02-28T13:23:55Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758115#M41220</link>
      <description>Will VMS support InfiniBand (or a.n.other high-speed interconnect) and the socket(ish) API that appears to go with it?&lt;BR /&gt;&lt;BR /&gt;Discussing the outside world again, sorry.&lt;BR /&gt;&lt;BR /&gt;Cheers Richard Maher</description>
      <pubDate>Mon, 28 Feb 2011 13:33:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758115#M41220</guid>
      <dc:creator>Richard J Maher</dc:creator>
      <dc:date>2011-02-28T13:33:46Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758116#M41221</link>
      <description>Hi Folks,&lt;BR /&gt;&lt;BR /&gt;Well, after Monday's plumbing fiasco, I can now try and answer your questions as best I can (although the forum software makes it tricky to see the response without resorting to notepad).&lt;BR /&gt;&lt;BR /&gt;To be honest, I was just curious as to how it could be done (and to see if there're any examples about); the LIB$ and Programming Concepts manuals mention that it could be done but were a tad hazy about the specifics. &lt;BR /&gt;&lt;BR /&gt;The situation I have is that under extreme loads the rate of messages coming into the system can swamp the processes. All interprocess communication is done via mailboxes, and these mailboxes filling up caused delays across the board and the eventual loss of messages. This situation is meant to be rare, but it is also transient and may only last a few minutes. The system will cope until the last minute or so.&lt;BR /&gt;&lt;BR /&gt;The plan is to use ASTs and the queue routines to keep the mailboxes empty and feed the main process from the new queue. This (as long as memory allows) should allow the system to get over the processing hump without losing any data. &lt;BR /&gt;&lt;BR /&gt;Many thanks for your help - interesting as always&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Brian</description>
      <pubDate>Tue, 01 Mar 2011 15:35:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758116#M41221</guid>
      <dc:creator>Brian Reiter</dc:creator>
      <dc:date>2011-03-01T15:35:31Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758117#M41222</link>
      <description>Hmmm,&lt;BR /&gt;&lt;BR /&gt;But in a sense mailboxes ARE an in-memory shared queue managed by interlocked queue instructions.&lt;BR /&gt;&lt;BR /&gt;So if you plan to keep using the mailbox as base communication and then add a layer to it, things may just get worse!&lt;BR /&gt;&lt;BR /&gt;Maybe you can just create the troublesome mailboxes with much more mailbox quota and/or stop waiting for each message to be consumed?&lt;BR /&gt;&lt;BR /&gt;Did you check Bruce Ellis's writeup?&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/openvms/journal/v9/mailboxes.pdf" target="_blank"&gt;http://h71000.www7.hp.com/openvms/journal/v9/mailboxes.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Now the QIO mechanism is not cheap (performance-wise). &lt;BR /&gt;&lt;BR /&gt;So if you were to replace the QIO with an alternative method then you may well come out ahead, but it may be a significant investment to get there.&lt;BR /&gt;&lt;BR /&gt;Proof for the price of a QIO? &lt;BR /&gt;Check out the NULL device. Cheap, right? No!&lt;BR /&gt;The 5MB example file below takes 10x longer to copy to NL: than to a disk file.&lt;BR /&gt;(1.3 GHz RX2600, cached file, timings in deci-seconds)&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;$ creat/fdl="file; allo 10000; reco; size 10; form fix" fix.tmp&lt;BR /&gt;$ set file/end fix.tmp&lt;BR /&gt;$ @time copy fix.tmp nl:&lt;BR /&gt;Dirio=512082 Bufio=    11 Kernel= 549 RMS= 297 DCL=0 User=  37 Elapsed=   941&lt;BR /&gt;$ @time copy fix.tmp tmp.tmp&lt;BR /&gt;Dirio=   167 Bufio=    16 Kernel=   2 RMS=   0 DCL=0 User=   0 Elapsed=    87&lt;BR /&gt;$ @time copy fix.tmp nl:&lt;BR /&gt;Dirio=512082 Bufio=    11 Kernel= 534 RMS= 315 DCL=0 User=  27 Elapsed=   938&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 01 Mar 2011 16:51:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758117#M41222</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2011-03-01T16:51:42Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758118#M41223</link>
      <description>Hein,&lt;BR /&gt;&lt;BR /&gt;With all due respect, the timing comparison between NL and a disk is flawed.&lt;BR /&gt;&lt;BR /&gt;NL is a record oriented device. The disk copy is done in multiblock mode. This is comparing apples and oranges.&lt;BR /&gt;&lt;BR /&gt;What was the format of the data in the file? &lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Tue, 01 Mar 2011 19:06:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758118#M41223</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2011-03-01T19:06:13Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758119#M41224</link>
      <description>&amp;gt;&amp;gt; the timing comparison between NL and a disk is flawed.&lt;BR /&gt;&lt;BR /&gt;I was not comparing them. &lt;BR /&gt;I was just showing the price of a QIO (to the NL: device).&lt;BR /&gt;About 1 ms of kernel time per QIO !&lt;BR /&gt;A mailbox QIO takes a fraction longer. See below.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; NL is a record oriented device. The disk copy is done in multiblock mode. This is comparing apples and oranges.&lt;BR /&gt;&lt;BR /&gt;Both fruits. Both copy all the data.&lt;BR /&gt;Some folks still believe using NL: speeds things up. &lt;BR /&gt;It might not... indeed because it is a record device.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; What was the format of the data in the file? &lt;BR /&gt;&lt;BR /&gt;The whole test was shown.&lt;BR /&gt;File as per FDL in the test: 512,000 records of 10 bytes.&lt;BR /&gt;&lt;BR /&gt;Silly mailbox test below.&lt;BR /&gt;It just shows that adding a post-processor to a mailbox based communication method is not likely to address fundamental issues.&lt;BR /&gt;&lt;BR /&gt;fwiw,&lt;BR /&gt;&lt;BR /&gt;Hein&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;$ cre/mail my_mbx&lt;BR /&gt;$ spaw/nowait/proc=hein_mbx @time copy my_mbx: nl:&lt;BR /&gt;%DCL-S-SPAWNED, process HEIN_MBX spawned&lt;BR /&gt;$ @time copy fix.tmp my_mbx&lt;BR /&gt;Dirio=    82 Bufio=512014 Kernel= 634 RMS= 269 DCL=0 User=  46 Elapsed=  1908&lt;BR /&gt;$&lt;BR /&gt;Dirio=512001 Bufio=512011 Kernel=1142 RMS= 450 DCL=0 User=  35 Elapsed=n.a.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 01 Mar 2011 19:25:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758119#M41224</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2011-03-01T19:25:57Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758120#M41225</link>
      <description>Ooops, too quickly. &lt;BR /&gt;Got my math wrong.&lt;BR /&gt;It didn't feel right, but needed to get back to work.&lt;BR /&gt;&lt;BR /&gt;Dirio=512082 ...  Kernel= 534 deci-seconds = 53,400 ms.&lt;BR /&gt;&lt;BR /&gt;So that is about 10 QIOs per ms.&lt;BR /&gt;Or about 100 microseconds of kernel time per QIO. &lt;BR /&gt;Much better!&lt;BR /&gt;Sorry.&lt;BR /&gt;&lt;BR /&gt;Hein&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 01 Mar 2011 19:45:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758120#M41225</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2011-03-01T19:45:40Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758121#M41226</link>
      <description>Brian,&lt;BR /&gt;&lt;BR /&gt;  As Hein has pointed out, a mailbox is pretty much what you're talking about implementing. Indeed, if you can find the sources for the mailbox driver, I'd expect you'll find some excellent examples of how to use the INSQHI and REMQHI instructions ;-)&lt;BR /&gt;&lt;BR /&gt;  If you're having trouble dealing with spikes in load, make sure your mailboxes have plenty of headroom. See "bufquo" parameter for $CREMBX. This used to be limited to absurdly low values (64K?), but since circa V7.3 it's now 32 bit, limited only by process BYTLM and system NPAGEDYN.&lt;BR /&gt;&lt;BR /&gt; If your $CREMBX doesn't specify a buffer quota, it inherits DEFMBXBUFQUO which defaults to 1K (yes, *K*). If you were going to just shovel the messages out of a mailbox and into your own mailbox, you may as well make room for them in the system mailbox and save yourself the work. The only caveat is the system allocates mailboxes from NPAGEDYN, so the resource isn't quite as cheap as pageable virtual memory.&lt;BR /&gt;&lt;BR /&gt;  That said. If you have a chain of processes that pass messages through mailboxes, as you've no doubt discovered, you don't want them synchronising with RWMBX (very expensive!).&lt;BR /&gt;&lt;BR /&gt;  The most obvious process design:&lt;BR /&gt;&lt;BR /&gt;loop&lt;BR /&gt;  $QIOW mailbox READVBLK into buffer&lt;BR /&gt;  process buffer&lt;BR /&gt;endloop&lt;BR /&gt;&lt;BR /&gt;  can be a problem if processing potentially exceeds message interarrival time. You can move the spikes from the mailbox into local process virtual memory using a work queue design. 
It then becomes two threads, like this:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;MailboxRead&lt;BR /&gt;  $QIO mailbox READVBLK into buffer AST MailboxAST&lt;BR /&gt;End MailboxRead&lt;BR /&gt;&lt;BR /&gt;MailboxAST&lt;BR /&gt;  put buffer onto work queue&lt;BR /&gt;  MailboxRead&lt;BR /&gt;  $WAKE&lt;BR /&gt;End MailboxAST&lt;BR /&gt;&lt;BR /&gt;MAIN&lt;BR /&gt;  MailboxRead&lt;BR /&gt;  Loop&lt;BR /&gt;    $HIBER&lt;BR /&gt;    $SETAST 0 ! block ASTs&lt;BR /&gt;      remove buffer from work queue&lt;BR /&gt;    $SETAST restore&lt;BR /&gt;    If gotbuffer THEN process buffer&lt;BR /&gt;  Endloop&lt;BR /&gt;&lt;BR /&gt;Note there's no need for the work queue to be in shared memory and you don't need to use INSQHI/REMQHI, as you're using AST blocks to synchronise the threads. The AST thread can interrupt processing to add more buffers to the work queue.&lt;BR /&gt;</description>
      <pubDate>Tue, 01 Mar 2011 21:17:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758121#M41226</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2011-03-01T21:17:53Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758122#M41227</link>
      <description>I think there may have been an example of this in some VMS documentation many years ago.  The example might have been about an airline reservation system.  I'm thinking the documentation could have been around 1985-88 timeframe because I used the concept at a site that I worked at in late 1989. &lt;BR /&gt;&lt;BR /&gt;The principles of that code are as follows:&lt;BR /&gt;&lt;BR /&gt;Start by creating a mailbox, grabbing a bunch of buffers and putting them on the "free" list, then set a QIO read on the mailbox using one of the free buffers and with an AST routine.&lt;BR /&gt; &lt;BR /&gt;The QIO AST routine puts the mailbox buffer onto an "active" list then sets the new QIO AST (using one of the "free" buffers) before exiting.&lt;BR /&gt;&lt;BR /&gt;The main code takes the next buffer off the active list, processes it and then puts the used buffer onto the free list.&lt;BR /&gt;&lt;BR /&gt;You'll see some similarity between the initial code and the AST code but because the AST code operates after that initial code (i.e. there's no chance of simultaneous access) there's no reason why the same routine - expand freelist if required, take buffer from freelist, set-up QIO with AST - can't be used for both.&lt;BR /&gt;&lt;BR /&gt;I used LIB$INSQTI (NB. "tail") to handle putting buffers onto the two lists (and LIB$REMQHI to take them off) because I didn't want the main code halfway through putting a buffer onto the free list when the AST routine jumped in and wanted to take a buffer off that list or the other way around with the active list. (As John G says, the other way to do this is to have the main code disable ASTs while taking buffers off lists or putting them on.)</description>
      <pubDate>Tue, 01 Mar 2011 21:47:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758122#M41227</guid>
      <dc:creator>John McL</dc:creator>
      <dc:date>2011-03-01T21:47:28Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758123#M41228</link>
      <description>"...extreme loads...mailboxes filling up..."&lt;BR /&gt;&lt;BR /&gt;Means your configuration doesn't have enough oomph to handle the burst. Find out which part of your configuration is the (current) bottleneck. CPU? I/O? Artificial waits?&lt;BR /&gt;&lt;BR /&gt;"...loss of messages."&lt;BR /&gt;Not caused by the OS's mailbox facility. Your application has a logic error to cause this.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Tue, 01 Mar 2011 23:38:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758123#M41228</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2011-03-01T23:38:18Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758124#M41229</link>
      <description>This reeks of an underpowered and/or overloaded system, and a case where latent application bugs are revealed by the load.&lt;BR /&gt;&lt;BR /&gt;Use DECset PCA to profile the application code, looking for wall-clock and processor usage.  (I'd probably also look at bigger hunks of data; processing individual records from a mailbox or from a file is a slow technique.)&lt;BR /&gt;&lt;BR /&gt;Look for synchronization and coding bugs.  Omitting IOSBs or mishandling IOSBs and omitting return and IOSB status checks are a common trigger of these cases.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/wizard/wiz_1661.html" target="_blank"&gt;http://h71000.www7.hp.com/wizard/wiz_1661.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Mechanisms that provide guaranteed message delivery can generate application-wide wedgies, too :-) and this when the slowest part of the application configuration is overrun.  Your job: find the message loss (that's likely a bug), and find the slowest part of the application.</description>
      <pubDate>Wed, 02 Mar 2011 00:37:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758124#M41229</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2011-03-02T00:37:04Z</dc:date>
    </item>
    <item>
      <title>Re: Shared interlocked queues</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758125#M41230</link>
      <description>Hi Brian,&lt;BR /&gt;&lt;BR /&gt;For those cases (and there are many) when the consumer just cannot keep up with the producer I/we have opted for the lightweight write-to-disk consumer that does nothing more than record/persist the phone-call/transaction/trade and maintain the last-available number in a lock-value-block.&lt;BR /&gt;&lt;BR /&gt;The ultimate down-stream consumer can then read sequentially through this work-queue that can cater for the highest peaks and deepest troughs.&lt;BR /&gt;&lt;BR /&gt;Mailboxes are very limited! Interactive users are fine but PABX or Switch traffic is too much. Horses for courses. Just make sure the event (trade, txn, call) is persistent (unlike the ASX :-) and everything else is lazyable.&lt;BR /&gt;&lt;BR /&gt;Cheers Richard Maher</description>
      <pubDate>Wed, 02 Mar 2011 11:57:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/shared-interlocked-queues/m-p/4758125#M41230</guid>
      <dc:creator>Richard J Maher</dc:creator>
      <dc:date>2011-03-02T11:57:22Z</dc:date>
    </item>
  </channel>
</rss>

