<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Re: $WAITFR behaviour in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076818#M38709</link>
    <description>Hein, maybe I should have said that Claus had solved the problem of the slaves not seeing the setting of the event flag -- I don't mean to say that this is a solution for my application.&lt;BR /&gt;&lt;BR /&gt;It /could/ be a solution, as some missed frames may not be critical - the slaves are intended to run at 50Hz (20ms cycle time), as their position info is monitored by a graphics process (running on a PC). (Currently, the monolithic application does all that the slaves do in this period.) I have not yet run at this rate or given the slaves any work to do; this may then show up the flakiness of the scheduling problem again...&lt;BR /&gt;&lt;BR /&gt;If so, I would probably try the $HIBER/$WAKE solution suggested by Hein; then I have control of the "wakeup list" of process IDs, rather than depending on what I thought $WAITFR would do. (I will need a mechanism for the slaves to register their process ID with the master.)&lt;BR /&gt;&lt;BR /&gt;Finally, I may also try setting the slaves free and running them at 50Hz but asynchronously to each other. This may be no worse than tolerating them missing a cycle when running under the master.</description>
    <pubDate>Tue, 30 Oct 2007 06:54:59 GMT</pubDate>
    <dc:creator>Barry Alford</dc:creator>
    <dc:date>2007-10-30T06:54:59Z</dc:date>
    <item>
      <title>$WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076802#M38693</link>
      <description>I am trying to synchronize two processes using event flags. One process (the Master) runs a wait loop using one event flag in a timer ($SETIMR). Once the timer event flag is set, it is cleared and another event flag is set at the end of the loop. This second flag is then cleared at the top of the loop and the timer is reset:&lt;BR /&gt;&lt;BR /&gt;repeat&lt;BR /&gt;  $SETIMR(timerFlag, cycleTime)&lt;BR /&gt;  $CLREF(eventFlag)&lt;BR /&gt;  $WAITFR(timerFlag)&lt;BR /&gt;  $CLREF(timerFlag)&lt;BR /&gt;  $SETEF(eventFlag)&lt;BR /&gt;&lt;BR /&gt;This is intended to form an "escape mechanism" - the two flags are never both true.&lt;BR /&gt;&lt;BR /&gt;A slave process implements this loop:&lt;BR /&gt;&lt;BR /&gt;repeat&lt;BR /&gt;  $WAITFR(timerFlag)&lt;BR /&gt;  $WAITFR(eventFlag)&lt;BR /&gt;&lt;BR /&gt;This is intended to keep the slave in time with the master process. It relies on $WAITFR having this (documented) behaviour:&lt;BR /&gt;"$WAITFR - Tests a specific event flag and returns immediately if the flag is set; otherwise, the process is placed in a wait state until the event flag is set."&lt;BR /&gt;&lt;BR /&gt;I took that to mean that when the flag is set, the process will definitely be woken up, run, and see the flag as set. However, I do not see this happening; it seems that, because the flags are only set for a short period, these events are "lost" and the slave processes run very erratically (&amp;gt;1 in 10 master cycles missed), as if the flags were being polled by the processes rather than triggered by an event from the OS.&lt;BR /&gt;&lt;BR /&gt;Have I missed something here?&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 06:46:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076802#M38693</guid>
      <dc:creator>Barry Alford</dc:creator>
      <dc:date>2007-10-29T06:46:09Z</dc:date>
    </item>
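The lost-event behaviour described above can be reproduced outside OpenVMS. This is a minimal Python sketch, not VMS code: threading.Event stands in for an event flag, time.sleep stands in for the $SETIMR wait, and all names are invented. The point is that a set()/clear() pair can come and go before the waiting process is ever rescheduled.

```python
import threading
import time

timer_flag = threading.Event()   # stands in for the timer event flag
event_flag = threading.Event()   # stands in for the second "escape" flag

master_cycles = 30
slave_cycles = 0

def master():
    for _ in range(master_cycles):
        event_flag.clear()           # $CLREF(eventFlag)
        time.sleep(0.001)            # $SETIMR + $WAITFR(timerFlag)
        timer_flag.set()             # flag goes up...
        timer_flag.clear()           # ...and straight back down
        event_flag.set()             # $SETEF(eventFlag)

def slave():
    global slave_cycles
    for _ in range(master_cycles):
        if not timer_flag.wait(timeout=0.005):
            continue                 # set/clear happened while we slept: lost
        if event_flag.wait(timeout=0.005):
            slave_cycles += 1

mt = threading.Thread(target=master)
st = threading.Thread(target=slave)
mt.start(); st.start()
mt.join(); st.join()
print(slave_cycles, "of", master_cycles, "cycles seen by the slave")
```

On most runs the slave sees noticeably fewer cycles than the master produced, because nothing makes the master wait for the slave to observe the flag.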
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076803#M38694</link>
      <description>Assuming that timerFlag and eventFlag are in a common efn cluster, then I would expect this to work.  If you find that it is erratic, then I suspect there is a bug in the version of VMS that you are using.&lt;BR /&gt;&lt;BR /&gt;What version of VMS are you using?&lt;BR /&gt;Are all the patches installed?&lt;BR /&gt;Is this a multiprocessor system?&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 08:27:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076803#M38694</guid>
      <dc:creator>Richard Whalen</dc:creator>
      <dc:date>2007-10-29T08:27:20Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076804#M38695</link>
      <description>May we assume these are some sort of common flags?&lt;BR /&gt;Is this a new design? Testing on a multi-CPU system?&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Have I missed something here?&lt;BR /&gt;&lt;BR /&gt;Yes, you missed a glaring timing window.&lt;BR /&gt;Once the timer flag is set, the slave is officially runnable.&lt;BR /&gt;Its waitfr on the timer is done, and the waitfr on the other flag is about to be requested.&lt;BR /&gt;But in the meantime the master sets the other flag, arms the timer, clears the other flag, and waits for the timer.&lt;BR /&gt;The scheduler now starts working on the slave.&lt;BR /&gt;It finally executes the waitfr on the other flag, which at this point is cleared. So it really waits for the timer.&lt;BR /&gt;One cycle missed!&lt;BR /&gt;&lt;BR /&gt;KISS!&lt;BR /&gt;&lt;BR /&gt;One event flag is generally too hard to deal with already.&lt;BR /&gt;Be sure to check out $HIBER / $WAKE instead.&lt;BR /&gt;Unlike event flags, pending wakes are remembered.&lt;BR /&gt;Much easier!&lt;BR /&gt;&lt;BR /&gt;Good luck,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 08:36:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076804#M38695</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-10-29T08:36:13Z</dc:date>
    </item>
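Hein's point that "pending wakes are remembered" can be sketched with a counting semaphore. This is a Python stand-in, not VMS code, and it is slightly stronger than $HIBER/$WAKE (whose wake-pending flag remembers at most one outstanding wake, where a semaphore counts them); all names are invented.

```python
import threading

wake = threading.Semaphore(0)   # counter of pending "wakes"
received = []

def slave():
    for n in range(3):
        wake.acquire()          # $HIBER analog: sleeps unless a wake is pending
        received.append(n)

for _ in range(3):
    wake.release()              # three $WAKE analogs before the slave even runs

t = threading.Thread(target=slave)
t.start()
t.join()
# received is now [0, 1, 2]: none of the early wakes were lost
```

Contrast this with the event-flag version, where a set() issued before the waiter arrives and cleared again simply vanishes.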
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076805#M38696</link>
      <description>I have tried it on V7.2-1 and V7.3-2 (both on a DS10, single processor) so far, and am about to try V8.1 (PersonalAlpha). All will be pretty much unpatched :-o. (I am reluctant to believe that something as fundamental as this would be broken!)&lt;BR /&gt;&lt;BR /&gt;I am using a common event flag cluster:&lt;BR /&gt;[in Fortran]&lt;BR /&gt;$ASCEFC(%VAL(iEflag), %DESCR("TIMER"), %VAL(0), %VAL(0))&lt;BR /&gt;...for both flags (69 &amp;amp; 70, in fact).&lt;BR /&gt;&lt;BR /&gt;Hein, I see your point, but I think that would make the slave miss at most one firing of the eventFlag. Consider the flags as doors into and out of a room - once the slave enters the room (on the timerFlag), it waits for the exit to open (on the eventFlag). Meanwhile, the master has opened the exit many times but the slave doesn't come out!&lt;BR /&gt;&lt;BR /&gt;The problem with using $HIBER/$WAKE is that slaves will have to register with the master to get woken up; I wanted to keep things more ad hoc...&lt;BR /&gt;&lt;BR /&gt;(The processes will, in fact, map a shared region of memory, but I wanted to find a general algorithm. Back to my old college textbooks to refresh my hazy memories of P &amp;amp; V and co-routines?)&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 09:22:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076805#M38696</guid>
      <dc:creator>Barry Alford</dc:creator>
      <dc:date>2007-10-29T09:22:32Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076806#M38697</link>
      <description>Yabut... the one missed 'other' wait is but the beginning. The slave will also miss a timer event while waiting for the other flag. So that's 2!&lt;BR /&gt;&lt;BR /&gt;Be happy that you tested this on a single-CPU system. You might not have found the design problem on a multi-CPU system until way too late, but it would have been equally broken!&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; (I am reluctant to believe that something as fundamental as this would be broken!)&lt;BR /&gt;&lt;BR /&gt;Ah, give yourself 8 points!&lt;BR /&gt;That would have been 10 points if you had written 'refuse to believe'.&lt;BR /&gt;[Yeah, I know you cannot give points to yourself.]&lt;BR /&gt;&lt;BR /&gt;When reading the base topic, I half expected to read 'waitfr is broken', and was pleased to see that was not there, replaced instead by 'have I missed something'. Excellent.&lt;BR /&gt;Now I see it was fully intentional, and it pleases me.&lt;BR /&gt;There are too many daft individuals out there who think their first dabbles must have uncovered a major flaw in fundamental stuff. Not!&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 09:36:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076806#M38697</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-10-29T09:36:41Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076807#M38698</link>
      <description>Ah Hein! I'm not awarding any points just yet!&lt;BR /&gt;&lt;BR /&gt;Well, well! When I monitor the slave process with:&lt;BR /&gt;$ SHOW/PROC/CONT/ID=&amp;lt;SLAVEID&amp;gt;&lt;BR /&gt;...it all works _perfectly_! Stop monitoring, and it all goes sticky again.&lt;BR /&gt;&lt;BR /&gt;How d'you like them apples? :-)</description>
      <pubDate>Mon, 29 Oct 2007 10:54:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076807#M38698</guid>
      <dc:creator>Barry Alford</dc:creator>
      <dc:date>2007-10-29T10:54:14Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076808#M38699</link>
      <description>Have you missed something?  Yes, you've missed that event flags are an OpenVMS analog of "die Lorelei", or of Homer's Sirens: a construct that serves to lure unsuspecting programmers onto the rocks of pain and suffering.  By sheer coincidence, I posted a similar statement to this one -- and a description of why you're headed for the rocks -- just last night.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://64.223.189.234/node/613" target="_blank"&gt;http://64.223.189.234/node/613&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Event flags only look simple.  They can get to be very nasty, in terms of spurious triggers, problems with scaling, limits on the number of parallel events, and otherwise.&lt;BR /&gt;&lt;BR /&gt;With no details on the application, I might tend to use locks, and potentially lock value blocks, here.  Mayhap shared memory.  I'd want to know what "keep time with the master" means here -- some details and some background on the application synchronization requirements would be useful.&lt;BR /&gt;&lt;BR /&gt;Stephen Hoffman&lt;BR /&gt;HoffmanLabs LLC&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 11:34:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076808#M38699</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-10-29T11:34:27Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076809#M38700</link>
      <description>Thanks, Hoff, for the warning. I will look up that link tonight (restricted access here).&lt;BR /&gt;&lt;BR /&gt;All I asked for was clarification of how $WAITFR works; it seems, from your words and from me scraping my ship on the rocks, that the Land of Event Flags is not the place for me!&lt;BR /&gt;&lt;BR /&gt;The application simulates various machines; currently each machine is processed serially in a time step. This makes changes a problem, in that the whole app has to be rebuilt. We have toyed with shared libraries and late binding, and my aim with this exercise is to experiment with multiple processes, each simulating one machine, but running in step with each other in time. (Did someone say "threads" out there?)&lt;BR /&gt;&lt;BR /&gt;Anyhow, I will now try Plan B: use the master timer to wake up processes, then a cycle number in shared memory to ensure only one processing step per master cycle.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 12:05:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076809#M38700</guid>
      <dc:creator>Barry Alford</dc:creator>
      <dc:date>2007-10-29T12:05:48Z</dc:date>
    </item>
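Barry's "Plan B" - a cycle number in shared state, with exactly one processing step per master cycle - can be sketched in Python with a threading.Condition standing in for shared memory plus a wakeup; this is an illustration only, and all names (cycle, acks, N_SLAVES) are invented. The master publishes a new cycle number and then listens for the slaves, so no cycle can be skipped or double-processed.

```python
import threading

cond = threading.Condition()
cycle = 0          # the shared cycle number ("shared memory")
acks = 0           # how many slaves finished the current cycle
N_SLAVES = 2
CYCLES = 10

def master():
    global cycle, acks
    for _ in range(CYCLES):
        with cond:
            acks = 0
            cycle += 1                   # publish the new cycle number
            cond.notify_all()            # "wake" every slave
            cond.wait_for(lambda: acks == N_SLAVES)  # listen to the slaves

def slave(log):
    global acks
    seen = 0
    for _ in range(CYCLES):
        with cond:
            cond.wait_for(lambda: cycle > seen)
            seen = cycle
            log.append(seen)             # exactly one step per master cycle
            acks += 1
            cond.notify_all()

logs = [[] for _ in range(N_SLAVES)]
workers = [threading.Thread(target=slave, args=(lg,)) for lg in logs]
workers.append(threading.Thread(target=master))
for t in workers:
    t.start()
for t in workers:
    t.join()
# each log now reads 1, 2, ..., CYCLES: no cycle missed, none repeated
```

Because the wait is on a predicate over shared state rather than on a momentary flag, a slave that is scheduled late still sees the current cycle number - the race that loses $SETEF events simply cannot occur.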
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076810#M38701</link>
      <description>Barry,&lt;BR /&gt;&lt;BR /&gt;  I agree with Hein and Hoff. Event flags are very hard to get right. They tend to have nasty timing windows, and because there are so few of them they often get overloaded, so you have to deal with spurious wakeups.&lt;BR /&gt;&lt;BR /&gt;  Consider using a pair of locks, maybe called TICK and TOCK. You can lock-step your processes by cycling the locks, converting to EX then NL in sequence. Put your cycle number in the lock value block.&lt;BR /&gt;&lt;BR /&gt;  Now, since your locks are exclusive to the specific pair of processes, you avoid any logic for spurious "wakes", and you're guaranteed handshaking. Moreover, since they're locks, the mechanism will work across a cluster (and it can be scaled up to multiple processes fairly easily - just add another TICK for each slave). With some extra logic on the lock value block, you could also build in a way to monitor the presence of the other process.&lt;BR /&gt;&lt;BR /&gt;Master&lt;BR /&gt;repeat&lt;BR /&gt;  $ENQ TOCK CVT-&amp;gt;EX ; wait for slave&lt;BR /&gt;  $ENQ TICK CVT-&amp;gt;EX ; block slave&lt;BR /&gt;  $SETIMR&lt;BR /&gt;  wait for timer&lt;BR /&gt;  $ENQ TICK CVT-&amp;gt;NL ; release timer&lt;BR /&gt;  prepare for slave to be released&lt;BR /&gt;  $ENQ TOCK CVT-&amp;gt;NL ; release slave&lt;BR /&gt;  slave is now executing&lt;BR /&gt;next&lt;BR /&gt;&lt;BR /&gt;Slave&lt;BR /&gt;repeat&lt;BR /&gt;  $ENQ TICK CVT-&amp;gt;EX ; wait for timer&lt;BR /&gt;  timer complete&lt;BR /&gt;  $ENQ TOCK CVT-&amp;gt;EX ; wait for master&lt;BR /&gt;    do something&lt;BR /&gt;  $ENQ TOCK CVT-&amp;gt;NL ; signal complete&lt;BR /&gt;  $ENQ TICK CVT-&amp;gt;NL&lt;BR /&gt;next</description>
      <pubDate>Mon, 29 Oct 2007 16:22:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076810#M38701</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2007-10-29T16:22:33Z</dc:date>
    </item>
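The guaranteed-handshake property of John's TICK/TOCK protocol can be sketched in Python with two semaphores standing in for the lock conversions. This is only the shape of the protocol, not its cluster-wide semantics (real $ENQ conversions queue fairly and carry a value block; plain semaphores do not), and all names are invented.

```python
import threading
import time

go = threading.Semaphore(0)      # master releases the slave (TOCK role)
done = threading.Semaphore(0)    # slave reports completion  (TICK role)
CYCLES = 5
trace = []

def master():
    for n in range(CYCLES):
        time.sleep(0.001)        # $SETIMR + wait for timer
        trace.append(("tick", n))
        go.release()             # release slave
        done.acquire()           # wait for slave: the guaranteed handshake

def slave():
    for n in range(CYCLES):
        go.acquire()             # wait for this cycle's release
        trace.append(("work", n))
        done.release()           # signal complete

mt = threading.Thread(target=master)
st = threading.Thread(target=slave)
mt.start(); st.start()
mt.join(); st.join()
# trace alternates strictly: tick 0, work 0, tick 1, work 1, ...
```

Unlike the two-event-flag loop, the master cannot start a new cycle until the slave has acknowledged the previous one, so the "master laps the slave" failure mode is designed out.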
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076811#M38702</link>
      <description>Barry,&lt;BR /&gt;&lt;BR /&gt;I would agree with John, except that it is not clear to me that you actually need two locks to accomplish this.&lt;BR /&gt;&lt;BR /&gt;Personally, I would do this with locking and ASTs to make things maximally safe.&lt;BR /&gt;&lt;BR /&gt;Doing this with event flags is tricky, as has been commented on.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Mon, 29 Oct 2007 17:47:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076811#M38702</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2007-10-29T17:47:17Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076812#M38703</link>
      <description>To add one other oft-overlooked interprocess communications mechanism that's available on OpenVMS: RMS.  RMS can be a silly-fast communications channel for many applications, and with very minimal configuration requirements.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 18:19:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076812#M38703</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-10-29T18:19:48Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076813#M38704</link>
      <description>To me it looks like a problem of who gets the CPU. OpenVMS will put the slaves on the runnable queue when the master sets the event flag, but the slaves may not actually get the CPU for a while after that - all while the master is free to continue its endless loop of setting and clearing the event flags, whether or not the slaves have seen them. You may not be able to use realtime priorities, but if the master could be set at a lower priority than the slaves, that could ensure that the slaves get to run all the way up to the next event flag wait before the master again gets the CPU.</description>
      <pubDate>Mon, 29 Oct 2007 21:08:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076813#M38704</guid>
      <dc:creator>Claus Olesen</dc:creator>
      <dc:date>2007-10-29T21:08:02Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076814#M38705</link>
      <description>Absolutely, Claus.&lt;BR /&gt;&lt;BR /&gt;That's a much better and simpler way to explain it. Occam's razor.&lt;BR /&gt;As presented, the master never waits for the slave.&lt;BR /&gt;There is no formal synchronization at all.&lt;BR /&gt;It's just 'likely' that the slave gets scheduled when the master goes to wait for the timer, but it is not guaranteed.&lt;BR /&gt;&lt;BR /&gt;The $ SHOW PROC/CONT just changed the priorities and scheduling.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 21:34:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076814#M38705</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-10-29T21:34:56Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076815#M38706</link>
      <description>Re: Robert - "except it is not clear to me that you actually need two locks to accomplish this."&lt;BR /&gt;&lt;BR /&gt;If there are only two processes, you're absolutely correct. One lock will suffice, but it seems to me there may be several "slaves" here. For maximum control, and to guarantee each slave gets a look in, have one lock for the master and one for each slave. The slaves synch against the master's lock and their own; the master controls them all.&lt;BR /&gt;&lt;BR /&gt;Re: Hoff's comment about RMS - absolutely true. RMS files can be a simple way of using locks from DCL.</description>
      <pubDate>Mon, 29 Oct 2007 22:22:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076815#M38706</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2007-10-29T22:22:40Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076816#M38707</link>
      <description>Thanks to all for your replies -- I didn't think this would elicit such a discussion; I thought I had just misinterpreted how event flags, and especially $WAITFR, work.&lt;BR /&gt;&lt;BR /&gt;Claus solved the problem by mentioning priorities: I now run the master at base priority 3 and the slaves at the default of 4 (tried with two slaves so far). Hein suggested that the SHOW PROC/CONT changed the priorities, and this looks like it may be the case - but (as in quantum physics) I can't measure this without changing it!&lt;BR /&gt;&lt;BR /&gt;I have not tried the locks/ASTs suggested by John and Robert; I may explore that later.&lt;BR /&gt;&lt;BR /&gt;By the way, my Windows programming colleagues tell me this is really simple in their world! However, I believe their event-driven code is like the slaves registering with the master, which Hein suggested with his $HIBER/$WAKE, albeit with some more help from the OS.</description>
      <pubDate>Tue, 30 Oct 2007 03:56:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076816#M38707</guid>
      <dc:creator>Barry Alford</dc:creator>
      <dc:date>2007-10-30T03:56:51Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076817#M38708</link>
      <description>&amp;gt;&amp;gt; Claus solved the problem by mentioning priorities: I run the master at base priority 3 and the slaves at the default of 4 (tried it with two slaves so far).&lt;BR /&gt;&lt;BR /&gt;NO NO NO NO NO&lt;BR /&gt;&lt;BR /&gt;Assigning priorities does not solve problems; it merely hides problems.&lt;BR /&gt;&lt;BR /&gt;IF strict synchronization is a requirement, then the master somehow has to listen to the slave.&lt;BR /&gt;&lt;BR /&gt;IF it's OK to miss a cycle or two, then priorities can be good enough, but they are never a solution.&lt;BR /&gt;For example, at some point the master may do an IO and get a priority boost from that. Or PIXSCAN kicks in, or whatever.&lt;BR /&gt;&lt;BR /&gt;Since you have a timer going off anyway, this could be acceptable in your case, as long as slaves look for all possible work, not just one item.&lt;BR /&gt;Folks with just one event flag and no timer may hang too long waiting for the action if a wakeup is missed.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Oct 2007 06:11:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076817#M38708</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-10-30T06:11:22Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076818#M38709</link>
      <description>Hein, maybe I should have said that Claus had solved the problem of the slaves not seeing the setting of the event flag -- I don't mean to say that this is a solution for my application.&lt;BR /&gt;&lt;BR /&gt;It /could/ be a solution, as some missed frames may not be critical - the slaves are intended to run at 50Hz (20ms cycle time), as their position info is monitored by a graphics process (running on a PC). (Currently, the monolithic application does all that the slaves do in this period.) I have not yet run at this rate or given the slaves any work to do; this may then show up the flakiness of the scheduling problem again...&lt;BR /&gt;&lt;BR /&gt;If so, I would probably try the $HIBER/$WAKE solution suggested by Hein; then I have control of the "wakeup list" of process IDs, rather than depending on what I thought $WAITFR would do. (I will need a mechanism for the slaves to register their process ID with the master.)&lt;BR /&gt;&lt;BR /&gt;Finally, I may also try setting the slaves free and running them at 50Hz but asynchronously to each other. This may be no worse than tolerating them missing a cycle when running under the master.</description>
      <pubDate>Tue, 30 Oct 2007 06:54:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076818#M38709</guid>
      <dc:creator>Barry Alford</dc:creator>
      <dc:date>2007-10-30T06:54:59Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076819#M38710</link>
      <description>Why are the secondaries even having this problem?&lt;BR /&gt;&lt;BR /&gt;(There's an ancient software design rule I subscribe to: if it hurts, don't do it.)&lt;BR /&gt;&lt;BR /&gt;Consider another approach... Have a scheduled AST or $SCHDWK or other such in each of the secondaries (a repeating TQE), and free-run the secondaries, pulling in the frames from whatever the shared storage is.&lt;BR /&gt;&lt;BR /&gt;The other approach I've used in this environment is an Ethernet multicast. I used protocol type 60-06 when I first started with this scheme an eon ago (as that stays on-LAN), but I'd also look at using a UDP multicast datagram for various environments.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/DOC/82final/6529/6529pro_005.html" target="_blank"&gt;http://h71000.www7.hp.com/DOC/82final/6529/6529pro_005.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Basically, the primary multicasts the data (or just a ping) periodically, and if the secondaries pick up on the multicast, they display the data. So long as the next datagram is captured and the data in the datagrams are independent, if a UDP multicast datagram is occasionally dropped on the floor, well, oh well...&lt;BR /&gt;&lt;BR /&gt;I've successfully used this or a similar scheme for any number of monitoring tasks over the years. (It's also entirely platform-independent; it works on OpenVMS and on anything else that can receive a UDP multicast.) (If you want to chat off-line, I can describe a boo-boo or two I've made when designing these sorts of factory-floor and other such comm systems over the years.)&lt;BR /&gt;&lt;BR /&gt;As for coordination and the election of a primary process, that should use locks. Coordinating processes within a cluster through other means is perilous at best, and a whole lot more work to get right. I'd not use $HIBER and $WAKE for this task, and would definitely not use event flags here.&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Oct 2007 12:07:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076819#M38710</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-10-30T12:07:38Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076820#M38711</link>
      <description>Barry,&lt;BR /&gt;&lt;BR /&gt;  Fairly simple rule... any synchronization mechanism that depends on differential priorities is wrong! It WILL break somewhere, sometime, when you least expect it, and you won't be able to reproduce the failure.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;By the way, my Windows programming &lt;BR /&gt;&amp;gt;colleagues tell me this is really simple in &lt;BR /&gt;&amp;gt;their world! &lt;BR /&gt;&lt;BR /&gt;  Synchronization between processes on OpenVMS is also really simple. Any operating system is fundamentally just a big synchronization engine, so it can't be any other way.&lt;BR /&gt;&lt;BR /&gt;  I suppose it could be argued that on OpenVMS it's made slightly harder because you have so much choice as to which mechanism you pick. So, rather than having to bend your design to fit what the operating system offers, you choose the best design for your application and then select the most appropriate mechanism to implement it.&lt;BR /&gt;&lt;BR /&gt;  Event flags can be good for "blind" signaling: "wait here until something happens". There can be multiple processes all waiting on a common event flag, and they all get released at once, BUT you get no feedback and no way to know all processes are ready before lowering the gate. They can be a good, lightweight mechanism, but unless you're very careful you may introduce timing windows, race conditions or false triggers.&lt;BR /&gt;&lt;BR /&gt;  Other mechanisms include HIBER/WAKE, locks, mailboxes, global sections, ICC, IPL, spinlocks, mutexes, ASTs, etc. See the Programming Concepts Manual.&lt;BR /&gt;&lt;BR /&gt;  Your first step is to make sure you can clearly describe what you want to do. I'm not sure I know that yet. I THINK it's like this:&lt;BR /&gt;&lt;BR /&gt;master process&lt;BR /&gt;  timed loop&lt;BR /&gt;    send an event to each slave&lt;BR /&gt;    confirm slave has received event&lt;BR /&gt;  endloop&lt;BR /&gt;&lt;BR /&gt;slave (multiple processes?)&lt;BR /&gt;  loop&lt;BR /&gt;    wait for event&lt;BR /&gt;    confirm receipt of event&lt;BR /&gt;  endloop&lt;BR /&gt;&lt;BR /&gt;There are lots of possible ways to do this, but event flags probably aren't the best choice.</description>
      <pubDate>Tue, 30 Oct 2007 16:21:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076820#M38711</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2007-10-30T16:21:35Z</dc:date>
    </item>
    <item>
      <title>Re: $WAITFR behaviour</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076821#M38712</link>
      <description>Hoff suggested using locks. I tried it using this&lt;BR /&gt;&lt;BR /&gt;  lksb A,B,C;&lt;BR /&gt;&lt;BR /&gt;  if (!strcmp(argv[1],"master"))&lt;BR /&gt;    mode=LCK$K_EXMODE;&lt;BR /&gt;  else      //slaves&lt;BR /&gt;    mode=LCK$K_CRMODE;&lt;BR /&gt;&lt;BR /&gt;  for (x=0;;x++)&lt;BR /&gt;  {&lt;BR /&gt;    printf("%d\n",x);&lt;BR /&gt;    sys_enq(A,mode);&lt;BR /&gt;    sys_enq(C,LCK$K_NLMODE);&lt;BR /&gt;    sys_enq(B,mode);&lt;BR /&gt;    sys_enq(A,LCK$K_NLMODE);&lt;BR /&gt;    sys_enq(C,mode);&lt;BR /&gt;    sys_enq(B,LCK$K_NLMODE);&lt;BR /&gt;    sleep(atoi(argv[2])); //your work here&lt;BR /&gt;  }&lt;BR /&gt;&lt;BR /&gt;(sys_enq is pseudo for your convenience wrapper of sys$enq) and my test run with one master and 2 slaves showed lock step without missteps. It has that characteristic that you mentioned that the parties do not need to know about one another. And they can come and go.</description>
      <pubDate>Tue, 30 Oct 2007 20:59:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/waitfr-behaviour/m-p/5076821#M38712</guid>
      <dc:creator>Claus Olesen</dc:creator>
      <dc:date>2007-10-30T20:59:33Z</dc:date>
    </item>
  </channel>
</rss>

