<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How wide is SYS$SETRWM in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231287#M44304</link>
    <description>AFAIK, everything.&lt;BR /&gt;&lt;BR /&gt;This is arguably a nuclear application self-destruct lever outside of an application where you control everything -- and those basically don't exist any more.  You're almost inevitably calling into an RTL somewhere.&lt;BR /&gt;&lt;BR /&gt;If you'd like to risk weird and potentially unhandled errors within most anything you might call most anywhere in the call chain (directly or indirectly, too), have at it.&lt;BR /&gt;&lt;BR /&gt;Now if you'd like to discuss the real problem here -- that there's a nasty little trade-off where you inevitably have to decide to hang or to drop a packet when operating in traffic spikes or otherwise past the application throughput limits -- that's another matter.&lt;BR /&gt;&lt;BR /&gt;I'd tend to design such an application to have enough quotas to avoid running into the quota blade guard; the process quotas are the means by which an application error is (usually) prevented from triggering or escalating into a system-wide failure.&lt;BR /&gt;&lt;BR /&gt;Yep, probably not the answer you wanted.  :-)&lt;BR /&gt;&lt;BR /&gt;Stephen Hoffman&lt;BR /&gt;HoffmanLabs LLC</description>
    <pubDate>Thu, 10 Jul 2008 15:59:41 GMT</pubDate>
    <dc:creator>Hoff</dc:creator>
    <dc:date>2008-07-10T15:59:41Z</dc:date>
    <item>
      <title>How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231286#M44303</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;My question is about how wide the scope of SYS$SETRWM is. Does it affect SYS$ENQW?&lt;BR /&gt;Can it affect other programs started from the same GROUP/SCHEDULER, etc.?</description>
      <pubDate>Thu, 10 Jul 2008 13:17:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231286#M44303</guid>
      <dc:creator>Roger Strandberg SEB</dc:creator>
      <dc:date>2008-07-10T13:17:52Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231287#M44304</link>
      <description>AFAIK, everything.&lt;BR /&gt;&lt;BR /&gt;This is arguably a nuclear application self-destruct lever outside of an application where you control everything -- and those basically don't exist any more.  You're almost inevitably calling into an RTL somewhere.&lt;BR /&gt;&lt;BR /&gt;If you'd like to risk weird and potentially unhandled errors within most anything you might call most anywhere in the call chain (directly or indirectly, too), have at it.&lt;BR /&gt;&lt;BR /&gt;Now if you'd like to discuss the real problem here -- that there's a nasty little trade-off where you inevitably have to decide to hang or to drop a packet when operating in traffic spikes or otherwise past the application throughput limits -- that's another matter.&lt;BR /&gt;&lt;BR /&gt;I'd tend to design such an application to have enough quotas to avoid running into the quota blade guard; the process quotas are the means by which an application error is (usually) prevented from triggering or escalating into a system-wide failure.&lt;BR /&gt;&lt;BR /&gt;Yep, probably not the answer you wanted.  :-)&lt;BR /&gt;&lt;BR /&gt;Stephen Hoffman&lt;BR /&gt;HoffmanLabs LLC</description>
      <pubDate>Thu, 10 Jul 2008 15:59:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231287#M44304</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-07-10T15:59:41Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231288#M44305</link>
      <description>Roger,&lt;BR /&gt;&lt;BR /&gt;  Whatever you think SYS$SETRWM will do for you, it won't! &lt;BR /&gt;&lt;BR /&gt;  You should only turn off RWM if you know the entire instruction stream up to the point it's turned back on. When it's used correctly it would be for a short duration critical region. I'd go so far as to say there are NO valid applications where RWM is disabled permanently at the process level.&lt;BR /&gt;&lt;BR /&gt;  Yes, it affects all system services, including SYS$ENQ(W). If the $ENQ request requires allocation of a resource which cannot be satisfied immediately, RWM determines behaviour.&lt;BR /&gt;&lt;BR /&gt;  Example - an $ENQ for a new resource will need to allocate a RSB and LKB from non-paged pool. With RWM enabled, $ENQ will wait if the resource isn't available, with it disabled $ENQ will return immediately with some kind of failure status.&lt;BR /&gt;&lt;BR /&gt;  If there are no resource issues, $ENQ(W) will behave in exactly the same manner with RWM on or off. RWM disabled will NOT prevent $ENQW from waiting for a lock if the request is incompatible with an existing lock.&lt;BR /&gt;&lt;BR /&gt;  $SETRWM is intended to be used in time critical code, where you need to avoid any wait states. I wouldn't expect to see a $ENQW in such a code path!&lt;BR /&gt;&lt;BR /&gt;  Zen answer... if you have to ask questions about using $SETRWM, you shouldn't be using it! ;-)</description>
      <pubDate>Fri, 11 Jul 2008 01:46:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231288#M44305</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2008-07-11T01:46:56Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231289#M44306</link>
      <description>Hi. Thx for the answers....&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I tested SETRWM in a BASIC program.&lt;BR /&gt;First I do SETRWM(1) to disable it, and on return I check for SS$_WASSET just to make sure.&lt;BR /&gt;&lt;BR /&gt;Then I do an ENQW for a resource I've just locked via another window. &lt;BR /&gt;The ENQW still waits until I take another lock on the same resource, and then I get a normal deadlock.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Jul 2008 06:29:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231289#M44306</guid>
      <dc:creator>Roger Strandberg SEB</dc:creator>
      <dc:date>2008-07-11T06:29:27Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231290#M44307</link>
      <description>So you are looking to lock a resource using $ENQ but do not wish to wait if the resource is not available?&lt;BR /&gt;&lt;BR /&gt;Perhaps you need the LCK$M_NOQUEUE flag.&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Jul 2008 09:44:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231290#M44307</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2008-07-11T09:44:19Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231291#M44308</link>
      <description>Well, no....&lt;BR /&gt;&lt;BR /&gt;I do:&lt;BR /&gt;"Some handling with ENQW"&lt;BR /&gt;SETRWM()&lt;BR /&gt;QIO&lt;BR /&gt;SETRWM()&lt;BR /&gt;"Some handling with ENQW"&lt;BR /&gt;&lt;BR /&gt;The ENQWs are used to lock a shared area,&lt;BR /&gt;and somehow this area gets wrong data in it.&lt;BR /&gt;I suspect that the locking is not right.&lt;BR /&gt;I do handle deadlocks.&lt;BR /&gt;&lt;BR /&gt;I have several programs that handle this shared area in the same way. &lt;BR /&gt;=&amp;gt;&lt;BR /&gt;That's why I asked how wide the SETRWM is, and whether it's just local to the program....</description>
      <pubDate>Fri, 11 Jul 2008 11:46:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231291#M44308</guid>
      <dc:creator>Roger Strandberg SEB</dc:creator>
      <dc:date>2008-07-11T11:46:08Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231292#M44309</link>
      <description>It is local to the process. &lt;BR /&gt;&lt;BR /&gt;What are you trying to achieve by turning off RWM around a QIO?</description>
      <pubDate>Fri, 11 Jul 2008 15:20:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231292#M44309</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2008-07-11T15:20:52Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231293#M44310</link>
      <description>So you're not using any calls into language RTLs nor any external libraries nor any (other) system services?  Cool.  It'll do just what you want.  Have at it...&lt;BR /&gt;&lt;BR /&gt;If, on the other hand, you're using language RTLs or (other) system services or such, well, those typically are not coded to expect nor to contend with having resource wait disabled.&lt;BR /&gt;&lt;BR /&gt;In most every case I've seen over the years where this has been proposed, the application is non-trivial and the call can be potentially hazardous; it's better to code the application with an AST, a timeout, a NOW flag, or other such feature -- to code the application to explicitly react appropriately under load: to stall, to apply back-pressure, or to drop messages.&lt;BR /&gt;&lt;BR /&gt;FWIW, the potential RWM weirdnesses can be subtle and very hard to replicate, too.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Jul 2008 15:32:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231293#M44310</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-07-11T15:32:27Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231294#M44311</link>
      <description>[[[The enqw are used to lock a shared area.&lt;BR /&gt;And some how this area get wrong data in it.&lt;BR /&gt;I suspect that the lock is not right.&lt;BR /&gt;I do handle deadlocks.]]]&lt;BR /&gt;&lt;BR /&gt;Some application code is broken, yes.  &lt;BR /&gt;&lt;BR /&gt;Are any SMP systems involved here?&lt;BR /&gt;&lt;BR /&gt;Some code is not contending correctly with the shared memory around the processor caching (a particular factor when Alpha and shared memory and SMP are ill-mixed together), is going around or otherwise ignoring the locking, or other such arcana.  &lt;BR /&gt;&lt;BR /&gt;There are a wide variety of ways to go off the rails here.&lt;BR /&gt;&lt;BR /&gt;Use of RWM will almost certainly make things worse, too.&lt;BR /&gt;&lt;BR /&gt;In recent years, I've tended to avoid using or migrate away from home-grown shared memory code -- even my own code -- and move to RMS files with global buffers.  Shared memory looks good right up until you start dealing with these sorts of cases.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Jul 2008 15:45:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231294#M44311</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-07-11T15:45:33Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231295#M44312</link>
      <description>Well, it's hard to predict....&lt;BR /&gt;&lt;BR /&gt;We use it when writing to a mailbox.&lt;BR /&gt;By placing the SETRWM, the old programmer (not me) wanted to catch when the mailbox was full and then write to an overflow.&lt;BR /&gt;Like:&lt;BR /&gt;If SYS_STATUS = SS$_MBFULL then&lt;BR /&gt; !Overflow&lt;BR /&gt;else&lt;BR /&gt; ! else everything good..... but is it?&lt;BR /&gt;end if&lt;BR /&gt;&lt;BR /&gt;We use it VERY close to the mailbox call.&lt;BR /&gt;At most we have a simple print function in between.......</description>
      <pubDate>Fri, 11 Jul 2008 15:48:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231295#M44312</guid>
      <dc:creator>Roger Strandberg SEB</dc:creator>
      <dc:date>2008-07-11T15:48:48Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231296#M44313</link>
      <description>Regarding SMP...... Yes, we got the problem when we moved to SMP. Also, we run a cluster...... I'm thinking of rewriting it to use Rdb... :D&lt;BR /&gt;&lt;BR /&gt;Is it possible to invoke a "CPU CACHE FLUSH"? I'm not used to Alpha asm... only M68K and 6811...&lt;BR /&gt;Because if I know that I have the lock, getting it to flush the cache would perhaps solve the problem... We get this on rare occasions, and never in the same place.&lt;BR /&gt;If we reset everything and run "exactly" as before, we don't get the fault. We are not alone on the machine.....</description>
      <pubDate>Fri, 11 Jul 2008 15:54:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231296#M44313</guid>
      <dc:creator>Roger Strandberg SEB</dc:creator>
      <dc:date>2008-07-11T15:54:47Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231297#M44314</link>
      <description>[[[Well it's hard to predict....]]]&lt;BR /&gt;&lt;BR /&gt;The best Heisenbugs always are.&lt;BR /&gt;&lt;BR /&gt;[[[We use it when writing to a mailbox.&lt;BR /&gt;By placing the SETRWM the old programmer (not me) wanted to catch if the mailbox was full and then write it to an overflow.&lt;BR /&gt;Like:&lt;BR /&gt;If SYS_STATUS = SS$_MBFULL then&lt;BR /&gt;!Overflow&lt;BR /&gt;else&lt;BR /&gt;! else everything good..... but is it?&lt;BR /&gt;end if]]]&lt;BR /&gt;&lt;BR /&gt;If that's the actual code, it's badly broken.  Everything other than MBFULL is most definitely NOT success.&lt;BR /&gt;&lt;BR /&gt;You will want to look at the low bit of the status.  If it is set, the call worked.  If clear, the call failed.   I usually test for specific condition values of interest first (eg: MBFULL) and then fall through to a more generalized low-bit check.&lt;BR /&gt;&lt;BR /&gt;[[[We use it VERY close to the mailbox call.]]]&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Mailboxes have the ability to do an IO$M_NORSWAIT, which is the usual trigger for the MBFULL you're already checking for.  Which means the code is likely already skipping the resource wait related to the mailbox, so another resource wait here would largely be meaningless.&lt;BR /&gt;&lt;BR /&gt;And the mailbox isn't tied to the shared memory, so there is obviously rather more going on here.&lt;BR /&gt;&lt;BR /&gt;How many readers for this mailbox?  Zero or one is best; zero leads to a stall, one is the typical choice.  More than one is often a real problem as you're not sure which way the message is going, and traffic tends to get wedged or resequenced.&lt;BR /&gt;&lt;BR /&gt;Are ASTs in use here?&lt;BR /&gt;&lt;BR /&gt;Are all calls specifying an IOSB and either an explicit and non-shared event flag, or the EFN$C_ENF don't-care event flag?&lt;BR /&gt;&lt;BR /&gt;[[[Regarding SMP...... Yes we got the problem when we moved to SMP. Also we run a cluster......]]]&lt;BR /&gt;&lt;BR /&gt;Ok, some more details here, please?  Are you sharing a common or a global section across nodes?&lt;BR /&gt;&lt;BR /&gt;[[[I'm thinking of rewriting it to use Rdb... :D]]]&lt;BR /&gt;&lt;BR /&gt;In all seriousness, RMS with global buffers enabled is a surprisingly good choice.&lt;BR /&gt;&lt;BR /&gt;[[[Is it possible to invoke a "CPU CACHE FLUSH"? I'm not used to Alpha asm... only M68K and 6811...]]]&lt;BR /&gt;&lt;BR /&gt;There are gratuitous cache flushes here with the system service calls, but it's feasible that if you have somebody looking at the contents of the structure without benefit of the lock (while there's a parallel write going) you could get stale or inconsistent data.&lt;BR /&gt;&lt;BR /&gt;As for invoking memory barriers, sure.  No need for assembler.  There are interlocked calls, or you can call the barrier routines yourself directly or via a C wrapper.&lt;BR /&gt;&lt;BR /&gt;Here's an intro to the concepts:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://64.223.189.234/node/407" target="_blank"&gt;http://64.223.189.234/node/407&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://64.223.189.234/node/638" target="_blank"&gt;http://64.223.189.234/node/638&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;[[[Because if I know that I have the lock, getting it to flush the cache would perhaps solve the problem... We get this on rare occasions, and never in the same place.&lt;BR /&gt;If we reset everything and run "exactly" as before, we don't get the fault. We are not alone on the machine.....]]]&lt;BR /&gt;&lt;BR /&gt;Yep, that's typical of this class of error, and of most of the shared-memory Heisenbugs.  The way out of this usually involves desk-checking the code, too.  A state table.  That, and usually simplifying the associated code, as the usual trigger I've seen on what I've debugged is a very complex interface into the shared memory area.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Jul 2008 16:35:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231297#M44314</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-07-11T16:35:49Z</dc:date>
    </item>
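The status-checking discipline Hoff describes above (test for specific condition values of interest first, then fall through to a generalized low-bit check) can be sketched in portable C. This is illustrative only: the constant values below are placeholders, and on OpenVMS the real condition values come from ssdef.h ($SSDEF in other languages).

```c
/* Sketch of the VMS status-checking pattern described above.  The numeric
 * values are PLACEHOLDERS for illustration; on OpenVMS use the definitions
 * from ssdef.h.  Success codes have the low bit set; errors have it clear. */
#define SS__NORMAL 1      /* stand-in for SS$_NORMAL */
#define SS__MBFULL 2312   /* stand-in for SS$_MBFULL (an even, error value) */

/* Returns 2 for "mailbox full, take the overflow path", 1 for any other
 * success, and 0 for any other failure -- everything that is not MBFULL
 * and not low-bit-set is a failure, never success. */
static int classify_status(unsigned int sts)
{
    if (sts == SS__MBFULL)   /* specific condition of interest first */
        return 2;
    if (sts & 1)             /* generalized check: low bit set means success */
        return 1;
    return 0;
}
```

The point of the fall-through is that the BASIC fragment quoted above treats "not MBFULL" as success, which silently swallows every other error status.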
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231298#M44315</link>
      <description>&amp;gt;Then I do an ENQW for a resource I've just &lt;BR /&gt;&amp;gt;locked via another window. &lt;BR /&gt;&amp;gt;The ENQW still waits until I take another &lt;BR /&gt;&amp;gt;lock on the same resource. And then I get a &lt;BR /&gt;&amp;gt;normal deadlock.&lt;BR /&gt;&lt;BR /&gt;  Correct! The ENQW isn't waiting for a resource, it's waiting for the lock. Apologies if my previous explanation wasn't clear enough.&lt;BR /&gt;&lt;BR /&gt;  If all you're trying to do is detect a full mailbox, then please remove all $SETRWM calls from your code and add the modifiers&lt;BR /&gt;&lt;BR /&gt;IO$M_NOW and IO$M_NORSWAIT to your write function code. Check the I/O User's Guide for the exact behaviour of mailbox I/Os.&lt;BR /&gt;&lt;BR /&gt;  You may also want to review your allocation of buffer space when the mailbox is created. Memory is MUCH more abundant on modern systems. Allocating more may help smooth out application flow control and synchronisation.&lt;BR /&gt;&lt;BR /&gt;$SETRWM is far more likely to cause you problems than to resolve them.</description>
      <pubDate>Sun, 13 Jul 2008 21:56:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231298#M44315</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2008-07-13T21:56:47Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231299#M44316</link>
      <description>Thx for all the answers.&lt;BR /&gt;&lt;BR /&gt;1. Well, I'll check the mailbox handling and rewrite it some. I'll perhaps have some questions later.&lt;BR /&gt;&lt;BR /&gt;2.&lt;BR /&gt;On the memory barrier use...&lt;BR /&gt;I tried to download some manuals but it failed. So am I right if I think like this in every process?&lt;BR /&gt;&lt;BR /&gt;Get lock via ENQW&lt;BR /&gt;Do the processing of shared memory&lt;BR /&gt;Before release of the lock do a __MB(void)&lt;BR /&gt;Release lock&lt;BR /&gt;&lt;BR /&gt;Thx again&lt;BR /&gt;BR&lt;BR /&gt;Roger</description>
      <pubDate>Mon, 14 Jul 2008 10:15:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231299#M44316</guid>
      <dc:creator>Roger Strandberg SEB</dc:creator>
      <dc:date>2008-07-14T10:15:37Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231300#M44317</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;I was wrong in my last post.... I need to do the MB before.... =&amp;gt;&lt;BR /&gt;ENQW&lt;BR /&gt;MB&lt;BR /&gt;do my stuff&lt;BR /&gt;&lt;BR /&gt;But I read something that destroys my plan to use this from BASIC:&lt;BR /&gt;$type vms_mb.c&lt;BR /&gt;/* Memory barrier */&lt;BR /&gt;#include &amp;lt;builtins.h&amp;gt;&lt;BR /&gt;&lt;BR /&gt;long VMS_MB()&lt;BR /&gt;{&lt;BR /&gt;    __MB();&lt;BR /&gt;    return -1;&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;The release notes of 7.3 say:&lt;BR /&gt;"In addition, a memory barrier in a subroutine call between the Read FLAG and the Read/Use of the DATA will not prevent speculation. The memory barrier must be in line."&lt;BR /&gt;&lt;BR /&gt;So my "fancy" C function will not help me?&lt;BR /&gt;&lt;BR /&gt;BR&lt;BR /&gt;Roger</description>
      <pubDate>Mon, 14 Jul 2008 12:31:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231300#M44317</guid>
      <dc:creator>Roger Strandberg SEB</dc:creator>
      <dc:date>2008-07-14T12:31:27Z</dc:date>
    </item>
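The FLAG/DATA pattern quoted from the 7.3 release notes can be sketched with portable C11 fences standing in for the Alpha __MB builtin. The function names here are illustrative, not from the application under discussion; the point being demonstrated is that the barrier is emitted inline between the two accesses, whereas a call to an out-of-line wrapper function (like the VMS_MB() above) does not stop the processor from speculating the DATA read past it.

```c
#include <stdatomic.h>

/* Illustrative flag/data handoff.  The fences are emitted inline by the
 * compiler, which is what the release note requires; an __MB hidden inside
 * a separately compiled subroutine would not prevent speculation of the
 * DATA read on Alpha. */

static int data;             /* the DATA */
static atomic_int flag;      /* the FLAG, zero-initialized */

void publish(int value)
{
    data = value;
    atomic_thread_fence(memory_order_release);  /* inline barrier: DATA before FLAG */
    atomic_store_explicit(&flag, 1, memory_order_relaxed);
}

int consume(void)
{
    while (atomic_load_explicit(&flag, memory_order_relaxed) == 0)
        ;                                       /* spin until FLAG is set */
    atomic_thread_fence(memory_order_acquire);  /* inline barrier: FLAG before DATA */
    return data;
}
```

In modern C one would more often use release/acquire stores and loads on the flag directly; the explicit fences are kept here to mirror the in-line __MB placement the release note describes.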
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231301#M44318</link>
      <description>I'd review the whole of the source code before starting to make changes.  If there is a subtle synchronization error lurking, charging in and making changes is a strategy I've found largely futile.&lt;BR /&gt;&lt;BR /&gt;I tend to follow a code review with looking for and fixing coding errors first.  Some of the usual coding errors I look for are listed here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/wizard/wiz_1661.html" target="_blank"&gt;http://h71000.www7.hp.com/wizard/wiz_1661.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;This includes proper handling of return status values, as well as uniform use and verification of the IOSB, etc.  No data from an asynchronous call can be trusted until and unless the return status and the non-shared IOSB are both checked.&lt;BR /&gt;&lt;BR /&gt;Next I look at the existing synchronization mechanisms, at the details of what is being protected, and at how it is accessed.&lt;BR /&gt;&lt;BR /&gt;I then look at how the messages are sequenced (explicitly and implicitly), and then at the memory barriers and at word tearing.&lt;BR /&gt;&lt;BR /&gt;And if you're using the bitlock PALcode calls or the lock manager calls, memory barriers are not typically required.  MBs are used when you are changing directions with your memory accesses to a cell (eg: write, write, write, write, read), and you want all the writes to complete and coalesce before you read from the cell.  With shared memory, other key issues are cache visibility and access coordination across the processors, and this involves bitlocks or interlocked queues, or other constructs.&lt;BR /&gt;&lt;BR /&gt;Non-interlocked reads don't necessarily read from memory; they can and often do read from local processor cache, so a write to that same memory cell from another processor can be missed.  Accordingly, shared memory flags typically need to be interlocked.  The interlock notifies the processors to reload their caches.&lt;BR /&gt;&lt;BR /&gt;And I'm still not sure what these mailbox messages and these lock management calls and other such have to do with the shared memory.  I'm seeing lots of pieces here, and not much of a picture of how the pieces fit together in this application.  And it's a coherent view of the whole that is needed when dealing with synchronization.&lt;BR /&gt;&lt;BR /&gt;With one selection of memory management code I remember well, I ended up looking at it and its occasional and transient crashes for some months, then (getting nowhere and getting frustrated with the application stability) full-time for a week or so, and ended up re-writing the whole thing.  The resulting code ran far faster, and was stable -- thirty-some pages of memory management source code were reduced down to two pages, too.&lt;BR /&gt;&lt;BR /&gt;What I would do here is similar to what I have described above.  I'd first go for the so-called "low-hanging fruit" and desk-check the code (for common coding errors), and (failing that) I'd then look to analyze the footprint of the current error (yes, I'd go for coding bugs before looking at the details of the synchronization code), and would then look to simplify the source code into stability.&lt;BR /&gt;&lt;BR /&gt;Stephen Hoffman&lt;BR /&gt;HoffmanLabs LLC&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 14 Jul 2008 13:59:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231301#M44318</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-07-14T13:59:58Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231302#M44319</link>
      <description>Hi&lt;BR /&gt;Thx for the answer.&lt;BR /&gt;&lt;BR /&gt;More info:&lt;BR /&gt;&lt;BR /&gt;When searching for the fault I ran across the SETRWM, and it was used together with the mailbox.....&lt;BR /&gt;&lt;BR /&gt;It has almost nothing to do with the shared memory.&lt;BR /&gt;&lt;BR /&gt;The shared memory consists of an array containing structs, and a control struct.&lt;BR /&gt;The control struct holds a pointer to the free part of the array. Somehow this pointer gets overwritten, but only when we run on an SMP system.&lt;BR /&gt;&lt;BR /&gt;Everything is coded in BASIC, with some external functions such as ENQW.&lt;BR /&gt;&lt;BR /&gt;We take a lock via the lock manager on the pointer in the shared area, and when we get it we go and (in C style):&lt;BR /&gt;ptr = shared-&amp;gt;pointer&lt;BR /&gt;shared-&amp;gt;pointer = *ptr-&amp;gt;pointer&lt;BR /&gt;&lt;BR /&gt;So we now have a place in the array that is ours. But because of pre-execution, the cache might already have fetched the data from memory. So when I get the lock, it might be old data.&lt;BR /&gt;&lt;BR /&gt;I've been code reviewing at the desk.... a lot of paper. But the code is not big, nor is it complex. It does what it should and nothing more. It waits for its lock and then processes, then releases the lock.&lt;BR /&gt;&lt;BR /&gt;I'll read your text again a few times more...&lt;BR /&gt;BR&lt;BR /&gt;</description>
      <pubDate>Mon, 14 Jul 2008 14:18:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231302#M44319</guid>
      <dc:creator>Roger Strandberg SEB</dc:creator>
      <dc:date>2008-07-14T14:18:22Z</dc:date>
    </item>
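The free-list pop Roger sketches in C style ("ptr = shared->pointer; shared->pointer = *ptr->pointer") can be written out as a minimal sketch. All names here are illustrative, not taken from the actual application; the structural point is that both the read of the head pointer and its update must happen inside the lock-manager-protected critical section, or two processes can pop the same slot.

```c
#include <stddef.h>

/* Illustrative free-list over an array of slots, as described above.
 * In the real application, alloc_slot/free_slot would run only while
 * holding the $ENQ lock on the control structure. */

struct slot {
    struct slot *next;   /* link to the next free slot */
    int payload;
};

struct control {
    struct slot *free_head;   /* pointer to the first free slot */
};

/* Pop one slot off the free list; NULL when the list is empty.
 * Both statements form one indivisible step under the lock. */
static struct slot *alloc_slot(struct control *ctl)
{
    struct slot *p = ctl->free_head;
    if (p != NULL)
        ctl->free_head = p->next;
    return p;
}

/* Push a slot back onto the free list (also under the lock). */
static void free_slot(struct control *ctl, struct slot *p)
{
    p->next = ctl->free_head;
    ctl->free_head = p;
}
```

If any reader or writer touches free_head outside the critical section, the symptom is exactly the one reported: the pointer occasionally gets overwritten, and only under SMP where the accesses can truly overlap.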
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231303#M44320</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;After reading TONS of guides/manuals, it seems hopeless.&lt;BR /&gt;HP BASIC does not give the proper tools to handle SMP with respect to instruction ordering. If it was written in C it would be a different matter.&lt;BR /&gt;&lt;BR /&gt;The only right thing to do is to rewrite it to use an RMS file and place that file in memory. That would solve the cache problem.&lt;BR /&gt;&lt;BR /&gt;Unless something else exists to force a "flush" of the cache to memory? Otherwise I'll close this thread at the end of this week.&lt;BR /&gt;&lt;BR /&gt;Thx for all the answers&lt;BR /&gt;&lt;BR /&gt;BR&lt;BR /&gt;Roger</description>
      <pubDate>Mon, 14 Jul 2008 15:13:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231303#M44320</guid>
      <dc:creator>Roger Strandberg SEB</dc:creator>
      <dc:date>2008-07-14T15:13:33Z</dc:date>
    </item>
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231304#M44321</link>
      <description>&amp;gt;&amp;gt; The only right thing to do is to rewrite it to use an RMS file and place that file in memory. That would solve the cache problem.&lt;BR /&gt;&lt;BR /&gt;Admittedly I did not read this whole stream, but suddenly the word RMS showed up, and I must say the above sentence looks odd (as in someone is clueless!... That someone could be me, or...)&lt;BR /&gt;&lt;BR /&gt;If memory barriers are a concern, one (or, me) typically thinks about timing problems within dozens of instructions, often protected by a spinlock.&lt;BR /&gt;&lt;BR /&gt;RMS record operations take millions of instructions (ok... many thousands) and often use several locks.&lt;BR /&gt;&lt;BR /&gt;OpenVMS locking sits nicely in the middle and is probably the safe and easy solution to your problem. It takes hundreds (low thousands) of instructions and 'does the right things' up to 500,000 times per second on a fast box.&lt;BR /&gt;&lt;BR /&gt;An optimal solution with buffers in shared memory can probably be found using LIB$INSQHI and friends:&lt;BR /&gt;&lt;BR /&gt;"When you use these routines, cooperating processes can communicate without&lt;BR /&gt;further synchronization and without danger of being interrupted, either on a&lt;BR /&gt;single processor or in a multiprocessor environment. The queue access routines&lt;BR /&gt;are also useful in an AST environment; they allow you to add or remove an entry&lt;BR /&gt;from a queue without being interrupted by an AST."&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; But because of pre-execution the cache might already have fetched the data from memory. &lt;BR /&gt;&lt;BR /&gt;NO.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps some,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 14 Jul 2008 16:27:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231304#M44321</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-07-14T16:27:57Z</dc:date>
    </item>
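The idea behind LIB$INSQHI/LIB$REMQHI that Hein quotes (insert and remove queue entries atomically, so cooperating processes and ASTs need no further synchronization) can be approximated with C11 compare-and-swap. This is a simplified LIFO stand-in for illustration only: the real routines operate on self-relative VMS queues and handle cases, such as the ABA problem on removal, that this sketch glosses over.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Simplified, portable stand-in for the interlocked-queue concept.
 * NOT the actual LIB$INSQHI/LIB$REMQHI format or semantics. */

struct entry {
    struct entry *next;
    int value;
};

static _Atomic(struct entry *) head;   /* queue header, starts empty */

/* Insert at head: retry the compare-and-swap until no one else has
 * changed the header between our read and our write. */
void insert_head(struct entry *e)
{
    struct entry *old = atomic_load(&head);
    do {
        e->next = old;
    } while (!atomic_compare_exchange_weak(&head, &old, e));
}

/* Remove from head; NULL when empty.  A production version must also
 * guard against the ABA problem, which LIB$REMQHI's interlocked
 * hardware sequence handles for you. */
struct entry *remove_head(void)
{
    struct entry *old = atomic_load(&head);
    while (old != NULL &&
           !atomic_compare_exchange_weak(&head, &old, old->next))
        ;   /* old is reloaded by the failed CAS; retry */
    return old;
}
```

This is why the quoted documentation can promise "without further synchronization": each insert or remove is a single atomic step, so no lock, MB placement, or cache-flush reasoning is left to the caller.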
    <item>
      <title>Re: How wide is SYS$SETRWM</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231305#M44322</link>
      <description>I tend to go over to the use of RMS (with global buffers) fairly quickly when presented with these sorts of issues, as it deals with this stuff for me.  If I really need speed -- more than I can get by tuning the RMS -- then I start looking at shared memory and at solutions other than RMS.&lt;BR /&gt;&lt;BR /&gt;Sure, RMS is fairly heavyweight.  Conversely, code that deals with the same sorts of cases will also be heavyweight, and you'll end up supporting it.  (TANSTAAFL here.  Sure, direct shared memory and bitlocks are usually fairly lightweight.  But it never seems to end there...)&lt;BR /&gt;&lt;BR /&gt;The combination of using existing code (eg: RMS) and throwing hardware at the problem can be a cheap solution.&lt;BR /&gt;&lt;BR /&gt;Now as for reviewing and desk-checking the existing code (the "low-hanging fruit" before more work on the code, or before considering a rewrite), that's something best discussed off-line.  I'm getting the distinct impression I do not know what's (really) going on with the code in question, as the more I read the responses here, the more confused I get.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 14 Jul 2008 16:39:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/how-wide-is-sys-setrwm/m-p/4231305#M44322</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-07-14T16:39:31Z</dc:date>
    </item>
  </channel>
</rss>

