<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150282#M26230</link>
    <description>Dan,&lt;BR /&gt;&lt;BR /&gt;Have you verified that the computation of wake times does not have some slip in it? &lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
    <pubDate>Tue, 13 Jan 2009 20:23:59 GMT</pubDate>
    <dc:creator>Robert Gezelter</dc:creator>
    <dc:date>2009-01-13T20:23:59Z</dc:date>
    <item>
      <title>OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150281#M26229</link>
      <description>We are using a SYS$SCHDWK call to run a process 50 times a second. It works, except that every six hours from boot time it misses a few cycles. This is very repeatable. It does not matter whether the process has been running for hours or for a few minutes: six hours after boot time, and every six hours after that (almost to the millisecond), it misses cycles. I think the system time may be getting reset, but we have shut down almost everything on the system except for VMS and the network and it still happens. NTP is not running. Has anyone seen this issue before?</description>
      <pubDate>Tue, 13 Jan 2009 19:29:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150281#M26229</guid>
      <dc:creator>Dan R Farrell</dc:creator>
      <dc:date>2009-01-13T19:29:42Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150282#M26230</link>
      <description>Dan,&lt;BR /&gt;&lt;BR /&gt;Have you verified that the computation of wake times does not have some slip in it? &lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Tue, 13 Jan 2009 20:23:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150282#M26230</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-01-13T20:23:59Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150283#M26231</link>
      <description>50 times a second is very often!&lt;BR /&gt;&lt;BR /&gt;I know that VMS on ia64 (not necessarily 8.3-1H1) stores the value of the TOY clock on disk approximately every 6 hours.  I suspect that your problem is related to this.</description>
      <pubDate>Tue, 13 Jan 2009 20:24:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150283#M26231</guid>
      <dc:creator>Richard Whalen</dc:creator>
      <dc:date>2009-01-13T20:24:31Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150284#M26232</link>
      <description>Dan,&lt;BR /&gt;&lt;BR /&gt;Anything that can block scheduling has the potential to cause a wakeup to be delayed.&lt;BR /&gt;&lt;BR /&gt;Or, as Bob suggested, depending on how you are computing the next wakeup, you may be getting rounding errors.  For example, LIB$CVTF_TO_INTERNAL_TIME may be subject to rounding errors with small intervals.  If you are using integer math, then I doubt that is the cause of the problem.&lt;BR /&gt;&lt;BR /&gt;There are several possibilities: higher-priority processes could be preventing the process from being scheduled, or some high-IPL code could be blocking process scheduling, or even the hardware clock interrupt.&lt;BR /&gt;&lt;BR /&gt;I am not sure how often the BBW (battery-backed watch) gets updated, but hopefully that couldn't cause cycles to be lost, and I wouldn't expect any disk I/O to be blocking scheduling.&lt;BR /&gt;&lt;BR /&gt;If it is that repeatable, I would fire up the PRF SDA extension to collect samples starting 10 seconds or so prior to a 6-hour epoch and see what is happening.  I'm reasonably sure PRF isn't driven off the HWCLK interrupt, so I believe it has a chance of seeing code executing at HWCLK IPL. If PRF is using EXE$GQ_SYSTIME in its time stamp calculations, it may lead to false conclusions about "when" something happened.&lt;BR /&gt;&lt;BR /&gt;Do you have other timer-based code running, or some performance data collector that could be running something at high IPL periodically?&lt;BR /&gt;&lt;BR /&gt;Jon&lt;BR /&gt;</description>
      <pubDate>Tue, 13 Jan 2009 22:07:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150284#M26232</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2009-01-13T22:07:49Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150285#M26233</link>
      <description>Dan,&lt;BR /&gt;  Just to clarify...&lt;BR /&gt;&lt;BR /&gt;I'm assuming you're calling $SCHDWK with a "reptim" value of 20msec, as opposed to calling it every cycle with a "daytim" of 20msec?&lt;BR /&gt;&lt;BR /&gt;Could you show us the actual code?&lt;BR /&gt;&lt;BR /&gt;A few things I'd worry about... &lt;BR /&gt;&lt;BR /&gt;1) First, 20msec is only 2 quanta. I'd want my value to be as close to the real value as possible. If I cared about it a lot, I'm not sure I'd trust a $BINTIM conversion to do that for me. I'd be checking the bits in the time value.&lt;BR /&gt;&lt;BR /&gt;2) The timing of the $WAKE has no influence on when the target process responds (i.e., actually wakes up). How can you distinguish between the $WAKE being late and the process "sleeping in"?&lt;BR /&gt;&lt;BR /&gt;3) $HIBER/$WAKE seems like a rather blunt instrument to use if you require high-precision ticks. Maybe you should consider other possibilities? For really accurate, high-frequency timing, you pretty much have to dedicate a CPU and busy wait.&lt;BR /&gt;&lt;BR /&gt;Things to try...&lt;BR /&gt;&lt;BR /&gt;What happens if you double the frequency, i.e. drop to a 10msec interval? &lt;BR /&gt;&lt;BR /&gt;If you haven't done so already, build an absolutely minimal test program. On waking, don't do anything other than sample the time and put the results in a ring buffer.&lt;BR /&gt;&lt;BR /&gt;Are you running with multiple CPUs? Have you tried using affinity?</description>
      <pubDate>Tue, 13 Jan 2009 22:08:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150285#M26233</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2009-01-13T22:08:43Z</dc:date>
    </item>
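John's advice to check the bits of the time value rests on the VMS time format: 64-bit quadword counts of 100 ns units ("clunks"), with a delta time expressed as the negation of the interval. A minimal sketch of that encoding (Python for illustration rather than the thread's C; the helper names are invented):

```python
# VMS times are 64-bit counts of 100 ns units ("clunks").
# A delta time is stored as the negative of the interval length.
CLUNKS_PER_MS = 10_000  # 1 ms / 100 ns

def ms_to_vms_delta(ms):
    """Encode an interval in milliseconds as a VMS delta-time quadword."""
    return -(ms * CLUNKS_PER_MS)

def vms_delta_to_ms(delta):
    """Decode a VMS delta-time quadword back to milliseconds."""
    if delta >= 0:
        raise ValueError("delta times must be negative")
    return -delta / CLUNKS_PER_MS

# The 20 ms repeat interval Dan's 50 Hz loop would pass as "reptim":
reptim = ms_to_vms_delta(20)
print(hex(reptim & 0xFFFFFFFFFFFFFFFF))  # two's-complement quadword view
```

Inspecting the quadword this way makes a slipped $BINTIM conversion obvious: a 20 ms delta must come out as exactly -200000 clunks.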
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150286#M26234</link>
      <description>I'm actually somewhat surprised this works as well as it does and that you're only losing a few cycles every six hours; this looks to be a polling-based design, though somewhat cloaked in the garb of a multiprocessing application.   And I'd expect to see a few cycles going to other tasks here and there.&lt;BR /&gt;&lt;BR /&gt;I might well look to abscond with a core here and go to full-on polling, rather than a 50 Hz (60 Hz in the US?) solution.  That, or (depending on what is going on) I'd look to start dealing with the cruft in an out-board processor here, as those are cheap.  There are also ways to release the processor through the scheduler interface, too.&lt;BR /&gt;&lt;BR /&gt;Do call HP, as they're the arbiters of this sort of thing and (if you're doing 50 process activations a second) you probably have a support contract.&lt;BR /&gt;</description>
      <pubDate>Tue, 13 Jan 2009 22:16:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150286#M26234</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-01-13T22:16:47Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150287#M26235</link>
      <description>I hope by "run a process" Dan didn't mean an image activation.  I assumed his process was scheduling a wakeup and hibernating.  For that, 50 times a second shouldn't be taxing things (on average), as long as his process (kernel thread) software priority is in the realtime range.  I don't think VMS claims to be REALTIME, at least in the general case, and if this node is part of a cluster, then all bets are off.&lt;BR /&gt;&lt;BR /&gt;Dan, if you really need something hard-scheduled 50 times a second, I would be looking at a dedicated collection box that can weather the peak demands, cluster transitions, etc.&lt;BR /&gt;&lt;BR /&gt;John, I wasn't aware that the VMS scheduler waited until quantum end to reschedule a sufficiently higher-priority process.  If it does, then either things have changed, or my memory is incorrect.&lt;BR /&gt;&lt;BR /&gt;Jon</description>
      <pubDate>Tue, 13 Jan 2009 23:16:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150287#M26235</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2009-01-13T23:16:58Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150288#M26236</link>
      <description>re: Jon, "or my memory is incorrect."&lt;BR /&gt;&lt;BR /&gt;  Sorry, maybe I wasn't clear enough. My remark "20msec is only 2 quanta" wasn't referring to the system parameter QUANTUM. I was referring to the limit on the "reptim" parameter:&lt;BR /&gt;&lt;BR /&gt;(from the docs) "The time interval specified cannot be less than 10 milliseconds; if it is, $SCHDWK automatically increases it to 10 milliseconds."&lt;BR /&gt;&lt;BR /&gt;  The issue is potentially one of granularity. When you're down at that level, even small absolute errors in calculating time intervals can be large percentage errors.&lt;BR /&gt;&lt;BR /&gt;  It's also unclear from the documentation whether 10msec is just a lower limit, or a granularity. Would a request for (say) 14msec be rounded up to 20msec or down to 10msec?&lt;BR /&gt;&lt;BR /&gt;  When you're this close to the documented limits, and you care enough about the exact behaviour to ask a question like this one, I'd strongly recommend having a look at the sources to see exactly how $SCHDWK uses its parameters and calculates the time intervals to generate the $WAKEs.&lt;BR /&gt;&lt;BR /&gt;  Always remember, a computer is NOT a chronometer. You cannot rely on one for high-precision or fine-grained time, short of spending big bucks on purpose-built, real-time systems.</description>
      <pubDate>Wed, 14 Jan 2009 00:48:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150288#M26236</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2009-01-14T00:48:23Z</dc:date>
    </item>
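John's rounding question can at least be bounded: whichever way a 14msec request is quantized to a 10msec grid, the relative error is large. A sketch under two assumed rounding policies (Python for illustration; both policies are hypothetical, since the documentation guarantees neither):

```python
# The docs say reptim cannot be below 10 ms, but leave open whether 10 ms
# is a floor or a granularity. Compare the error under both assumptions.
import math

TICK_MS = 10.0  # documented $SCHDWK minimum

def round_down(req_ms):
    """Hypothetical policy: truncate to the 10 ms grid, floor at 10 ms."""
    return max(TICK_MS, math.floor(req_ms / TICK_MS) * TICK_MS)

def round_up(req_ms):
    """Hypothetical policy: round up to the 10 ms grid."""
    return max(TICK_MS, math.ceil(req_ms / TICK_MS) * TICK_MS)

def pct_error(req_ms, actual_ms):
    """Relative error of the delivered interval versus the request."""
    return abs(actual_ms - req_ms) / req_ms * 100.0

for req in (14.0, 20.0):
    for policy in (round_down, round_up):
        actual = policy(req)
        print(f"{req} ms -> {actual} ms "
              f"({policy.__name__}: {pct_error(req, actual):.1f}% error)")
```

Either way, a 14 ms request is off by roughly 29% or 43%, while 20 ms sits exactly on the grid, which is one reason Dan's 20 ms interval mostly behaves.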
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150289#M26237</link>
      <description>Thanks for the responses. We did create a test program and are now running it at 10 ms in order to push things a bit, and it is running at priority 55. We are using $SCHDWK with a repeat time value and are not calling it every cycle. It is now the only thing running on the Itanium box except for VMS, DECnet and TCP/IP. It is not part of a cluster. I guess my question is mainly that it does seem to work fine 99.99% of the time except for those 6-hour intervals. The synchronous nature of the event seems to indicate something else happening. I would expect more randomness from the event if it were related to any OS scheduling issue or something else also running at an elevated priority. We also created another test program using $SETIMR and it does the same thing. I agree that if we really want guaranteed fixed 20 ms response we should probably use a hardware solution, but we thought this would be good enough (and it seemed to be in preliminary tests). &lt;BR /&gt;</description>
      <pubDate>Wed, 14 Jan 2009 14:05:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150289#M26237</guid>
      <dc:creator>Dan R Farrell</dc:creator>
      <dc:date>2009-01-14T14:05:13Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150290#M26238</link>
      <description>Dan,&lt;BR /&gt;&lt;BR /&gt;If I may put on my architecture hat and make a few observations.&lt;BR /&gt;&lt;BR /&gt;I would not necessarily rush out for a hardware solution, but I would consider something in the nature of an I/O driver for this type of task. OpenVMS time handling is subject to some imprecision, as John and others have noted. If something must be monitored precisely at a resolution that close to the precision of the system services, those services are not appropriate.&lt;BR /&gt;&lt;BR /&gt;I have seen this general genre of problem throughout my career, starting with second-generation PDP-11 systems. The answer is almost invariably the same: for high-precision timing, get an external oscillator running at a significantly higher frequency, and have it interrupt every time the counter reaches zero. At that point, use a device driver to perform the immediate actions and forward the summarized information to a process/task for more complete processing.&lt;BR /&gt;&lt;BR /&gt;Since the time-critical portions of this code are in the driver's interrupt handling, little is likely to interfere with it.&lt;BR /&gt;&lt;BR /&gt;For completeness, I note that just because one has not noticed an overhead operation lasting .02 second or so does not mean that it is not there. While cluster transitions and similar activities are well known, I would assume that there are other activities that can create similar situations. Jeff Schreisheim (formerly of the DECnet-11/RSX team) wrote a very nice article in Computer Design many years ago on why DECnet-RSX ended up implementing COMMEXEC, a special executive supplement to provide services needed by DECnet protocol modules. It makes very good reading even today.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Wed, 14 Jan 2009 16:15:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150290#M26238</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-01-14T16:15:58Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150291#M26239</link>
      <description>I was once more solidly in the camp with Bob G. here around writing a driver for these cases, but (with the cost of Arduino and other such solutions) I've variously found tossing hardware at the problem cheaper than tossing a driver at it.  &lt;BR /&gt;&lt;BR /&gt;In years past, PLC-like approaches were both hairy and expensive, but that's changed.&lt;BR /&gt;&lt;BR /&gt;There are also PCI-based PLCs around, though these do tend to require a driver.&lt;BR /&gt;&lt;BR /&gt;Whether Arduino or another PLC-like solution is appropriate here does depend on what your responsiveness and timing and bandwidth and connection requirements might be, of course. &lt;BR /&gt;&lt;BR /&gt;If you have tight requirements (and can't loosen those requirements through added hardware), then moving to what amounts to an application-dedicated core (such as the dedicated lock manager) could also be an option.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 14 Jan 2009 17:06:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150291#M26239</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-01-14T17:06:17Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150292#M26240</link>
      <description>Dan,&lt;BR /&gt;&lt;BR /&gt;What does your test program do, and how do you detect a "missed cycle"?&lt;BR /&gt;&lt;BR /&gt;I don't have an Itanium to test on, but on Alpha's the granularity of reptim is the HWCLK interrupt, not 10ms as stated in the SSREF documentation.&lt;BR /&gt;&lt;BR /&gt;Attached is a C program and example logs from an ES40 and ES47 both running VMS 7.3-2&lt;BR /&gt;&lt;BR /&gt;It sets the reptim to the smallest delta time possible: -1 (1 clunk or 100 nanoseconds).  The right hand column is the number of "clunks" (100ns VMS time clock units) since previous contents of EXE$GQ_SYSTIME (via $gettim).  Note these are not 100000 (10ms), instead they are a minimum of EXE$TICK_WIDTH.&lt;BR /&gt;&lt;BR /&gt;Several anomalies (these were all running at normal, interactive priority 4).&lt;BR /&gt;&lt;BR /&gt;ES40 &lt;BR /&gt;&lt;BR /&gt; 884  15-JAN-2009 00:33:57.94  0x00A85A316AD27D2F  9765&lt;BR /&gt; 885  15-JAN-2009 00:33:57.94  0x00A85A316AD2A354  9765&lt;BR /&gt; 886  15-JAN-2009 00:33:57.94  0x00A85A316AD2C979  9765&lt;BR /&gt; 887  15-JAN-2009 00:33:57.94  0x00A85A316AD2EF9E  9765&lt;BR /&gt; 888  15-JAN-2009 00:33:57.95  0x00A85A316AD33BE8 19530&lt;BR /&gt; 889  15-JAN-2009 00:33:57.95  0x00A85A316AD33BE8     0&lt;BR /&gt; 890  15-JAN-2009 00:33:57.95  0x00A85A316AD3620D  9765&lt;BR /&gt; 891  15-JAN-2009 00:33:57.95  0x00A85A316AD38832  9765&lt;BR /&gt; 892  15-JAN-2009 00:33:57.95  0x00A85A316AD3AE57  9765&lt;BR /&gt;&lt;BR /&gt;ES47&lt;BR /&gt;&lt;BR /&gt; 203  15-JAN-2009 00:44:43.70  0x00A85A32EBB98637 10257&lt;BR /&gt; 204  15-JAN-2009 00:44:43.70  0x00A85A32EBB9AE48 10257&lt;BR /&gt; 205  15-JAN-2009 00:44:43.70  0x00A85A32EBB9D659 10257&lt;BR /&gt; 206  15-JAN-2009 00:44:43.72  0x00A85A32EBBCCF9C 194883 ! 
JLP this is 19 * 10257&lt;BR /&gt; 207  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD 10257&lt;BR /&gt; 208  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 209  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 210  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 211  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 212  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 213  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 214  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 215  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 216  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 217  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 218  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 219  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 220  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 221  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 222  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 223  15-JAN-2009 00:44:43.72  0x00A85A32EBBCF7AD     0&lt;BR /&gt; 224  15-JAN-2009 00:44:43.72  0x00A85A32EBBD1FBE 10257&lt;BR /&gt; 225  15-JAN-2009 00:44:43.73  0x00A85A32EBBD47CF 10257&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;1331  15-JAN-2009 00:44:44.86  0x00A85A32EC6A6141 10257&lt;BR /&gt;1332  15-JAN-2009 00:44:44.86  0x00A85A32EC6A8952 10257&lt;BR /&gt;1333  15-JAN-2009 00:44:44.86  0x00A85A32EC6AF0C0 26478 JLP 26478 = (2 * 10257) + 5964 (accuracy bonus??)&lt;BR /&gt;1334  15-JAN-2009 00:44:44.86  0x00A85A32EC6AF0C0     0&lt;BR /&gt;1335  15-JAN-2009 00:44:44.86  0x00A85A32EC6B18D1 10257&lt;BR /&gt;1336  15-JAN-2009 00:44:44.87  0x00A85A32EC6B40E2 10257&lt;BR /&gt;&lt;BR /&gt;Jon&lt;BR /&gt;</description>
      <pubDate>Thu, 15 Jan 2009 07:34:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150292#M26240</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2009-01-15T07:34:48Z</dc:date>
    </item>
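The deltas in Jon's log are multiples of the hardware tick, so a missed cycle shows up as a multi-tick gap followed by zero-length deltas as the queued wakeups drain. That analysis over the printed clunk column can be sketched as follows (Python for illustration; `find_skips` is an invented helper, and the ~1024 Hz ES40 clock rate is an inference from 10**7/1024 = 9765.625 matching the printed 9765):

```python
# Jon's program prints successive EXE$GQ_SYSTIME samples and the delta,
# in clunks (100 ns units), from the previous sample. A "missed cycle"
# appears as a gap of several ticks, then zero-deltas as coalesced
# wakeups drain.
def find_skips(timestamps, tick, slack=1.5):
    """Flag gaps longer than `slack` ticks; returns (index, delta, ticks)."""
    skips = []
    for i in range(1, len(timestamps)):
        delta = timestamps[i] - timestamps[i - 1]
        if delta > slack * tick:
            skips.append((i, delta, round(delta / tick)))
    return skips

# Delta column from Jon's ES40 excerpt (samples 884-892), tick = 9765:
deltas = [9765, 9765, 9765, 9765, 19530, 0, 9765, 9765, 9765]
ts = [0]
for d in deltas:
    ts.append(ts[-1] + d)
print(find_skips(ts, 9765))  # flags the 19530 gap: two ticks in one
```

The same scan applied to the ES47 excerpt would flag the 194883 gap as 19 coalesced ticks, exactly as Jon's "JLP" annotation notes.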
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150293#M26241</link>
      <description>Sorry, I accidentally clicked submit when I meant to click browse for the attachment.&lt;BR /&gt;&lt;BR /&gt;Here's the C program and log files.&lt;BR /&gt;&lt;BR /&gt;Jon</description>
      <pubDate>Thu, 15 Jan 2009 07:36:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150293#M26241</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2009-01-15T07:36:55Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150294#M26242</link>
      <description>Running Jon's program on &lt;BR /&gt;&lt;BR /&gt;OpenVMS V8.3-1H1  HP BL860c  (1.67GHz/9.0MB), with 2 cores active&lt;BR /&gt;&lt;BR /&gt;I increased the samples to 20000 at normal priority and all came back as 10000. There were no anomalies.&lt;BR /&gt;&lt;BR /&gt;I then started 4 batch jobs at interactive priority running a DCL loop to saturate the CPU:&lt;BR /&gt;&lt;BR /&gt;$ loop: goto loop&lt;BR /&gt;&lt;BR /&gt;This resulted in only a few skips:&lt;BR /&gt;&lt;BR /&gt;    0  16-JAN-2009 08:12:12.17  0x00A85B3A991227BA&lt;BR /&gt;  828  16-JAN-2009 08:12:13.00  0x00A85B3A9990A68A 20000&lt;BR /&gt; 3558  16-JAN-2009 08:12:15.73  0x00A85B3A9B31FA7A 60000&lt;BR /&gt; 5035  16-JAN-2009 08:12:17.21  0x00A85B3A9C141D1A 60000&lt;BR /&gt; 6657  16-JAN-2009 08:12:18.84  0x00A85B3A9D0C5FCA 60000&lt;BR /&gt; 8804  16-JAN-2009 08:12:20.99  0x00A85B3A9E54BE4A 60000&lt;BR /&gt;10293  16-JAN-2009 08:12:22.49  0x00A85B3A9F388E9A 50000&lt;BR /&gt;11899  16-JAN-2009 08:12:24.10  0x00A85B3AA02E393A 50000&lt;BR /&gt;14048  16-JAN-2009 08:12:26.25  0x00A85B3AA176499A 20000&lt;BR /&gt;15554  16-JAN-2009 08:12:27.76  0x00A85B3AA25CB1FA 50000&lt;BR /&gt;17160  16-JAN-2009 08:12:29.37  0x00A85B3AA3525C9A 50000&lt;BR /&gt;19305  16-JAN-2009 08:12:31.52  0x00A85B3AA49A6CFA 60000&lt;BR /&gt;&lt;BR /&gt;Increasing the count to 60000, and running as a batch job on a quiet system, resulted in only one skip:&lt;BR /&gt;&lt;BR /&gt;28782  16-JAN-2009 08:20:03.56  0x00A85B3BB20ADDBA 20000&lt;BR /&gt;&lt;BR /&gt;The run time was just over one minute. Boot time was 15-DEC-2008 09:28:36.00, so I'll schedule the 60000-sample run for 09:28 and see if anything interesting happens at 09:28:36&lt;BR /&gt;(or maybe 09:28:37, given the leap second over new year? ;-)&lt;BR /&gt;&lt;BR /&gt;I'll report back in a couple of hours...&lt;BR /&gt;</description>
      <pubDate>Thu, 15 Jan 2009 21:26:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150294#M26242</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2009-01-15T21:26:59Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150295#M26243</link>
      <description>I'm not sure what this means!&lt;BR /&gt;&lt;BR /&gt;The job started on time at 09:28:00 and completed at 09:29:01.44. The samples that weren't exactly 10000 apart were:&lt;BR /&gt;&lt;BR /&gt;    0  16-JAN-2009 09:28:00.03  0x00A85B452FCE39FA&lt;BR /&gt; 991  16-JAN-2009 09:28:01.02  0x00A85B453065E61A 40000&lt;BR /&gt;51609  16-JAN-2009 09:28:51.64  0x00A85B454E91BECA 20000&lt;BR /&gt;51610  16-JAN-2009 09:28:51.64  0x00A85B454E91BECA     0&lt;BR /&gt;&lt;BR /&gt;So nothing suspicious at the 6-hour multiple from boot time. Maybe clock drift for whatever event is causing Dan's anomaly?&lt;BR /&gt;&lt;BR /&gt;I've scheduled the job to run at 6-hour intervals for the next day or so to see if any pattern emerges.&lt;BR /&gt;</description>
      <pubDate>Thu, 15 Jan 2009 22:44:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150295#M26243</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2009-01-15T22:44:15Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150296#M26244</link>
      <description>When I wrote real-time code, long ago, the unexpected thing that tripped me up was that image rundowns would take relatively long times to clean up large address spaces, and scheduling was blocked while doing so.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 16 Jan 2009 16:05:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150296#M26244</guid>
      <dc:creator>David Jones_21</dc:creator>
      <dc:date>2009-01-16T16:05:00Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150297#M26245</link>
      <description>Big global section flushes were a trigger for pauses at various sites.</description>
      <pubDate>Fri, 16 Jan 2009 17:00:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150297#M26245</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-01-16T17:00:18Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150298#M26246</link>
      <description>Dan,&lt;BR /&gt;&lt;BR /&gt;If you are still reading this thread...&lt;BR /&gt;&lt;BR /&gt;I had suggested the PRF tool, but looking at some notes I had from bootcamp, it does not store any timestamps, so it would probably not let you see cause and effect.&lt;BR /&gt;&lt;BR /&gt;Probably the best SDA extension is SPL (spinlock tracing).&lt;BR /&gt;&lt;BR /&gt;I would suggest submitting sys$examples:spl.com about 10 seconds prior to when you expect the missed cycle.&lt;BR /&gt;&lt;BR /&gt;Then look at the section of the analysis file that has the following heading:&lt;BR /&gt;&lt;BR /&gt;Long Spinlock Hold Times (&amp;gt; 1000 microseconds)&lt;BR /&gt;&lt;BR /&gt;My guess is it will give you a good clue.  For example, when I ran my program on an ES40 with 21000 samples, and looked for long ticks, here is what I found:&lt;BR /&gt;&lt;BR /&gt;(18:53:18) $ run fast_schdwk&lt;BR /&gt;   0  16-JAN-2009 18:53:18.60  0x00A85B9428D9D0A8&lt;BR /&gt;5099  16-JAN-2009 18:53:23.58  0x00A85B942BD25258 58590&lt;BR /&gt;6073  16-JAN-2009 18:53:24.54  0x00A85B942C63A3F2 22265&lt;BR /&gt;13297  16-JAN-2009 18:53:31.59  0x00A85B943098C6C3 58590&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The 22265 is due to the accuracy bonus, which shouldn't affect you as long as the HWCLK interrupt is 1000Hz.&lt;BR /&gt;&lt;BR /&gt;Here's what was in the SPL analysis file.&lt;BR /&gt;&lt;BR /&gt;Long Spinlock Wait Times (&amp;gt; 1000 microseconds)&lt;BR /&gt;&lt;BR /&gt;Timestamp              CPU Spinlock | Forklock   Calling PC | Forking PC                EPID      Wait (us)&lt;BR /&gt;---------------------- --- --------------------- -------------------------------------- --------  ---------&lt;BR /&gt;16-JAN 18:53:31.593694  03 8C5BA800 LCKMGR       801D7400 EXE$DEQ_C+000F0               202004E9       5322&lt;BR /&gt;16-JAN 18:53:39.605956  03 8C5BA800 LCKMGR       801D29D0 EXE$ENQ_C+00900               20257DBC       5311&lt;BR /&gt;16-JAN 18:53:31.593498  01 8C5BA800 LCKMGR       801D29D0 EXE$ENQ_C+00900               20257DBC       5278&lt;BR /&gt;16-JAN 18:53:23.583059  02 8C5BA800 LCKMGR       801D29D0 EXE$ENQ_C+00900               20257DBC       5208&lt;BR /&gt;&lt;BR /&gt;Other than being interesting, I am not sure that knowing what the cause is will help you.  Unless your process remains CUR, it will be subject to the whims of the scheduler, and that can be blocked (for short periods) by many things.&lt;BR /&gt;&lt;BR /&gt;What is the process doing?  I.e., does it need full process context?  If it does not, then you may be able to hook the HWCLK interrupt and store stuff in a ring buffer in non-paged pool.  Perhaps an SDA extension like PCS. The HWCLK interrupt runs at sufficiently high IPL that it won't normally get blocked, but you don't want to do any substantial processing at that IPL either.&lt;BR /&gt;&lt;BR /&gt;A dedicated Itanium core is quite expensive if what it is doing can be done by a dedicated microcontroller like Hoff mentioned.  &lt;BR /&gt;&lt;BR /&gt;Hoff, the Arduino looks interesting. Thanks for the reference.  I assume you meant this &lt;A href="http://www.arduino.cc/" target="_blank"&gt;http://www.arduino.cc/&lt;/A&gt; and &lt;A href="http://en.wikipedia.org/wiki/Arduino" target="_blank"&gt;http://en.wikipedia.org/wiki/Arduino&lt;/A&gt; &lt;BR /&gt;&lt;BR /&gt;Jon</description>
      <pubDate>Sat, 17 Jan 2009 02:39:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150298#M26246</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2009-01-17T02:39:49Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150299#M26247</link>
      <description>Jon; yes, that's the widget.  One of various.  There are all manner of similar options that can be used to off-load host boxes for various of these tasks; to move the timing-critical activities from timing-adverse platforms.  This whether the option is out-board or bus-based or USB-based or LAN-based.  Proper choice here depends on how high the bandwidth and how low the latency; on the application requirements.</description>
      <pubDate>Sun, 18 Jan 2009 02:11:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150299#M26247</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-01-18T02:11:15Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS 8.3-1H1  Itanium SYS$SCHDWK call</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150300#M26248</link>
      <description>Some more samples at 6-hour intervals:&lt;BR /&gt;&lt;BR /&gt;    0  16-JAN-2009 15:28:00.03  0x00A85B777A68CFDE&lt;BR /&gt;  988  16-JAN-2009 15:28:01.02  0x00A85B777B0054EE 60000&lt;BR /&gt;52147  16-JAN-2009 15:28:52.18  0x00A85B77997EBA6E 20000&lt;BR /&gt;&lt;BR /&gt;    0  16-JAN-2009 21:28:00.05  0x00A85BA9C506548A&lt;BR /&gt;11949  16-JAN-2009 21:28:12.00  0x00A85BA9CC25C16A 20000&lt;BR /&gt;&lt;BR /&gt;    0  17-JAN-2009 03:28:00.07  0x00A85BDC0FA51346&lt;BR /&gt;14210  17-JAN-2009 03:28:14.29  0x00A85BDC181D8076 20000&lt;BR /&gt;&lt;BR /&gt;    0  17-JAN-2009 09:28:00.05  0x00A85C0E5A3BF136&lt;BR /&gt;55670  17-JAN-2009 09:28:55.72  0x00A85C0E7B6AA9A6 20000&lt;BR /&gt;&lt;BR /&gt;    0  17-JAN-2009 15:28:00.05  0x00A85C40A4D60312&lt;BR /&gt; 4098  17-JAN-2009 15:28:04.15  0x00A85C40A7477842 20000&lt;BR /&gt;49629  17-JAN-2009 15:28:49.68  0x00A85C40C26B1A02 20000&lt;BR /&gt;&lt;BR /&gt;Looks fairly random to me. Indeed, what surprises me most about this experiment is just how FEW wakeups are missed: just one or two out of 60000 for each run.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sun, 18 Jan 2009 22:34:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-8-3-1h1-itanium-sys-schdwk-call/m-p/5150300#M26248</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2009-01-18T22:34:09Z</dc:date>
    </item>
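Dan reports misses pinned to 6-hour multiples of boot time "almost to the millisecond", while John's misses fall at scattered offsets. That distinction can be quantified directly from logged miss times, sketched here in Python (function names invented; times assumed to be seconds since boot):

```python
# Distinguish "misses pinned to 6-hour epochs" from "misses scattered
# randomly": measure each miss's distance from the nearest 6-hour mark.
PERIOD = 6 * 3600  # six hours in seconds

def offset_from_period(t, period=PERIOD):
    """Distance of time t from the nearest multiple of `period`."""
    r = t % period
    return min(r, period - r)

def clustered(times, tolerance=1.0):
    """True if every miss lies within `tolerance` seconds of a 6 h mark."""
    return all(offset_from_period(t) <= tolerance for t in times)

# Dan-style misses, "almost to the millisecond":
print(clustered([21600.002, 43200.001, 64800.003]))  # True
# John-style misses, scattered through each run:
print(clustered([12.0, 51.64, 43195.0]))             # False
```

A True result points at some periodic system activity keyed to uptime (such as the 6-hourly TOY clock write Richard mentioned), while False points at ordinary scheduling noise.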
  </channel>
</rss>

