Operating System - OpenVMS

clock_gettime resolution on openvms is 1 millisecond

 
Eric Klingelberger
New Member

clock_gettime resolution on openvms is 1 millisecond

I am running OpenVMS on an HP Integrity box and referencing the system clock using clock_gettime
in order to measure my application latency, but it appears that the resolution I am getting from this call is 1 millisecond. I ran a test loop and the minimum positive difference between two successive calls to clock_gettime was 1 millisecond. There were more than 2 million calls to clock_gettime per second in the test loop. Is there a system-wide clock that I can reference in this environment with a microsecond resolution or better?
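
For reference, a minimal sketch of the kind of test loop described (the original code isn't shown), assuming CLOCK_REALTIME and the CRTL's clock_gettime, measuring the smallest positive step between successive readings:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec prev, cur;
    long long min_delta_ns = -1;   /* smallest positive step seen, in ns */
    long long delta_ns;
    long i;

    clock_gettime(CLOCK_REALTIME, &prev);
    for (i = 0; i < 2000000; i++) {
        clock_gettime(CLOCK_REALTIME, &cur);
        delta_ns = (long long)(cur.tv_sec - prev.tv_sec) * 1000000000LL
                 + (cur.tv_nsec - prev.tv_nsec);
        if (delta_ns > 0 && (min_delta_ns < 0 || delta_ns < min_delta_ns))
            min_delta_ns = delta_ns;
        prev = cur;
    }
    printf("smallest positive step: %lld ns\n", min_delta_ns);
    return 0;
}

A result of about 1000000 ns would correspond to the 1 millisecond resolution described above.
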
4 REPLIES
John Gillings
Honored Contributor

Re: clock_gettime resolution on openvms is 1 millisecond

Eric,

I'm not sure what language or versions you're talking about, or where "clock_gettime" obtains its values.

I'm surprised you're even getting 1 msec. The software clock interrupt usually occurs at 10 msec intervals on VAX and Alpha (though perhaps the granularity has been reduced on Integrity?)

Remember your computer is not a chronometer. Keeping very accurate, fine granularity time is not conducive to high performance, especially on multi user systems!

Usually the best you can do for fine grained timing is to use a cycle counter. On Alpha there were per process and per system cycle counters rpcc and rscc, accessed through special macros, depending on what language you're using. Unfortunately it's non-trivial to handle general time periods, as you need to deal with rollover of the 32 bit register.

On IA64 there's a register called AR44 which is a 64 bit interval timer. You'll need to find your language specific mechanism for accessing it (but since it's 64 bit, it should be easier to manage than rpcc or rscc :-)
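
As a hedged sketch of what that might look like in HP C: the __alpha/__ia64 macros and the __RPCC and __getReg(_IA64_REG_AR_ITC) intrinsics from <builtins.h> are assumptions taken from memory of the compiler documentation, so check the exact names for your compiler version.

#include <stdio.h>
#include <builtins.h>

int main(void)
{
#if defined(__alpha)
    /* Alpha: the process cycle counter occupies the low 32 bits of the
       RPCC result, so deltas must be taken modulo 2^32 (rollover). */
    unsigned __int64 t0 = __RPCC();
    unsigned __int64 t1 = __RPCC();
    unsigned int cycles = (unsigned int)t1 - (unsigned int)t0;
    printf("cycles between reads: %u\n", cycles);
#elif defined(__ia64)
    /* IA64: AR44 (ar.itc) is a free-running 64-bit interval time counter,
       so plain subtraction is enough. */
    unsigned __int64 t0 = __getReg(_IA64_REG_AR_ITC);
    unsigned __int64 t1 = __getReg(_IA64_REG_AR_ITC);
    printf("ITC ticks between reads: %llu\n", (unsigned long long)(t1 - t0));
#endif
    return 0;
}

Converting counter ticks to seconds still requires the counter frequency, which is a separate, platform-specific lookup.
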
A crucible of informative mistakes
Steven Schweda
Honored Contributor

Re: clock_gettime resolution on openvms is 1 millisecond

> I'm not sure [...]

HELP CRTL CLOCK_GETTIME

Interestingly, clock_getres() reports 100ns resolution, and a clock_gettime() value looks as if that were true, but getting a change smaller than 1ms seems to be difficult (Alpha (XP1000), VMS V7.3-2, and IA64 (zx2000), VMS V8.3-1H1). On the Alpha, though, I can get values which change from, say, X.468064200 to X.469040700 to X.470017200, while on the IA64 the lower digits always seem to come up the same.

It's a mystery.
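
For reference, the advertised figure Steven mentions comes straight from a call like this (a minimal sketch):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    /* Reports the resolution the CRTL claims for CLOCK_REALTIME;
       per the observation above it shows 100 ns, regardless of how
       often the underlying clock actually changes. */
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("claimed resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);
    return 0;
}
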
John Gillings
Honored Contributor

Re: clock_gettime resolution on openvms is 1 millisecond

re: Steven,

>Interestingly, clock_getres() reports
>100ns resolution,

The OpenVMS software clock keeps time as a signed 64-bit integer counting 100ns intervals since 17-NOV-1858 00:00:00.00.

Positive values represent absolute times, negative values represent delta times.

The system time is stored in EXE$GQ_SYSTIME. So, maybe clock_gettime returns that value converted into the appropriate C data type?
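
As an illustration of that conjecture, here is a hedged sketch of how a 100ns VMS quadword time (the same format as EXE$GQ_SYSTIME, obtained here via SYS$GETTIM) would map onto a POSIX struct timespec. The epoch-offset constant and the cast on the SYS$GETTIM argument are my assumptions, and whether clock_gettime really does exactly this internally is speculation:

#include <stdio.h>
#include <time.h>
#include <starlet.h>                 /* sys$gettim() */

/* 17-NOV-1858 to 01-JAN-1970 is 40587 days = 3,506,716,800 s,
   i.e. 35067168000000000 units of 100 ns. */
#define VMS_TO_UNIX_OFFSET_100NS 35067168000000000LL

int main(void)
{
    unsigned __int64 vmstime;
    long long unix100ns;
    struct timespec ts;

    /* starlet.h may declare the argument as struct _generic_64 *;
       the cast below papers over that for the sake of the sketch. */
    sys$gettim((void *)&vmstime);    /* 100 ns units since 17-NOV-1858 */

    unix100ns  = (long long)vmstime - VMS_TO_UNIX_OFFSET_100NS;
    ts.tv_sec  = unix100ns / 10000000LL;
    ts.tv_nsec = (unix100ns % 10000000LL) * 100;   /* 100 ns granularity */

    printf("%ld.%09ld\n", (long)ts.tv_sec, (long)ts.tv_nsec);
    return 0;
}
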

Although the resolution is indeed 100ns, the clock is only updated every 10ms (although from Eric's observation, perhaps that's been reduced to 1ms on IA64?). The gory details are in the Internals and Data Structures Manual; there's a chapter called "Scheduling and Time Support".

>the lower digits seem to come up the
>same always.

That depends on the radix you're using to display the value, and the tick increment expressed in that radix. The tick increment can vary if you're using a time synchronization mechanism, so you may see a long run of values with the low bits invariant, then a change resulting from a clock speed adjustment.
A crucible of informative mistakes
Michael Moroney
Frequent Advisor

Re: clock_gettime resolution on openvms is 1 millisecond

On Itanium (HP rx3600) EXE$GL_TICKLENGTH = 10000 (2710 hex), meaning its clock gets incremented by that many 100 ns units per tick. That works out to the clock advancing 1 ms per tick.

On VAX EXE$GL_TICKLENGTH = 100000 (186A0 hex), meaning it increments by 100000 units of 100 ns per tick. In other words, the clock ticks once per 10 ms.

Alpha (DS10L) is odd. EXE$GL_TICKLENGTH bounces back and forth between 2625 hex and 2626 hex. (9765 and 9766 decimal). That's because its HW clock ticks 1024 times per second, and 1/1024 sec is not an even multiple of 100 ns, so apparently the tick length constantly changes.

This might be platform-dependent (although I think all VAXes have 10 ms ticks).
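
A quick check of the arithmetic, using the tick lengths quoted above (one 100 ns unit is 1/10000 of a millisecond):

#include <stdio.h>

int main(void)
{
    /* EXE$GL_TICKLENGTH values quoted in this thread. */
    const struct { const char *platform; unsigned int ticklength; } t[] = {
        { "Integrity (rx3600)", 10000  },   /* 2710 hex                       */
        { "VAX",                100000 },   /* 186A0 hex                      */
        { "Alpha (DS10L)",      9765   },   /* 2625 hex, alternates with 2626 */
    };
    int i;

    for (i = 0; i < 3; i++)
        printf("%-20s %6u x 100 ns = %.4f ms per tick\n",
               t[i].platform, t[i].ticklength, t[i].ticklength / 10000.0);
    return 0;
}

The Alpha figure averages out to 1/1024 s (0.9765625 ms) as the value alternates between 9765 and 9766, which matches the 1024 Hz hardware clock described above.
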