Operating System - OpenVMS

Time stamp accuracy in Itanium


Time stamp accuracy in Itanium

I have extracted this statement from HoffmanLabs about time stamp accuracy:

The VAX hardware interrupts occur at centisecond (10 millisecond; 10 ms) intervals.  The centisecond is the limit of the accuracy, though the precision of the quadword time storage is 100 nanoseconds (100 ns).

Alpha system hardware can interrupt at and can update the time at a rate of 1024 ticks per second, and those Alpha systems that do this can then periodically employ what is called an “accuracy bonus” to cause the average tick rate to be 1000 ticks per second; the same as VAX.  This means that both the accuracy is less than the precision of the time values, and that there's a very slight drift within the time values, until the drift is reset with the next “accuracy bonus”.


So VAX and Alpha have only centisecond (10 ms) accuracy?  And time stamps in files therefore have only centisecond accuracy?


Does Itanium have the same centisecond accuracy even though the precision is in nanoseconds (ns)? How many ticks per second does Itanium use to update the time?




Hein van den Heuvel
Honored Contributor

Re: Time stamp accuracy in Itanium


Check out the following excellent OpenVMS Technical Journal #15 article.

Come back if questions remain.






John Gillings
Honored Contributor

Re: Time stamp accuracy in Itanium



Your questions confuse "Precision" and "Accuracy" (not helped by the text you've quoted using both terms apparently interchangeably). Precision and Accuracy are very different things; please think about what you really want to ask.


"Precision" is the smallest time difference that the system can/will discriminate. For OpenVMS standard time services the theoretical precision is 100nsec (the definition of the data type), but in practice it's 10msec (the actual interval at which clock is incremented). The article Hein refers to discusses new services on Itanium which can return higher precision times, but says nothing about Accuracy.


"Accuracy" refers to how closely the time tracks some reference source (with all the theoretical issues surrounding comparison of clocks, cue Mr Einstein)


Regarding  "Timestamp accuracy"  I'm assuming you mean precision? As far as I know the file system uses the lower precision time services, so the precision of file system time stamps is, in practice, 10msec. I don't expect that to change, but, in theory, you could write code which uses the higher precision time services to write your own time stamps.


Regarding accuracy, you'll need to look further. From memory, the architected accuracy is 0.01% for the software clock and 0.005% for hardware, both of which are rather poor when compared with (say) a standard wristwatch. Again from memory, I believe the software clock can drift by as much as 8 seconds per day and still be within architected tolerance. In practice systems usually perform much better, especially if you use a time sync service, like NTP. You need to remember that your computer is NOT a chronometer (on the other hand, those tolerances were set back in the 1970s, so the cost and performance of hardware have both improved significantly).


The cost of improving accuracy is typically exponential, so manufacturers choose a compromise which minimises cost and meets the expected requirements of the majority of customers. If you have specific requirements for higher accuracy, you can purchase external time sources which improve accuracy (but not precision).

A crucible of informative mistakes
Honored Contributor

Re: Time stamp accuracy in Itanium

To John: the cited text intends to use the terms "precision" (fractional digits) and "accuracy" (correctness) quite carefully, consistent with what you write.  If you should see an error with the usage on that page, please let me know.


To Hein: thanks for the link.  I've added that as a comment to the web page <http://labs.hoffmanlabs.com/node/735> the OP was copying text from.


To the OP: In general, what is the application?  If you're writing stuff to a log file, timestamps - whether precise or not, and whether accurate or not - tend to be problematic at best.  It can be better to use a counter as the identifier, and maintain the timestamp separately within the record.  The counter gives you the order, while the timestamp gives you an index into the host's view of "when" something happened.


If you're dealing with different sorts of tracking, there might be different time bases that could be useful.


VMS timekeeping is somewhat of a mess <http://labs.hoffmanlabs.com/node/124>, and it pays to know the details of that.  While the VMS clock-drifting mechanism won't ever set the time backwards, it does alter the "ticking" to improve the accuracy.  The clock can be set backwards by the operator, or by the daylight saving time mechanisms.  When working with a cluster, the member hosts within a cluster can have slightly different time values, which means that a time value used as an index might not have the intended result.


Within a cluster, it's also feasible to have two identical timestamps.  That's a shade easier with the lower-precision timestamps, but it's still possible with the higher-precision timekeeping.


If you need accuracy, the VMS hardware and software clocks stink.  You'll need to use an external time base.


If you're looking for uniqueness rather than precision, then something like a GUID or UUID value might be a better choice as a "unique" number.  Or a counter.


And a request:  please consider reporting confusion about or errors with the text directly to the web site.  Whether reports of questions or confusion posted over here get noticed is far from certain.  (It was more a case of luck that I saw this, as I don't follow HPEB.)