Theory and Algorithms (Technical History of NTP)

As all this was going on, there was a good deal of excitement in the theoretical community developing abstract models and algorithms for time synchronization. The fundamental abstraction on which correctness principles are based is the happens-before relation introduced by Lamport [46] and Lamport and Melliar-Smith [47]. These models demonstrate that 3m + 1 clocks are required to determine a reliable time value if no more than m of them are falsetickers, but that only 2m + 1 clocks are required if digital signatures are used, as is the case with Autokey. For example, tolerating a single falseticker requires four clocks without signatures but only three with them.

Drawing from this work, a cascade of four algorithms was developed for NTP: the filter, select, cluster, and combine algorithms. In a series of experiments documented in RFC-956 [29], the filter algorithm that emerged computes the round-trip delay for each of the last eight measurement rounds and selects the offset associated with the minimum round-trip delay, as in the sketch below.
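
For concreteness, here is a minimal sketch of that selection rule in Python. The class and method names are illustrative only; the code is not taken from the reference implementation, although the eight-sample window follows the description above.

    from collections import deque

    class ClockFilter:
        """Keep the last eight (offset, delay) samples from one server."""

        def __init__(self, size=8):
            self.samples = deque(maxlen=size)

        def add(self, offset, delay):
            # offset: measured clock offset (s); delay: round-trip delay (s)
            self.samples.append((offset, delay))

        def best_offset(self):
            # Select the offset measured at the minimum round-trip delay;
            # such samples suffer the least queuing delay and thus carry
            # the least offset error.
            return min(self.samples, key=lambda s: s[1])[0]

    f = ClockFilter()
    for off, dly in [(0.012, 0.050), (0.003, 0.021), (0.018, 0.090)]:
        f.add(off, dly)
    print(f.best_offset())  # prints 0.003, the offset at minimum delay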

The filter algorithm, somewhat modified to reduce jitter, was added in NTPv3 and survives today. The cluster algorithm was added in NTPv2 and has been modified in significant ways since then. The fundamentals of computer clock discipline technology were presented in a 1992 report [49], which remains valid; it set forth mathematically precise models for error analysis, transient response, and clock discipline principles.

In a series of careful measurements over a period of two years using selected servers in the United States, Australia, and Europe, an analytical model of the idiosyncratic computer clock oscillator was developed and verified. While a considerable body of work on this subject has accreted in the literature, with few exceptions the object of study has been precision oscillators of the highest quality used as time and frequency standards. Computer oscillators have no such pedigree, since there are generally no provisions to stabilize the ambient environment, in particular the crystal temperature.

The algorithm design was considerably influenced by a collaboration with Judah Levine at NIST. Levine’s own lockclock algorithm, which is used in the NTP primary servers operated by NIST, is described in his article [51].

The stability statistic developed in this work, the Allan deviation, is commonly displayed as a plot of stability versus averaging interval in log-log coordinates, as shown in Figure 12.2. Each Internet server is characterized by a straight line with slope -1, which is associated with white phase noise; each computer oscillator is characterized by a straight line with slope +0.5, which is associated with random-walk frequency noise. Knowing these two characteristics allows the optimum averaging interval to be determined for each combination of server and oscillator.
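
As a concrete illustration (this is a textbook overlapping estimator, not code from the NTP distribution), the Allan deviation can be computed from a series of clock-offset samples as follows:

    import math

    def allan_deviation(x, tau0, m):
        # x: clock-offset (phase) samples in seconds, taken every tau0 seconds
        # m: averaging factor, so the averaging interval is tau = m * tau0
        tau = m * tau0
        n = len(x) - 2 * m
        if n < 1:
            raise ValueError("not enough samples for this averaging interval")
        s = sum((x[i + 2 * m] - 2 * x[i + m] + x[i]) ** 2 for i in range(n))
        return math.sqrt(s / (2.0 * n * tau * tau))

Evaluating this function over a range of averaging factors and plotting the results in log-log coordinates reproduces the straight-line slopes described above.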

The primary contribution of this work is the Allan intercept model that characterizes typical computer oscillators. The Allan intercept is the point where the straight-line asymptotes shown in Figure 12.2 for a given source and oscillator intersect; below it the source noise dominates, and above it the oscillator noise dominates. This work resulted in a hybrid algorithm, implemented in NTPv4, that both improves performance over typical Internet paths and allows the message poll intervals to be substantially increased without degrading accuracy. A special-purpose simulator that included substantially all the NTP algorithms was used to verify the predicted behavior with both simulated and actual data over the entire envelope of frenetic Internet behaviors.
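
If each characteristic is modeled as a pure power law, sigma(tau) = a/tau for the server and sigma(tau) = b*sqrt(tau) for the oscillator, the intercept has a closed form. The coefficients below are hypothetical values chosen for illustration:

    def allan_intercept(a, b):
        # Solve a * tau**-1 == b * tau**0.5 for tau: tau = (a / b)**(2/3).
        tau = (a / b) ** (2.0 / 3.0)
        sigma = a / tau  # stability at the intercept
        return tau, sigma

    # Hypothetical coefficients for one server path and one oscillator.
    tau_star, sigma_star = allan_intercept(a=1e-3, b=1e-7)
    print(tau_star, sigma_star)  # about 464 s and 2.2e-6

Averaging intervals shorter than tau_star are dominated by network phase noise, so lengthening the poll interval costs little; beyond tau_star the oscillator wander dominates, which is why the intercept marks the optimum averaging interval.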
