Processing Time Values

Applications requiring reliable time synchronization, such as air traffic control and stock transactions, must have confidence that the system clock is correct within some bound relative to a given timescale such as UTC. There is a considerable body of literature that studies correctness principles with respect to various failure models such as fail-stop and Byzantine traitors. While these principles and the algorithms based on them inspire much confidence in a theoretical setting, most require multiple message rounds for each measurement and would be impractical in a large computer network such as the Internet.

Inspired by this work, a suite of correctness assertions has evolved over the years to bound the errors inherent in any configuration. The maximum error is inherited and augmented by each client along the synchronization path. This is a valuable insight, since it permits strong statements about the correctness of the timekeeping system.
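To make the inheritance rule concrete, the following sketch accumulates a worst-case error bound hop by hop from the reference clock down to a client. It is not the exact NTP formula; the structure names and the half-delay-plus-dispersion bound are illustrative assumptions.

/* Illustrative sketch (not the exact NTP formulas): how a maximum error
   bound might be inherited and augmented at each hop along a synchronization
   path. Each client adds half its round-trip delay plus its own measurement
   dispersion to the bound reported by its server. */
#include <stdio.h>

struct hop {
    double delay;       /* round-trip delay to the server, seconds */
    double dispersion;  /* local measurement dispersion, seconds */
};

/* Accumulate a worst-case error bound from the reference clock (bound 0)
   down a chain of clients. */
static double max_error_bound(const struct hop *path, int hops)
{
    double bound = 0.0;     /* the reference clock is correct by definition */
    for (int i = 0; i < hops; i++)
        bound += path[i].delay / 2.0 + path[i].dispersion;
    return bound;
}

int main(void)
{
    struct hop path[] = {
        { 0.010, 0.001 },   /* stratum 1 to stratum 2 */
        { 0.050, 0.004 },   /* stratum 2 to stratum 3 */
    };
    printf("maximum error bound: %.6f s\n", max_error_bound(path, 2));
    return 0;
}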

NTP is an exceedingly large and complex real-time system with intricately engineered algorithms and carefully structured protocol operations. There is an extensive suite of NTP grooming and mitigation algorithms, which are the subject of the topics that follow. They select only the best server or combination of servers and the best samples from each server. In a very real sense, the NTP algorithms operate as a gigantic digital signal processor and utilize many principles of that engineering field, including linear and nonlinear signal processing and adaptive-parameter feedback loops.

By its very nature, clock synchronization is a continuous sampling process in which time offset samples are collected on a regular basis from each of possibly several servers. Accuracy can be improved if the samples from each server are processed by an engineered filter algorithm. Algorithms described in the literature are based on trimmed-mean and median methods. The algorithm used in NTP accumulates time offset/delay samples in a window of several samples and selects the offset sample associated with the minimum delay.
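The following is a minimal sketch of such a minimum-delay filter. The window size, structure names, and selection rule are illustrative assumptions rather than the actual NTP clock filter code.

/* Minimal sketch of a minimum-delay clock filter (a simplification of the
   idea behind the NTP clock filter algorithm). */
#include <stddef.h>

#define FILTER_SIZE 8   /* samples retained per server (illustrative) */

struct sample {
    double offset;   /* measured clock offset, seconds */
    double delay;    /* measured round-trip delay, seconds */
};

/* Return the offset of the sample with the smallest round-trip delay, on
   the theory that low-delay samples are least corrupted by queuing delays
   and path asymmetry. */
static double clock_filter(const struct sample win[FILTER_SIZE], size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n && i < FILTER_SIZE; i++)
        if (win[i].delay < win[best].delay)
            best = i;
    return win[best].offset;
}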

Computer time is so precious, and so many bad things can happen if the clock breaks or a hostile intruder climbs over the firewall, that serious attention must be given to the issues of redundancy and diversity. Obviously, single points of failure in the network or server population must be avoided. More to the point, the select algorithm that sorts the truechimers, whose clocks gloriously tell the truth, from the falsetickers, whose clocks lie viciously, must be designed with verifiable correctness assertions. The computer science literature is stocked with algorithms that do this, but only a few are viable in a practical system. The one used in NTP finds the largest clique of truechimers in the server population.
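A simplified way to find such a clique exploits the fact that one-dimensional correctness intervals which overlap pairwise must share a common point, so a sweep over interval endpoints suffices. The sketch below illustrates the idea; it is not the actual NTP intersection algorithm, and the structure and function names are assumptions.

/* Illustrative sketch of a select algorithm: find the largest group of
   servers whose correctness intervals [offset - maxerror, offset + maxerror]
   share a common point. The peers covering that point are the truechimers. */
#include <stdlib.h>

struct peer {
    double offset;    /* measured offset, seconds */
    double maxerror;  /* error bound, seconds */
};

struct edge { double x; int delta; };   /* +1 at interval start, -1 past end */

static int cmp_edge(const void *a, const void *b)
{
    const struct edge *ea = a, *eb = b;
    if (ea->x < eb->x) return -1;
    if (ea->x > eb->x) return 1;
    return eb->delta - ea->delta;       /* starts sort before ends at the same point */
}

/* Return the size of the largest set of overlapping correctness intervals. */
static int largest_clique(const struct peer *p, int n)
{
    struct edge *e = malloc(2 * n * sizeof *e);
    for (int i = 0; i < n; i++) {
        e[2 * i]     = (struct edge){ p[i].offset - p[i].maxerror, +1 };
        e[2 * i + 1] = (struct edge){ p[i].offset + p[i].maxerror, -1 };
    }
    qsort(e, 2 * n, sizeof *e, cmp_edge);

    int best = 0, cur = 0;
    for (int i = 0; i < 2 * n; i++) {
        cur += e[i].delta;              /* intervals covering this point */
        if (cur > best)
            best = cur;
    }
    free(e);
    return best;
}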

Even under peacetime conditions, the truechimers surviving the select algorithm might have somewhat different time offsets due to asymmetric delays and network jitter. Various kinds of cluster and combine algorithms have been found useful to deliver the best time from the unruly bunch. The one used in NTP sorts the offsets by a quality metric, then repeatedly discards the outlier with the worst quality until further discards will not reduce the residual error or until a minimum number of servers remain. The final clock adjustment is computed as a weighted average of the survivors.
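The sketch below illustrates such a cluster-and-combine step. The quality metric (an error bound), the inverse-error weights, and the root-mean-square spread used as the residual error are illustrative assumptions, not the exact NTP statistics.

/* Illustrative sketch of a cluster-and-combine step: repeatedly drop the
   survivor farthest from the weighted mean while that reduces the spread
   and more than a minimum number of survivors remain, then combine the
   survivors into a single clock adjustment. */
#include <math.h>

#define MIN_SURVIVORS 3

struct cand {
    double offset;    /* offset delivered by the select algorithm, seconds */
    double error;     /* quality metric: smaller is better */
    int    alive;     /* still a survivor? */
};

/* Weighted average of the surviving offsets, weighting better servers more. */
static double combine(const struct cand *c, int n)
{
    double sum = 0.0, wsum = 0.0;
    for (int i = 0; i < n; i++) {
        if (!c[i].alive)
            continue;
        double w = 1.0 / c[i].error;
        sum += w * c[i].offset;
        wsum += w;
    }
    return sum / wsum;
}

/* Root-mean-square distance of the survivors from a given mean. */
static double spread(const struct cand *c, int n, double mean)
{
    double s = 0.0;
    int m = 0;
    for (int i = 0; i < n; i++)
        if (c[i].alive) {
            s += (c[i].offset - mean) * (c[i].offset - mean);
            m++;
        }
    return sqrt(s / m);
}

/* Discard outliers, then return the combined clock adjustment. */
static double cluster_combine(struct cand *c, int n)
{
    int alive = n;
    while (alive > MIN_SURVIVORS) {
        double mean = combine(c, n);
        double cur = spread(c, n, mean);
        int worst = -1;
        double worst_dist = -1.0;
        for (int i = 0; i < n; i++) {
            double d = fabs(c[i].offset - mean);
            if (c[i].alive && d > worst_dist) {
                worst_dist = d;
                worst = i;
            }
        }
        c[worst].alive = 0;                          /* tentatively drop the worst outlier */
        if (spread(c, n, combine(c, n)) >= cur) {
            c[worst].alive = 1;                      /* no improvement: keep it and stop */
            break;
        }
        alive--;
    }
    return combine(c, n);
}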

At the heart of the NTP synchronization paradigm is the algorithm used to adjust the system clock in accordance with the final offset determined by these algorithms. This is called the discipline algorithm or simply the discipline. Such algorithms can be classed according to whether they minimize the time offset, the frequency offset, or both. For instance, the discipline used in DTSS minimizes only the time offset and so cannot remove residual errors due to systematic frequency offsets. The NTP discipline minimizes both, but it is more complicated and less forgiving of design and implementation mistakes.
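A small numerical sketch makes the difference plain. The constants below (a 50-ppm frequency error, a 64-s poll interval, and the crude frequency estimator) are illustrative assumptions, not DTSS or NTP code.

/* Illustrative comparison: a clock with a constant 50-ppm frequency error,
   corrected once per 64-second poll. Correcting only the time offset leaves
   a recurring residual; also correcting the estimated frequency drives the
   residual toward zero. */
#include <stdio.h>

int main(void)
{
    const double poll = 64.0, true_freq = 50e-6;   /* 50 ppm drift */
    double off_a = 0.0;                  /* time-only correction */
    double off_b = 0.0, freq_b = 0.0;    /* time plus frequency correction */

    for (int i = 0; i < 5; i++) {
        off_a += true_freq * poll;                   /* drift since the last poll */
        off_b += (true_freq - freq_b) * poll;
        printf("poll %d: time-only residual %.6f s, time+freq residual %.6f s\n",
               i, off_a, off_b);
        off_a = 0.0;                                 /* step the time only */
        freq_b += 0.5 * off_b / poll;                /* crude frequency estimate */
        off_b = 0.0;
    }
    return 0;
}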

The NTP discipline algorithm functions as a feedback loop, with each round of measured offsets used to adjust the system clock time and frequency. The significant design parameter is the time constant, or responsiveness to time and frequency variations, which depends on the client poll interval. For typical computer clocks, the best accuracy is achieved when the poll interval is relatively small, but this can result in unacceptable network overhead. In practice and with typical network configurations, the optimal value varies between 1 min and 20 min for Internet paths. In some cases involving toll telephone modem paths, much longer intervals of a day or more are suitable with only moderate loss of accuracy.
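The sketch below shows a PLL-style update of this kind. The gain constants and names are illustrative assumptions, chosen only to show how the poll interval sets the loop time constant; the actual NTP discipline is considerably more elaborate.

/* Minimal sketch of a PLL-style clock discipline (a simplification of the
   NTP discipline loop). Each measured offset adjusts both the clock time
   and the clock frequency; the loop gains scale with the poll interval, so
   the poll interval sets the effective time constant. */
struct discipline {
    double freq;   /* frequency correction, seconds per second */
    double poll;   /* poll interval, seconds */
};

/* Given one measured offset, return the total time adjustment to apply over
   the next poll interval and update the frequency estimate. */
static double discipline_update(struct discipline *d, double offset)
{
    double tc = 4.0 * d->poll;                  /* loop time constant tracks the poll interval */
    double phase = offset * d->poll / tc;       /* proportional term: correct part of the offset */
    d->freq += offset * d->poll / (tc * tc);    /* integral term: trim the frequency */
    return phase + d->freq * d->poll;           /* time plus accumulated frequency correction */
}

With a larger poll interval the time constant grows, so the loop responds more slowly but generates less network traffic, which is precisely the trade-off described above.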
