setup case), and 460 ps of worst-case transmitter delay variation for the data edge
that follows the clock (hold case). The spec allows for 442.5 ps and 267.5 ps
of interconnect delay variation in the setup and hold cases, respectively. The
timings are depicted graphically in Figure 13-5b. The total window for setup and
hold time at the receiver is 295 ps. This defines a minimum UI that would give
a maximum transfer rate of approximately 3.4 Gb/s if there were no variation in
the transmitter and interconnect delays. However, the transmitter and interconnect
delay variations add a total of 1560 ps to the unit interval, degrading the maximum
transfer rate to the 533 Mb/s final value.
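The rate arithmetic above follows directly from the reciprocal of the unit interval. A minimal sketch (the spec's 533 Mb/s figure reflects additional rounding in the standard itself):

```python
# Unit-interval arithmetic from the AGP 8X timing budget above.
setup_hold_window_ps = 295    # receiver setup + hold window
delay_variation_ps = 1560     # total transmitter + interconnect variation

# With no delay variation, the UI could shrink to the window itself.
ideal_rate_gbps = 1e3 / setup_hold_window_ps            # ps -> Gb/s
# Delay variation widens the minimum UI, degrading the transfer rate.
degraded_ui_ps = setup_hold_window_ps + delay_variation_ps
degraded_rate_mbps = 1e6 / degraded_ui_ps               # ps -> Mb/s

print(f"ideal rate:    {ideal_rate_gbps:.2f} Gb/s")     # ~3.39 Gb/s
print(f"degraded rate: {degraded_rate_mbps:.0f} Mb/s")  # ~539 Mb/s, near the 533 Mb/s spec value
```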
Although the worst-case approach treats the sources of timing variation as
though they are bounded, in reality this is a faulty assumption. Some sources,
for example channel-induced jitter due to intersymbol interference, are in fact
bounded. Others, however, such as phase-locked loop (PLL) jitter induced by
power supply noise, are random in nature. These sources are not bounded, but
instead typically fit a Gaussian distribution, in which the timing uncertainty is
described by
RJ(t) = \frac{1}{\sqrt{2\pi}\,\sigma_{RJ}}\, e^{-t^2/(2\sigma_{RJ}^2)}    (13-4)
where RJ(t) is the probability density of a timing jitter of t ps due to a random
source and σ_RJ is the root-mean-square timing uncertainty (a.k.a. jitter ) (ps).
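Equation (13-4) is straightforward to evaluate numerically. The sketch below (with a hypothetical σ_RJ value chosen for illustration, not taken from any spec) computes the density and, more usefully for BER work, the two-sided Gaussian tail probability that the jitter magnitude exceeds a given timing margin, using the complementary error function:

```python
import math

def rj_pdf(t_ps, sigma_rj_ps):
    """Gaussian random-jitter density RJ(t) from Eq. (13-4)."""
    return (math.exp(-t_ps**2 / (2 * sigma_rj_ps**2))
            / (math.sqrt(2 * math.pi) * sigma_rj_ps))

def tail_probability(margin_ps, sigma_rj_ps):
    """Probability that |jitter| exceeds the margin (two-sided Gaussian tail)."""
    return math.erfc(margin_ps / (sigma_rj_ps * math.sqrt(2)))

sigma = 10.0  # ps RMS jitter, hypothetical value for illustration
# A 9-sigma margin: the tail probability is extremely small but nonzero,
# which is why a true worst case does not exist for random jitter.
print(tail_probability(90.0, sigma))
```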
With a Gaussian distribution, even very large uncertainties have a nonzero
(although extremely small) probability of occurrence, as Figure 13-6 shows. This
has the consequence of rendering the notion of worst-case timings meaningless.
Instead, we must interpret the timings in terms of the bit error rate, which is really
just a probability that the timing uncertainties exceed the width of the unit inter-
val. The significance of this discussion is that it implies that prior system designs
based on worst-case timings were not really designed to a worst case, since it
has no meaning. Why, then, did these designs work? The truth is that worst-case
timing-based systems were in reality designed to achieve immeasurably low BER.
To illustrate, we examine the mean time between errors for the AGP 8X
interface, assuming a BER of 10⁻¹⁸. The AGP data bus is 32 bits wide, running
at 533 Mb/s. At the specified bit error rate, the mean time between errors would
be approximately two and one-half years for the entire bus (assuming that errors
on different data lines are uncorrelated). The 2.5-year estimate also assumes
continuous operation and 100% bus utilization. If we assume that the system is
a personal computer, and is only in use half of the time, the mean time between
errors increases to five years. If we assume further that the average traffic on
the bus is unlikely to exceed 50%, the mean time between errors increases to 10
years, which far exceeds the expected lifetime of the computer.
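The mean-time-between-errors arithmetic can be sketched as follows. The exact year figures depend on rounding conventions, so this straightforward calculation lands on the same order as, though somewhat below, the estimates quoted above:

```python
bus_width = 32     # AGP 8X data bus width, bits
bit_rate = 533e6   # per-line transfer rate, b/s
ber = 1e-18        # assumed bit error rate

# Errors on different lines are assumed uncorrelated, so aggregate
# error rate scales with total bus throughput.
errors_per_second = bus_width * bit_rate * ber
mtbe_years = 1.0 / errors_per_second / (365 * 24 * 3600)
print(f"continuous, 100% utilization: {mtbe_years:.1f} years")
# Halving duty cycle and halving utilization each double the mean time
# between errors, quadrupling it overall.
print(f"50% usage, 50% utilization:   {4 * mtbe_years:.1f} years")
```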
13.2.2 Bit Error Rate Analysis
As signaling speeds continue to increase, maintaining sufficient margins to guar-
antee immeasurable bit error rates becomes prohibitive. As a result, high-speed