that window via a batch processing algorithm [e.g., 3]. For each new block, a new equalizer could be computed from scratch and used to equalize the data in that particular block. Alternatively, an adaptive algorithm can recursively compute the equalizer by tweaking the coefficient values from the previous time step. The former approach has the advantage of optimality when the channel truly is static and the window is large enough to average out the noise. However, the complexity can be very high, since optimal solutions generally require matrix inversions, computation of generalized eigenvectors [3], or singular value decompositions [4], depending on the problem, and these must be repeated every block. The latter approach has the advantage of making use of the solution from the previous time step, and the complexity is usually limited to a matrix-vector multiply, or often only vector-vector and vector-scalar multiplies; moreover, the complexity is spread evenly over time rather than incurred as a lump sum at initialization. However, convergence to a good value often requires many more data samples than an optimal batch solution does. Thus, adaptive equalizers are by no means universally superior, but they are often preferred when computational power is at a premium and mobility is high, e.g., in a mobile handset.
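To make this complexity contrast concrete, the following is a minimal real-valued sketch (not taken from the text): the batch routine re-solves a least-squares problem for every block, which requires forming and inverting a num_taps-by-num_taps matrix, while the adaptive routine merely nudges the previous coefficients using only vector-vector and vector-scalar multiplies. The received samples x and desired samples d are assumed to be equal-length NumPy arrays, and the function names and step size mu are illustrative.

import numpy as np

def batch_block_equalizer(x, d, num_taps):
    """Recompute an FIR equalizer from scratch for one block of data by
    solving the least-squares normal equations R w = p (one matrix
    inversion per block, repeated for every new block)."""
    # Rows of X are the tap-delay-line contents [x[n], x[n-1], ..., x[n-num_taps+1]].
    X = np.array([x[n - num_taps + 1:n + 1][::-1]
                  for n in range(num_taps - 1, len(x))])
    R = X.T @ X                    # num_taps x num_taps correlation estimate
    p = X.T @ d[num_taps - 1:]     # cross-correlation with the desired signal
    return np.linalg.solve(R, p)   # O(num_taps^3) solve, paid every block

def adaptive_step(w, u, d_n, mu=0.01):
    """One recursive update: tweak the previous coefficients w using the
    current tap vector u and desired sample d_n (vector operations only)."""
    e = d_n - w @ u                # error of the current equalizer output
    return w + mu * e * u          # gradient-style correction toward the optimum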
9.1.3 History of Adaptive Equalizers
Adaptive equalizers, sometimes called automatic equalizers, have been in use since the
1960s [5, 6]. Of particular note is the introduction of the least mean square (LMS) algorithm, sometimes called the Widrow-Hoff algorithm [5, 7]. LMS is still used today as
a benchmark for comparison of adaptive equalizers, due to its low complexity and its
convergence to the minimum mean squared error (MMSE) equalizer for a static channel. Research on adaptive equalization became more widespread in the 1970s, motivated
by the need to equalize the impulse responses of telephone lines [8-10]. This research
focused on the comparison of different cost functions and on hybrid equalizer structures,
such as the combination of a partial equalizer and a reduced-complexity MLSE. However,
this research typically assumed the availability of a sufficiently long training signal, which reduces the channel throughput and is not available at all in surveillance environments.
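As a point of reference, here is a minimal sketch of a training-based LMS (Widrow-Hoff) equalizer of the kind discussed above. It is real-valued, ignores equalizer delay, and the names (lms_equalizer, mu, the example channel and signal lengths) are illustrative assumptions rather than anything from the text.

import numpy as np

def lms_equalizer(x, training, num_taps, mu=0.01):
    """Adapt an FIR equalizer with the LMS recursion, using a known
    training sequence as the desired signal (O(num_taps) work per sample)."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(training)):
        u = x[n - num_taps + 1:n + 1][::-1]   # current tap-delay-line contents
        e = training[n] - w @ u               # error against the known training symbol
        w = w + mu * e * u                    # stochastic-gradient step toward the MMSE solution
    return w

# Illustrative use: BPSK training symbols through a hypothetical dispersive channel.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)
received = np.convolve(symbols, [1.0, 0.5, -0.2])[:len(symbols)]
received += 0.05 * rng.standard_normal(len(symbols))
w = lms_equalizer(received, symbols, num_taps=11)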
A “blind” (or “self-recovering”) equalizer is one that relies on known statistical properties of the transmitted signal, rather than on a training signal. A blind, adaptive equalizer was first introduced in 1975 in [11], which replaced the training signal in the LMS
algorithm with the output of a decision device at the receiver. This idea was later termed
decision direction (DD). However, it depends on the ability to make good decisions at initialization, which is not always possible. A more sophisticated blind equalizer, the
constant modulus algorithm (CMA), was introduced in the early 1980s [12-14]. CMA
assumes the transmitted data has a constant modulus, and the equalizer attempts to
restore this property. However, CMA can be extended to non-constant-modulus sources [15], in which case it may be viewed as a dispersion-minimizing (or effective-noise-power-minimizing)
algorithm. Despite the age of DD and CMA, most blind adaptive equalizers proposed
more recently are rooted in these two algorithms. Exceptions often involve finding alternate signal properties to restore.
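For illustration, here is a minimal sketch of a CMA tap update as described above; the names, step size, constellation constant R2, and center-spike initialization are illustrative assumptions, not from the text, and x is assumed to be a NumPy array of complex received samples. Replacing the CMA error with the difference between y[n] and the nearest constellation point would give the decision-directed (DD) update instead.

import numpy as np

def cma_equalizer(x, num_taps, mu=1e-3, R2=1.0):
    """Blind CMA equalization: no training signal is used; each update
    pushes the squared output modulus |y[n]|^2 toward the constant R2,
    where R2 = E[|s|^4] / E[|s|^2] for the assumed source constellation."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                    # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # tap-delay-line contents
        y[n] = np.vdot(w, u)                  # equalizer output  w^H u
        e = (np.abs(y[n]) ** 2 - R2) * y[n]   # CMA error: dispersion about R2
        w = w - mu * np.conj(e) * u           # stochastic-gradient descent on the CM cost
        # For decision direction, e would instead be y[n] minus the nearest symbol.
    return w, y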
Since the mid-1980s, adaptive equalizer research has focused less on development
of new algorithms and more on either characterizing popular algorithms or tweaking
 