has also been a prolific field of research in recent decades (see [201] for a
more detailed account of the subject).
A complete historical analysis of the development of neural approaches
is beyond the scope of this work, but it is worth presenting a brief
overview of the subject; a more detailed account can be found, for
instance, in [136]. The seminal paper by McCulloch and Pitts [206]
established a fundamental relationship between logical calculus and neural
network computation and gave rise to a mathematical model of the neuron.
In 1949, another decisive step was taken when Donald Hebb's book [142], The
Organization of Behavior, related the learning process to synaptic
modifications controlled by the firing patterns of interrelated neurons.
In 1958, Frank Rosenblatt proposed a neural network for pattern
recognition called the perceptron. The perceptron was, in simple terms,
an adaptive linear classifier whose learning algorithm was founded on a
beautiful mathematical result, the perceptron convergence theorem, which
ensured proper operation for linearly separable patterns. Interestingly, in
1960, Widrow and Hoff used the LMS algorithm to adapt the parameters of a
perceptron-like structure, giving rise to the Adaline (adaptive linear element).
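To make the learning rule concrete, a minimal sketch of Rosenblatt-type perceptron training is given below; the bipolar labels, learning rate, and toy data set are illustrative assumptions, not taken from the text.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, max_epochs=100):
    """Rosenblatt-style perceptron rule for bipolar labels in {-1, +1}.

    A constant bias input is appended internally; the weights are
    updated only on misclassification. For linearly separable
    patterns, the perceptron convergence theorem guarantees that
    the loop terminates with all patterns correctly classified.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(Xb, y):
            pred = 1 if w @ x >= 0 else -1
            if pred != target:
                w += lr * target * x  # correct toward the target class
                errors += 1
        if errors == 0:  # every pattern classified correctly
            break
    return w

# Linearly separable toy problem (AND-like labeling); data are illustrative.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w = train_perceptron(X, y)
scores = np.hstack([X, np.ones((len(X), 1))]) @ w
preds = np.where(scores >= 0, 1, -1)
```

Note that the update fires only when a pattern is misclassified, which is precisely the situation the convergence theorem analyzes; the Widrow-Hoff LMS rule of the Adaline, in contrast, adapts based on the error at the linear output, before the decision.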
After a period in which interest in neural network research declined,
in spite of important contributions related, for instance, to
self-organization [136], the field experienced a revival with
proposals like the Hopfield network and the backpropagation algorithm
(BPA) [256, 257, 302], a fundamental result to the effective application of
MLPs. Later, in 1988, Broomhead and Lowe [48] introduced a new
multilayer structure, the RBF network, which is also considered a
fundamental neural approach to function approximation.
The revival of the field of neural networks in the 1980s led to significant
interest in the use of these structures in the context of signal processing.
A consequence of this tendency was the popularization of the view of the
equalization problem as a classification task [68]. Indeed, a large
number of works have applied neural networks in the context of
equalization. Among them, it is relevant to mention the seminal papers by
Gibson and Cowan (1990) [122], Theodoridis et al. (1992) [282], and Chen
et al. [68, 69].
7.1 Decision-Feedback Equalizers
In digital transmission, a classical nonlinear solution to the equalization
problem is the DFE, first proposed by Austin in 1967 [21]. The structure is
composed of two filters, a feedforward filter (FFF) and a feedback filter (FBF),
which are combined in a manner that also includes the action of a decision
device. Figure 7.1 presents a scheme of the structure.
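As a rough illustration of this structure, the following sketch combines an FFF acting on the received samples with an FBF acting on past decisions; the BPSK alphabet, the sign slicer as the decision device, and the already-adapted filter coefficients are assumptions of the sketch, not specifications from the text.

```python
import numpy as np

def dfe_equalize(r, fff, fbf):
    """Decision-feedback equalization of received samples r.

    The feedforward filter (fff) processes the incoming samples, while
    the feedback filter (fbf) processes past decisions; the FBF output
    is subtracted before the decision device (here a simple sign
    slicer for BPSK symbols). Coefficients are assumed already adapted.
    """
    r_buf = np.zeros(len(fff))  # feedforward delay line
    d_buf = np.zeros(len(fbf))  # delay line of past decisions
    decisions = []
    for sample in r:
        r_buf = np.roll(r_buf, 1)
        r_buf[0] = sample
        y = fff @ r_buf - fbf @ d_buf  # combined equalizer output
        d = 1.0 if y >= 0 else -1.0    # decision device (slicer)
        d_buf = np.roll(d_buf, 1)
        d_buf[0] = d
        decisions.append(d)
    return np.array(decisions)

# Noiseless toy channel h(z) = 1 + 0.5 z^{-1}: an FBF coefficient of 0.5
# cancels the postcursor ISI exactly, as long as past decisions are correct.
s = np.array([1.0, -1.0, -1.0, 1.0, 1.0, -1.0])  # transmitted BPSK symbols
r = np.convolve(s, [1.0, 0.5])[:len(s)]          # channel output
s_hat = dfe_equalize(r, fff=np.array([1.0]), fbf=np.array([0.5]))
```

The subtraction of the FBF output reflects the key design idea of the DFE: once a symbol has been decided, its interference on subsequent samples can be reconstructed and removed, provided the decisions fed back are correct.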
 
 