study this surface as it will be very useful for the subsequent chapters. We also include an example where the Wiener filter is applied to the linear prediction problem.
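The Wiener solution referred to here is obtained by solving the normal equations Rw = p. As a minimal sketch of how this applies to one-step linear prediction, the following assumes sample estimates of the autocorrelation sequence stand in for the true second-order statistics; the signal model and filter order are illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative sketch: one-step linear prediction via the Wiener solution
# w = R^{-1} p, where R is the (Toeplitz) input autocorrelation matrix and
# p the correlation between the current sample and the M past samples.
rng = np.random.default_rng(0)
N, M = 10000, 4                            # samples, predictor order (assumed)
x = rng.standard_normal(N)
x = np.convolve(x, [1.0, 0.8, 0.3])[:N]    # a correlated test process

# Sample autocorrelations r[0..M]
r = np.array([x[:N - k] @ x[k:] for k in range(M + 1)]) / N
R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
p = r[1:M + 1]                             # predict x[n] from x[n-1..n-M]

w_wiener = np.linalg.solve(R, p)           # optimal predictor coefficients
```

The predictor should reduce the error power well below the raw signal power, since the process is strongly correlated.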
In Chap. 3 we will introduce iterative search methods for minimizing cost func-
tions. This will be particularly applied to the error performance surface. We will
focus on the methods of Steepest Descent and Newton-Raphson, which belong to
the family of deterministic gradient algorithms. These methods use second-order
statistics to find the optimal filter (i.e., the Wiener filter), but unlike the direct approach
introduced in Chap. 2, they find this solution iteratively. The iterative mechanism
will lead to the question of stability and convergence behavior, which we will study
theoretically and with simulations. Understanding their functioning and convergence
properties is very important as they will be the basis for the development of stochastic
gradient adaptive filters in Chap. 4.
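The steepest descent iteration described above can be sketched as follows. This is a hedged illustration: R and p are assumed known exactly (deterministic gradient), their values are made up for the example, and the factor of 2 in the MSE gradient is absorbed into the step size.

```python
import numpy as np

# Deterministic steepest descent on the quadratic MSE surface, whose
# gradient direction is (R w - p). With exact statistics there is no
# gradient noise; the iterates converge to the Wiener solution.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])               # assumed input autocorrelation matrix
p = np.array([0.7, 0.2])                 # assumed cross-correlation vector

w_opt = np.linalg.solve(R, p)            # Wiener filter, computed directly

# Convergence requires the step size to be below 1/lambda_max of R
mu = 0.9 / np.linalg.eigvalsh(R).max()
w = np.zeros(2)
for _ in range(200):
    w = w - mu * (R @ w - p)             # gradient step
```

After enough iterations the iterate matches the directly computed Wiener solution, which is exactly the stability/convergence behavior the chapter analyzes.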
Chapter 4 presents the Stochastic Gradient algorithms. Among them, we find the
Least Mean Square (LMS) algorithm, which is without doubt the most widely used
adaptive filter. Other algorithms studied in this chapter are the Normalized
Least Mean Square (NLMS), the Sign Data algorithm (SDA), the Sign Error
algorithm (SEA), and the Affine Projection algorithm (APA). We show several inter-
pretations of each algorithm that provide more insight into their functioning and how
they relate to each other. Since these algorithms are implemented using stochastic
signals, the update directions become subject to random fluctuations called gradient
noise. Therefore, a convergence analysis to study their performance (in statistical
terms) will be performed. This will allow us to generate theoretical predictions in
terms of the algorithms stability and steady state error. Simulation results will be pro-
vided to test this predictions and improve the understanding of the pros and cons of
each algorithm. In addition, we discuss the applications of adaptive noise cancelation
and adaptive equalization.
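As a minimal sketch of the stochastic gradient idea, the LMS recursion replaces the exact gradient with an instantaneous estimate built from the current data pair, which is what introduces the gradient noise mentioned above. The unknown system, noise level, and step size below are illustrative assumptions.

```python
import numpy as np

# LMS sketch for system identification: w <- w + mu * e[n] * x_vec,
# where e[n] is the a priori error. The gradient estimate uses only the
# current regressor, so the update direction fluctuates randomly.
rng = np.random.default_rng(1)
h = np.array([0.9, -0.4, 0.2])           # "unknown" FIR system (assumed)
N, M = 5000, 3
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)  # desired signal

mu = 0.01                                 # step size: speed vs. gradient noise
w = np.zeros(M)
for n in range(M, N):
    x_vec = x[n:n - M:-1]                 # regressor [x[n], ..., x[n-M+1]]
    e = d[n] - w @ x_vec                  # a priori error
    w = w + mu * e * x_vec                # stochastic gradient step
```

The coefficients approach h but keep fluctuating around it in steady state; that residual fluctuation is the steady-state (excess) error the convergence analysis quantifies.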
In Chap. 5 we will study the method of Least Squares (LS). In this case, a lin-
ear regression model is used for the data and the estimation of the system using
input-output measured pairs (and no statistical information) is performed. As the LS
problem can be thought of as one of orthogonal projections in Hilbert spaces, inter-
esting properties can be derived. Then, we will also present the Recursive Least
Squares (RLS) algorithm, which is a recursive and more computationally efficient
implementation of the LS method. We will discuss some convergence properties
and computational issues of the RLS. A comparison of the RLS with the previously
studied adaptive filters will be performed. The application of adaptive beamforming
is also developed using the RLS algorithm.
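A minimal sketch of the standard RLS recursion (the matrix-inversion-lemma form with an exponential forgetting factor) is given below. The signals, forgetting factor, and initialization are illustrative assumptions, not values from the text.

```python
import numpy as np

# RLS sketch: recursively updates both the weights and P, an estimate of
# the inverse (weighted) input correlation matrix, from input/output data
# alone -- no statistical information is assumed.
rng = np.random.default_rng(2)
h = np.array([0.5, 0.3, -0.2])            # "unknown" system (assumed)
N, M = 2000, 3
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N] + 0.001 * rng.standard_normal(N)

lam = 0.999                                # forgetting factor
w = np.zeros(M)
P = 1e3 * np.eye(M)                        # P(0) = delta^{-1} I, large delta^{-1}
for n in range(M, N):
    u = x[n:n - M:-1]                      # regressor
    k = P @ u / (lam + u @ P @ u)          # gain vector
    e = d[n] - w @ u                       # a priori error
    w = w + k * e
    P = (P - np.outer(k, u @ P)) / lam     # inverse-correlation update
```

Compared with the LMS sketch, the P-weighted gain whitens the update, which is why RLS typically converges much faster at a higher cost per iteration (O(M^2) here).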
The final chapter of the book deals with several topics not covered in the previous
chapters. These topics are more advanced or are the object of active research in the
area of adaptive filtering. We include a succinct discussion of each topic and provide
several relevant references for the interested reader.