“infinite amount” of signal samples to be computed (given the expectation
operator in its definition). In practice, this ideal objective function can be
approximated by the LS, WLS, or ISV, which differ in implementation complexity
and in convergence behavior. In general, the ISV is the easiest to implement,
but it exhibits noisy convergence, since it represents a greatly simplified
objective function. The LS is convenient in stationary environments, whereas
the WLS is useful in applications where the environment is slowly varying or
when the measurement noise is not white. The MAE is particularly useful in the
presence of outliers and leads to computationally simple algorithms.
2. Definition of the error signal. The choice of the error signal is crucial for the
algorithm definition, since it can affect several of its characteristics, including
computational complexity, speed of convergence, and robustness.
3. Definition of the minimization algorithm. This item is the main subject of Opti-
mization Theory, and it essentially determines the speed of convergence and com-
putational complexity of the adaptive process. As we will see in Chap. 3, Taylor
approximations of the cost function with respect to the update can be used. The
first-order approximation leads to the gradient search algorithm, whereas the
second-order approximation yields Newton's algorithm (if the Hessian matrix is
not positive definite, the function does not necessarily have a unique minimum).
Quasi-Newton algorithms estimate the Hessian rather than computing it exactly,
but they are susceptible to instability problems due to the recursive form used
to generate the estimate of the inverse Hessian matrix. In these methods, the
step size controls the stability, speed of convergence, and some characteristics
of the residual error of the overall iterative process.
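As a sketch of the last item, the first- and second-order updates can be compared on a simple least-squares cost built from data. The setup below (a hypothetical 2-tap system-identification problem, with illustrative names such as `grad_step` quantities `w_grad` and `w_newton`) is not from the text; it only illustrates that, for a quadratic cost, a full Newton step reaches the minimum at once, while the gradient search converges at a rate governed by the step size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: unknown 2-tap filter w_o, white input, and
# desired signal d = X @ w_o + measurement noise.
w_o = np.array([0.5, -1.0])
X = rng.standard_normal((200, 2))
d = X @ w_o + 0.01 * rng.standard_normal(200)

def gradient(w):
    # Gradient of the least-squares cost J(w) = mean((d - Xw)^2).
    return -2.0 / len(d) * X.T @ (d - X @ w)

# For a quadratic cost the Hessian is constant.
hessian = 2.0 / len(d) * X.T @ X

# First-order (gradient search): many small steps, scaled by mu.
mu = 0.1  # step size: controls stability and convergence speed
w_grad = np.zeros(2)
for _ in range(100):
    w_grad -= mu * gradient(w_grad)

# Second-order (Newton): a single full step solves the quadratic cost.
w_newton = np.zeros(2)
w_newton -= np.linalg.solve(hessian, gradient(w_newton))

print(w_grad, w_newton)  # both should approach w_o
```

Note how the gradient iteration needs a step size small enough for stability, whereas the Newton step uses curvature information (the Hessian) and needs no tuning here.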
Several properties should be considered to determine whether or not a particular
adaptive filtering algorithm is suitable for a particular application. Among them, we
can emphasize speed of convergence, steady-state error, tracking ability, robustness
against noise, computational complexity, modularity, numerical stability, and pre-
cision [1]. Describing these features for a given adaptive filter can be a very hard
task from a mathematical point of view. Therefore, certain hypotheses like linearity,
Gaussianity, and stationarity have been used over the years to make the analysis more
tractable [4]. We will show this in more detail in the forthcoming chapters.
1.2 Organization of the Work
In the next chapter we start studying the problem of optimum linear filtering with
stationary signals (particularly, in the mean-square-error sense). This leads to the
Wiener filter. Different values for the filter coefficients would lead to larger mean-
square errors, a relation that is captured by the error performance surface. We will
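As a preview, the Wiener solution for stationary signals can be sketched as w_o = R⁻¹p, where R is the input autocorrelation matrix and p the cross-correlation vector between the input regressor and the desired signal. The signals and names below (`h`, `w_wiener`, a 3-tap FIR model) are hypothetical, chosen only to illustrate the computation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stationary setup: desired signal generated by an
# unknown 3-tap FIR system plus observation noise.
h = np.array([1.0, 0.4, -0.2])
x = rng.standard_normal(5000)
d = np.convolve(x, h, mode="full")[: len(x)] + 0.05 * rng.standard_normal(len(x))

# Regressor matrix with M delayed copies of the input.
M = 3
Xr = np.column_stack([np.roll(x, k) for k in range(M)])
Xr[:M, :] = 0  # discard samples corrupted by the circular shift

# Sample estimates of R = E[x x^T] and p = E[x d].
R = Xr.T @ Xr / len(x)
p = Xr.T @ d / len(x)

# Wiener solution: the minimum of the error performance surface.
w_wiener = np.linalg.solve(R, p)
print(w_wiener)  # close to h when the noise is low
```

Because the cost is quadratic in the coefficients, the error performance surface is a paraboloid and this linear system locates its unique minimum directly.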