Now we must turn our attention to two aspects that arise frequently in practice: the need for real-time operation and the presence of nonstationary signals. In fact, these two requirements violate the assumptions presented above. The real-time constraint calls for methods in which acquisition and optimization are carried out jointly, while the nonstationary context rules out a closed-form solution such as (3.14), since it no longer makes sense to deal with fixed values of the statistical correlations. This new scenario leads us to the frontier between optimal and adaptive filtering, or rather, between methods based on closed-form solutions and those based on iterative/recursive solutions to the linear filtering problem. A classical and didactic way to approach this transition is to consider first a simple iterative method for attaining the Wiener solution.
3.3 The Steepest-Descent Algorithm
We are interested in establishing a kind of learning process that eventually leads to the optimal solution. How can this be accomplished? The answer is directly related to optimization theory: in many practical problems, the only feasible approach is precisely to resort to iterative processes. Obtaining the Wiener solution was indeed part of an optimization task, although the existence of a closed-form solution spared us from considering iterative approaches. However, in view of the questions we have just raised, it is natural that we now turn our attention toward them.
The iterative approach to be considered now is based on a simple idea: to use the gradient vector of the cost function as a guide to the learning process. This is the core of the steepest-descent approach [139], which allows a local minimum to be found by taking successive steps in the direction opposite to that indicated by the gradient vector. Mathematically, the steepest-descent algorithm is an iterative optimization process of the form
\mathbf{w}(n+1) = \mathbf{w}(n) - \mu \nabla J(\mathbf{w}(n))    (3.47)

where J(\mathbf{w}) is the cost function to be optimized and \mu is the step size.
The application of this method to the Wiener filtering problem is basically a matter of substituting the gradient vector calculated in (3.12) into (3.47). This leads to
\mathbf{w}(n+1) = \mathbf{w}(n) - \mu \nabla J_{\mathrm{MSE}}[\mathbf{w}(n)] = \mathbf{w}(n) - 2\mu\,[\mathbf{R}\mathbf{w}(n) - \mathbf{p}]    (3.48)
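A minimal numerical sketch of iteration (3.48) is given below, assuming the autocorrelation matrix R and the cross-correlation vector p are known in advance; the particular values of R, p, the step size, and the number of iterations are made up purely for illustration.

import numpy as np

# Example second-order statistics (made-up values for illustration):
# R is the input autocorrelation matrix, p the cross-correlation vector
# between the input and the desired signal.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.7, 0.3])

mu = 0.1          # step size
w = np.zeros(2)   # initial coefficient vector w(0)

# Steepest-descent iteration (3.48): w(n+1) = w(n) - 2*mu*(R w(n) - p)
for n in range(200):
    w = w - 2 * mu * (R @ w - p)

# The iterates should approach the closed-form Wiener solution R^{-1} p
w_wiener = np.linalg.solve(R, p)
print("steepest descent:", w)
print("Wiener solution :", w_wiener)

Since the MSE cost is quadratic, this iteration converges to the Wiener solution whenever the step size satisfies 0 < \mu < 1/\lambda_{max}, where \lambda_{max} is the largest eigenvalue of R; the value used above respects this bound for the chosen R.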