$$
|1 - \mu \lambda_R| < 1 \qquad (3.55)
$$
for all eigenvalues of R . The most stringent case will be reached when the
largest eigenvalue is considered, i.e.,
$$
|1 - \mu \lambda_{\max}| < 1 \qquad (3.56)
$$
which means that
$$
-1 < 1 - \mu \lambda_{\max} < 1 \qquad (3.57)
$$
Since both $\mu$ and $\lambda_{\max}$ are nonnegative, the relevant inequality is
$$
-1 < 1 - \mu \lambda_{\max} \qquad (3.58)
$$
which yields
$$
\mu < \frac{2}{\lambda_{\max}} \qquad (3.59)
$$
This condition relates the convergence of the algorithm to a statistical
property of the input signal. This is a particular characteristic of gradient-
based iterative techniques. Let us now consider an example to complete this
topic.
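The bound in (3.59) is straightforward to evaluate numerically. The sketch below uses a stand-in correlation matrix (not the one from (3.21)), chosen so that its largest eigenvalue is 1.56, the value that appears in Example 3.3:

```python
import numpy as np

# Stand-in correlation matrix (NOT the matrix from (3.21)); it is chosen
# so that its largest eigenvalue equals 1.56, the value used in Example 3.3.
R = np.array([[1.28, 0.28],
              [0.28, 1.28]])

# Eigenvalues of the symmetric matrix R; the largest one sets the bound.
lam_max = np.linalg.eigvalsh(R).max()

# Step-size upper bound from (3.59): mu < 2 / lambda_max.
mu_bound = 2.0 / lam_max
print(f"lambda_max = {lam_max:.2f}, step-size bound = {mu_bound:.3f}")
```

For this matrix the eigenvalues are $1.28 \pm 0.28$, so $\lambda_{\max} = 1.56$ and the bound evaluates to $2/1.56 \approx 1.282$.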
Example 3.3 (Channel Equalization Revisited)
Let us return to the noiseless scenario of Example 3.1 , but now our aim is to search
for the optimal filter using the steepest-descent approach. First, let us verify the
step-size upper bound. An analysis of the correlation matrix presented in (3.21)
yields
$$
\lambda_{\max} = 1.56 \;\Longrightarrow\; \mu < \frac{2}{1.56} \approx 1.282 \qquad (3.60)
$$
Then, we arbitrarily choose two step-sizes, μ = 0.1 and μ = 1. We also consider
the initial condition $\mathbf{w}(0) = [0, 0]^T$ and 1000 iterations per run. In
Figure 3.8 , we present the time evolution of the coefficients for both step-sizes.
The figure clearly shows that in the case of the steepest-descent algorithm,
the step-size is basically related to the convergence rate of the algorithm: the
larger the step-size, the faster the convergence (within the stability bounds). In
other words, the step-size regulates the characteristics of the transient response of
the dynamic system, whereas the Wiener solution to be reached determines the
equilibrium point to which the algorithm converges.
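This transient behavior can be reproduced with a few lines of code. The following is a minimal sketch of the steepest-descent update $\mathbf{w}(n+1) = \mathbf{w}(n) + \mu(\mathbf{p} - \mathbf{R}\mathbf{w}(n))$, using an illustrative correlation matrix $\mathbf{R}$ and cross-correlation vector $\mathbf{p}$ that are stand-ins, not the quantities from Example 3.1:

```python
import numpy as np

# Illustrative stand-ins for the correlation matrix R and the
# cross-correlation vector p (NOT the values from Example 3.1).
R = np.array([[1.28, 0.28],
              [0.28, 1.28]])
p = np.array([1.0, 0.5])

def steepest_descent(mu, n_iter=1000):
    """Iterate w(n+1) = w(n) + mu * (p - R w(n)) from w(0) = [0, 0]^T."""
    w = np.zeros(2)
    for _ in range(n_iter):
        w = w + mu * (p - R @ w)
    return w

# The equilibrium point is the Wiener solution w* = R^{-1} p.
w_opt = np.linalg.solve(R, p)

# Both step-sizes lie below the stability bound 2 / 1.56 ~ 1.282,
# but the larger one converges faster.
for mu in (0.1, 1.0):
    w = steepest_descent(mu)
    err = np.linalg.norm(w - w_opt)
    print(f"mu = {mu}: w = {w}, distance to Wiener solution = {err:.2e}")
```

Both runs settle at the same Wiener solution; only the speed of the transient differs, in line with the discussion above.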
Another interesting way to study the evolution of the coefficients is to analyze
it against the frame of the contours of the MSE cost function. In Figure 3.9 we plot
the trajectories associated with both choices of the step-size. These trajectories
reveal a clear limitation of the steepest-descent method, which arises from its
 
 