minimal and the convergence is the fastest. When the input signal is more colored, the dispersion between the eigenvalues is greater, i.e., R_x ≠ σ_x² I_L and χ(R_x) > 1, and the convergence becomes slower. This is the same characteristic we saw in the SD algorithm. As many real-world signals are highly colored, e.g., speech signals, the slow convergence might preclude some algorithms from being used in certain applications. In order to improve the convergence behavior of an adaptive algorithm working with this kind of signal, some pre-whitening of the input signal can be done before it enters the adaptive filter [46, 47] (using, for example, linear prediction as in Sect. 2.5.1). In fact, other adaptive algorithms, such as the Affine Projection algorithm (APA) and the Recursive Least Squares (RLS) algorithm, are better suited for colored signals. Both algorithms will be presented later.
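To make the effect of coloring concrete, here is a minimal sketch (not from the text) that computes the eigenvalue spread χ(R_x) of the input autocorrelation matrix for a white input versus a colored AR(1) input. The function name, the filter length L = 8, and the AR(1) coefficient are illustrative choices of ours.

```python
import numpy as np

L = 8            # filter length (illustrative)
sigma_x2 = 1.0   # input power

def eigenvalue_spread(a):
    # AR(1) autocorrelation r[k] = sigma_x^2 * a^|k| (white input when a = 0);
    # R_x is the L x L symmetric Toeplitz matrix built from r[0..L-1].
    r = sigma_x2 * a ** np.arange(L)
    R = np.array([[r[abs(i - j)] for j in range(L)] for i in range(L)])
    lam = np.linalg.eigvalsh(R)
    return lam.max() / lam.min()

print(eigenvalue_spread(0.0))  # white input: spread equals 1
print(eigenvalue_spread(0.9))  # colored input: spread much greater than 1
```

The more correlated the input (larger AR coefficient), the larger the spread, which is exactly the situation where pre-whitening, the APA, or the RLS algorithm pays off.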
Also, general stochastic gradient adaptive algorithms present the behavior mentioned in Sect. 4.2.2. That is, as the value of μ decreases, the steady state error gets smaller and the time required to reach that steady state gets larger, i.e., the speed of convergence is reduced. In the same manner, as μ increases, the steady state error and the speed of convergence increase as well. The behavior mentioned above happens when μ < μ_c, where the exact value of μ_c depends on the particular algorithm considered. When μ is increased above μ_c, the speed of convergence could be reduced and the steady state error is increased. From this behavior we conclude that, in practice, the best of an adaptive algorithm can be obtained by restricting the choice of the step size to μ ∈ (0, μ_c]. The obtention of μ_c is difficult, not only for the mathematics and assumptions needed to obtain a closed form expression, but also for the criterion used to quantify the convergence speed of an adaptive algorithm. A reasonable criterion would be to obtain the value of μ such that the convergence speed of the slowest mode is maximized, as we did for the SD algorithm in Sect. 3.2.1. When the same is done for the mean square behavior of the LMS, assuming that R_x = σ_x² I_L and the input is Gaussian, it can be shown that

    μ_c = 1 / [(L + 2) σ_x²],                                   (4.149)

which is the midpoint of the corresponding stability interval computed using (4.108).
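As an illustrative sketch (not from the text), the following LMS loop identifies a hypothetical unknown system under the assumptions behind (4.149): white Gaussian input with R_x = σ_x² I_L, and the step size set to μ = 1/((L + 2)σ_x²). The system w_true, the noise level, and the iteration count are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 16           # filter length (illustrative)
sigma_x2 = 1.0   # white-input power, so R_x = sigma_x^2 * I_L
mu = 1.0 / ((L + 2) * sigma_x2)   # step size from (4.149)

w_true = rng.standard_normal(L)   # hypothetical unknown system
w = np.zeros(L)
x_buf = np.zeros(L)               # most recent L input samples

for n in range(20000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()                   # white Gaussian input
    d = w_true @ x_buf + 1e-3 * rng.standard_normal()  # desired signal + noise
    e = d - w @ x_buf                                  # a priori error
    w = w + mu * e * x_buf                             # LMS update

print(np.linalg.norm(w - w_true))  # small residual misalignment
```

Pushing μ above this value trades a faster initial transient for a larger steady state error, matching the trade-off described above.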
Increasing μ above that value will decrease the convergence speed of the slowest mode and increase the steady state error. Decreasing μ below that value will also decrease the convergence speed of the slowest mode, but will decrease the steady state error. This confirms, for this case, the fact that (0, μ_c] is the most useful range of μ, as mentioned above. With extra effort, it can be proved that under the same conditions and with sufficiently large L, the value of μ_c for the NLMS algorithm is close to one, confirming also the intuitive and heuristic discussion in Sect. 4.2.2. In this way, for the NLMS algorithm, the most useful range of μ is (0, 1].
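A minimal NLMS sketch (not from the text) with the normalized step size at the upper end of the range (0, 1]. The regularization constant, the hypothetical unknown system, and the noise level are assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 16
mu_n = 1.0     # normalized step size, upper end of the useful range (0, 1]
eps = 1e-8     # small regularization (our choice) to avoid division by zero

w_true = rng.standard_normal(L)   # hypothetical unknown system
w = np.zeros(L)
x_buf = np.zeros(L)

for n in range(5000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()
    d = w_true @ x_buf + 1e-3 * rng.standard_normal()
    e = d - w @ x_buf
    w = w + mu_n * e * x_buf / (eps + x_buf @ x_buf)   # NLMS update

print(np.linalg.norm(w - w_true))
```

With the step normalized by the input energy, the update is insensitive to the input power, which is why a value close to one works well here.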
The tracking behavior of an adaptive filter is another important issue to be considered. As an adaptive algorithm works with the instantaneous data, it can potentially track the statistical variations in it. This is one of the most useful properties of an adaptive filter. In our analyses we considered that the signals are stationary and that the system w_T does not vary with time. In practice, this is generally not true. For exam-