ple, in an acoustic echo cancellation application, the input signals are speech signals, which are nonstationary, and the room acoustic impulse response (the system to be identified by the adaptive filter) could be a time-variant linear system [47]. This leads to the issue of how well an adaptive algorithm is able to track and adapt to these variations. It can be shown [19, 45] that, as in the case of the transient behavior, there is also a tradeoff between the steady-state behavior and the tracking behavior. As μ increases, the algorithm is able to react faster to changes in the signal statistics or to changes in the system w_T to be identified, but its steady-state performance worsens. Several interesting approaches [48] exist to calculate expressions where this tradeoff can be quantified without needing to resort to a full transient analysis. In fact, the approach taken in Sect. 4.5.3 can be easily modified to include the tracking behavior of a large family of stochastic gradient adaptive filters. For more in-depth discussions of this important topic, the interested reader can see [19, 45].
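As a rough numerical illustration of this tradeoff (a sketch only; the filter length, drift rate, noise level, and the two step sizes below are arbitrary choices, not taken from the text), one can run LMS against a slowly drifting system and compare a small and a large step size:

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_tracking_mse(mu, L=8, N=20000, drift=1e-3):
    """Run LMS while the true system performs a random walk.

    Returns the average squared output error over the second half
    of the run (after initial convergence).
    """
    w_true = rng.standard_normal(L)   # system to be identified
    w = np.zeros(L)                   # adaptive filter estimate
    x_buf = np.zeros(L)               # regressor (tap-delay line)
    errs = []
    for n in range(N):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = rng.standard_normal()
        w_true += drift * rng.standard_normal(L)           # time-variant system
        d = w_true @ x_buf + 0.01 * rng.standard_normal()  # noisy desired output
        e = d - w @ x_buf                                  # a priori error
        w += mu * e * x_buf                                # LMS update
        errs.append(e * e)
    return np.mean(errs[N // 2:])

mse_small = lms_tracking_mse(mu=0.005)
mse_large = lms_tracking_mse(mu=0.05)
```

Under this drift, the larger step size tracks the moving system better and attains a lower error; with a static system (drift = 0) the ordering typically reverses, since only the steady-state misadjustment remains.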
4.6 Affine Projection Algorithm
From (4.23) it has been seen that when μ = 1 and δ = 0, the NLMS projects at each time step the weight error vector onto the space orthogonal to the regressor x(n). The projector has L − 1 eigenvalues equal to one and the remaining one equal to zero. Now, what would happen if the projector has K eigenvalues equal to zero and L − K equal to one? In this case, extending the discussion from Sect. 4.2.2, an increase in the average speed of convergence can be expected. This is the main idea behind the Affine Projection algorithm (APA), originally introduced in [49]. The cost of this is an increase in computational complexity and possibly in steady-state error. However, when colored input signals are used, the convergence speed of LMS and NLMS can deteriorate so much [45] that the APA appears as a very appealing alternative.
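The eigenvalue structure of such a projector can be checked numerically. In the sketch below (L, K, and the random regressors are arbitrary illustrative choices), the K regressors are stacked as columns of a matrix and the orthogonal-complement projector is formed explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K = 8, 3

# K regressors stacked as columns of an L x K matrix
X = rng.standard_normal((L, K))

# Projector onto the orthogonal complement of the column span of X:
# the component of a vector lying in span{X} is removed.
P = np.eye(L) - X @ np.linalg.inv(X.T @ X) @ X.T

eigvals = np.sort(np.linalg.eigvalsh(P))
# K eigenvalues equal to zero, L - K equal to one
assert np.allclose(eigvals[:K], 0.0, atol=1e-10)
assert np.allclose(eigvals[K:], 1.0, atol=1e-10)
```

For K = 1 this reduces to the NLMS projector described above, with a single zero eigenvalue along the direction of x(n).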
4.6.1 APA as the Solution to a Projection Problem
First, we introduce the L × K data matrix X(n) = [x(n) x(n − 1) ··· x(n − K + 1)], the K × 1 desired output vector d(n) = [d(n) d(n − 1) ··· d(n − K + 1)]^T, the a priori^18 output estimation error vector e(n) = d(n) − X^T(n) w(n − 1), and the a posteriori output estimation error vector e_p(n) = d(n) − X^T(n) w(n). Following the same ideas used for the NLMS, the regularized APA comes as the solution to the problem of finding the estimate w(n) that solves:
18 It should be emphasized that ‖e(n)‖^2 is not the same as the sum of the squares of the last K output estimation errors, {e(i)}_{i = n − K + 1}^{n}. Each component of the vector e(n) is computed using the same filter estimate w(n − 1).
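Although the optimization problem itself is stated beyond this excerpt, the resulting regularized APA recursion, w(n) = w(n − 1) + μ X(n)(X^T(n)X(n) + δI)^{−1} e(n), is standard. A minimal sketch of one update, with arbitrary illustrative values for L, K, μ, and δ, is:

```python
import numpy as np

def apa_update(w, X, d, mu=1.0, delta=1e-6):
    """One step of the regularized Affine Projection algorithm.

    w : (L,)   current filter estimate w(n-1)
    X : (L, K) data matrix [x(n) x(n-1) ... x(n-K+1)]
    d : (K,)   desired output vector
    Returns the updated estimate w(n).
    """
    e = d - X.T @ w   # a priori output estimation error vector e(n)
    K = X.shape[1]
    return w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)

rng = np.random.default_rng(2)
L, K = 8, 3
X = rng.standard_normal((L, K))
d = rng.standard_normal(K)
w_new = apa_update(np.zeros(L), X, d)
```

With μ = 1 and a small δ, the a posteriori error e_p(n) = d(n) − X^T(n) w(n) is driven (nearly) to zero: the update projects the estimate onto the affine set satisfying the K constraints, which is where the algorithm gets its name.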