of concentrating most of their energy in a small fraction of their coefficients. This is typically the case in applications such as acoustic echo cancellation [20] and wireless channel identification [21, 22]. In such systems, and assuming that the initial condition is zero, it is natural to think that the larger coefficients will need more time to converge than the ones which are very small or zero. As we saw in
Chap. 4, the speed of convergence is governed by the step size $\mu$. In [23], Duttweiler proposed to use a different variable step size for each component of the adaptive filter $\mathbf{w}(n)$. Then, the step size can be written as:
$$\boldsymbol{\mu}(n) = \operatorname{diag}\left(\mu_0(n), \mu_1(n), \ldots, \mu_{L-1}(n)\right). \qquad (6.1)$$
The dynamics of each step size $\mu_i(n)$, $i = 0, \ldots, L-1$, are governed by the dynamics of the adaptive filter $\mathbf{w}(n)$ itself. The specific mathematical details can be checked in [23], but basically the step sizes $\mu_i(n)$ are proportional to the coefficient magnitudes $|w_i(n)|$. In this way, the largest coefficients in $\mathbf{w}(n)$ will have larger step sizes, improving their convergence speed. When the true system to be identified is sparse, this choice could lead to
important savings in speed of convergence without compromising the steady-state behavior, or even improving it. Several variants of this idea exist [24-26]. In [27], the potential sparsity of the system to be identified is exploited as a priori information, leading to a more elegant formulation and to the derivation of other algorithms. The idea of exploiting sparsity as a priori information can be formalized using Riemannian manifolds [28, 29]. Recently, new results on sparse system identification have been obtained using ideas from compressed sampling [30]. The reader can consult [31-34].
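A proportionate update of this kind can be sketched in a few lines. The sketch below is illustrative only, not Duttweiler's exact algorithm from [23]: the function name, the small regularizer `delta` (which keeps zero-valued coefficients adapting), and the unit-mean normalization of the gains are our assumptions.

```python
import numpy as np

def proportionate_lms(x, d, L, mu=0.02, delta=1e-3):
    """LMS with per-coefficient step sizes proportional to |w_i(n)|,
    in the spirit of (6.1). Illustrative sketch, not the exact rule of [23]."""
    w = np.zeros(L)
    e = np.zeros(len(x))
    for n in range(L - 1, len(x)):
        xn = x[n - L + 1:n + 1][::-1]   # regressor [x(n), ..., x(n-L+1)]
        e[n] = d[n] - w @ xn
        g = np.abs(w) + delta           # per-coefficient gains ~ |w_i(n)|
        g = g / g.mean()                # normalize so the average gain is 1
        w = w + mu * g * xn * e[n]      # component-wise step sizes mu * g_i
    return w, e
```

On a sparse system, the few large coefficients receive gains well above one and converge faster, while the many near-zero coefficients receive tiny gains and stay put, which is exactly the behavior the text describes.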
6.4 Robust Adaptive Filters
In real-world adaptive filtering applications, severe impairments may occur. Perturbations such as background and impulsive noise can deteriorate the performance of many adaptive filters in a system identification setup. Consider, for example, that the input-output pairs are related by the linear regression model $d(n) = \mathbf{w}_T^T \mathbf{x}(n) + v(n)$, where $v(n)$ is additive noise. Consider also, for example, the LMS recursion, which can be written as:
$$\mathbf{w}(n) = \mathbf{w}(n-1) + \mu\,\mathbf{x}(n)\,e(n). \qquad (6.2)$$
It is clear that $e(n) = \tilde{\mathbf{w}}^T(n-1)\,\mathbf{x}(n) + v(n)$, where $\tilde{\mathbf{w}}(n) = \mathbf{w}_T - \mathbf{w}(n)$ is the usual misalignment vector. Suppose then, that the adaptive filter has a given estimate of the true system at a certain time step. If a large noise sample perturbs it, the result will be a large change in the system estimate:

$$\left\|\mathbf{w}(n) - \mathbf{w}(n-1)\right\| = \mu\,|e(n)|\,\|\mathbf{x}(n)\| \approx \mu\,\|\mathbf{x}(n)\|\,|v(n)|. \qquad (6.3)$$
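This sensitivity of the LMS recursion (6.2) to a single impulsive noise sample can be checked numerically. The helper below is a hypothetical sketch (the function name and parameters are ours); it applies one LMS step and lets us compare the size of the update for a clean desired sample against one hit by a large outlier $v(n)$.

```python
import numpy as np

def lms_step(w, xn, dn, mu):
    """One LMS iteration, Eq. (6.2): w(n) = w(n-1) + mu * x(n) * e(n)."""
    e = dn - w @ xn
    return w + mu * xn * e, e
```

With the same regressor and step size, the norm of the update is $\mu\,|e(n)|\,\|\mathbf{x}(n)\|$, so an error inflated by a factor of 1000 by an outlier moves the estimate 1000 times farther in one step: this is exactly the vulnerability that motivates robust adaptive filters.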