The evolution of the top entry of $W_i$ describes the mean-square deviation of the filter, $E\,\|\tilde{w}_i\|^2$, while the evolution of the second entry of $W_i$ relates to the learning behavior of the filter because

$E\, e^2(i) \;=\; E\, e_a^2(i) + \sigma_v^2 \;=\; E\,\|\tilde{w}_{i-1}\|^2_{R_u} + \sigma_v^2 .$
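As a rough numerical illustration of this identity (a minimal Monte Carlo sketch, not taken from the text: the plain LMS update, filter length, step size, and white Gaussian signals below are all illustrative assumptions), one can ensemble-average $e^2(i)$ and compare it with the estimate of $E\,\|\tilde{w}_{i-1}\|^2_{R_u} + \sigma_v^2$:

    import numpy as np

    rng = np.random.default_rng(0)
    M, mu, sigma_v2 = 10, 0.01, 1e-3        # assumed filter length, step size, noise variance
    R_u = np.eye(M)                         # white regressors, so R_u = I
    w_o = rng.standard_normal(M)            # unknown system to be identified

    runs, iters = 2000, 200
    e2 = np.zeros(iters)                    # ensemble estimate of E e^2(i)
    wtil_Ru = np.zeros(iters)               # ensemble estimate of E ||w~_{i-1}||^2_{R_u}

    for _ in range(runs):
        w = np.zeros(M)
        for i in range(iters):
            u = rng.standard_normal(M)
            v = np.sqrt(sigma_v2) * rng.standard_normal()
            e = (u @ w_o + v) - u @ w       # output error e(i)
            wtil = w_o - w                  # weight-error vector entering iteration i
            wtil_Ru[i] += wtil @ R_u @ wtil
            e2[i] += e**2
            w = w + mu * e * u              # plain LMS update (g[u] = 1)

    e2 /= runs
    wtil_Ru /= runs
    # Both curves should agree to within Monte Carlo error at every iteration:
    print(np.max(np.abs(e2 - (wtil_Ru + sigma_v2))))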
1.12.5 Long Filter Approximation
The earlier results on filters with error nonlinearities can be used to provide
an alternative simplified analysis of adaptive filters with data nonlinearities
as in Equation 1.3. We did this in Sections 1.8 and 1.10 by resorting to sim-
plifications that resulted from the small-step-size and fourth-order moment
approximations.
Indeed, starting from Equation 1.10, substituting $e_p(i)$ in terms of $\{e_a(i), e(i)\}$ from Equation 1.8, and taking expectations, we arrive at the variance relation

$E\,\|\tilde{w}_i\|^2 \;=\; E\,\|\tilde{w}_{i-1}\|^2 \;-\; 2\mu\, E\!\left[\frac{e_a(i)\, e(i)}{g[u_i]}\right] \;+\; \mu^2\, E\!\left[\frac{\|u_i\|^2\, e^2(i)}{g^2[u_i]}\right]. \qquad (1.75)$
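Before expectations are taken, the same energy balance holds exactly for every realization, assuming the data-normalized update has the form $w_i = w_{i-1} + \mu\, u_i\, e(i)/g[u_i]$. The sketch below (illustrative assumptions only: an NLMS-style choice $g[u_i] = \epsilon + \|u_i\|^2$, white Gaussian data, and arbitrary parameter values) checks the per-iteration identity whose expectation gives 1.75:

    import numpy as np

    rng = np.random.default_rng(1)
    M, mu, eps, sigma_v = 8, 0.5, 1e-6, 0.01   # assumed length, step size, regularization, noise std
    w_o = rng.standard_normal(M)
    w = np.zeros(M)

    for i in range(1000):
        u = rng.standard_normal(M)
        d = u @ w_o + sigma_v * rng.standard_normal()
        e = d - u @ w                          # output error e(i)
        e_a = u @ (w_o - w)                    # a priori error e_a(i)
        g = eps + u @ u                        # g[u_i]: NLMS-style normalization (one possible choice)
        msd_before = (w_o - w) @ (w_o - w)     # ||w~_{i-1}||^2
        w = w + mu * (e / g) * u               # data-normalized update
        msd_after = (w_o - w) @ (w_o - w)      # ||w~_i||^2
        rhs = msd_before - 2*mu*e_a*e/g + mu**2 * (u @ u) * e**2 / g**2
        assert np.isclose(msd_after, rhs)      # energy relation behind (1.75), before expectations

Averaging both sides of the last identity over realizations recovers relation 1.75.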
This relation is equivalent to Equation 1.13, except that in Equation 1.13 we proceeded further and expressed the terms $E\, e_a(i)\, e(i)$ and $E\, e^2(i)$ as weighted norms of $\tilde{w}_{i-1}$. Relation 1.75 has the same form as variance relation 1.58 used for filters with error nonlinearities. Observe in particular that the function $e(i)/g[u_i]$ in data-normalized filters plays the role of $f[e]$ in nonlinear error filters.
Now by following the arguments of Section 1.12.1, and under the following assumptions:

$e_a(i)$ and $e(i)$ are jointly Gaussian random variables,
$\|u_i\|^2$ and $g[u_i]$ are independent of $e(i)$, and          (1.76)
the regressors $u_i$ are independent and identically distributed,
we can evaluate the expectations

$E\!\left[\frac{e_a(i)\, e(i)}{g[u_i]}\right] \qquad\text{and}\qquad E\!\left[\frac{\|u_i\|^2\, e^2(i)}{g^2[u_i]}\right]$
and conclude that variance relation 1.75 reduces to

$E\,\|\tilde{w}_i\|^2 \;=\; E\,\|\tilde{w}_{i-1}\|^2 \;-\; 2\mu\, h_G\, E\, e_a^2(i) \;+\; \mu^2\, E\!\left[\frac{\|u_i\|^2}{g^2[u_i]}\right]\!\left(E\, e_a^2(i) + \sigma_v^2\right),$
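As a minimal numerical sanity check of this reduced recursion (illustrative only: the ensemble size, NLMS-style $g[u_i] = \epsilon + \|u_i\|^2$, and white Gaussian regressors are assumptions, and since $h_G$ is defined outside this excerpt it is estimated below as the ratio $E[e_a(i)\,e(i)/g[u_i]] / E\,e_a^2(i)$ that it stands in for), one can compare the ensemble-averaged $E\,\|\tilde{w}_i\|^2$ against the right-hand side built from ensemble moment estimates:

    import numpy as np

    rng = np.random.default_rng(2)
    M, mu, eps, sigma_v2 = 32, 0.2, 1e-6, 1e-3   # a reasonably long filter; parameters are assumed
    w_o = rng.standard_normal(M)
    runs, iters = 2000, 60

    W = np.zeros((runs, M))                        # ensemble of weight estimates
    msd = [np.mean(np.sum((w_o - W)**2, axis=1))]  # E ||w~||^2 before the first update
    rhs = []

    for i in range(iters):
        U = rng.standard_normal((runs, M))
        V = np.sqrt(sigma_v2) * rng.standard_normal(runs)
        Ea = np.sum(U * (w_o - W), axis=1)         # a priori errors e_a(i)
        Er = Ea + V                                # output errors e(i)
        G = eps + np.sum(U * U, axis=1)            # g[u_i]
        h_G = np.mean(Ea * Er / G) / np.mean(Ea**2)      # empirical stand-in for h_G
        u_g2 = np.mean(np.sum(U * U, axis=1) / G**2)     # estimate of E ||u_i||^2 / g^2[u_i]
        rhs.append(msd[-1] - 2*mu*h_G*np.mean(Ea**2)
                   + mu**2 * u_g2 * (np.mean(Ea**2) + sigma_v2))
        W = W + mu * (Er / G)[:, None] * U         # data-normalized update
        msd.append(np.mean(np.sum((w_o - W)**2, axis=1)))

    # Under assumptions (1.76) and for a long filter, the two curves stay close;
    # print the worst-case relative gap between simulation and the reduced recursion:
    print(np.max(np.abs(np.array(msd[1:]) - np.array(rhs)) / np.array(msd[1:])))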