the EMSE as¹⁶:
\[
\lambda_{\min}[\mathbf{R}_x]\, E\|\tilde{\mathbf{w}}(n-1)\|^2 \;\le\; \xi(n) \;\le\; \lambda_{\max}[\mathbf{R}_x]\, E\|\tilde{\mathbf{w}}(n-1)\|^2, \tag{4.119}
\]
where we used the fact that $\mathrm{tr}[\mathbf{K}(n-1)] = E\|\tilde{\mathbf{w}}(n-1)\|^2$. Equation (4.119) is also valid when $n \to \infty$, provided that the adaptive filter is stable in the MSD sense. This means that the steady-state MSD will have almost the same behavior with respect to the step size as the final EMSE (at least, we can bound it uniformly from below and above with respect to the EMSE behavior as a function of the step size).
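As an illustrative numerical check (my own sketch, not from the text), the bounds in (4.119) can be observed in a simple LMS-type steady-state simulation with correlated input; all parameter values below are arbitrary choices:

```python
import numpy as np

# A sketch (not from the text): LMS-type simulation with correlated input,
# checking that the steady-state EMSE and MSD estimates satisfy
#   lambda_min[R_x] * MSD <= EMSE <= lambda_max[R_x] * MSD
# as in (4.119). All parameter values are arbitrary illustrative choices.
rng = np.random.default_rng(0)
L, mu, sigma2_v, n_iter = 4, 0.01, 1e-3, 50_000

A = rng.standard_normal((L, L)) / np.sqrt(L)
R_x = A @ A.T + 0.1 * np.eye(L)          # strictly positive definite input covariance
lam = np.linalg.eigvalsh(R_x)            # eigenvalues, ascending
C = np.linalg.cholesky(R_x)              # to draw x(n) with covariance R_x

w_T = rng.standard_normal(L)             # underlying optimal solution
w = np.zeros(L)                          # adaptive weights
msd_acc = emse_acc = 0.0
burn = n_iter // 2                       # average over the second half only
for n in range(n_iter):
    x = C @ rng.standard_normal(L)
    v = np.sqrt(sigma2_v) * rng.standard_normal()
    e = (w_T - w) @ x + v                # a priori error e(n)
    w = w + mu * x * e                   # LMS update
    if n >= burn:
        w_err = w_T - w
        msd_acc += w_err @ w_err         # accumulates ||w~(n)||^2
        emse_acc += w_err @ R_x @ w_err  # accumulates w~^T R_x w~
msd = msd_acc / (n_iter - burn)
emse = emse_acc / (n_iter - burn)

assert lam[0] * msd <= emse <= lam[-1] * msd
```

Because the EMSE is the quadratic form $E[\tilde{\mathbf{w}}^T \mathbf{R}_x \tilde{\mathbf{w}}]$, each accumulated sample obeys the Rayleigh-quotient bounds individually, so the check is insensitive to simulation noise.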
If the input is such that $\mathbf{R}_x = \sigma_x^2 \mathbf{I}_L$,
\[
\xi(n) = \sigma_x^2\, E\|\tilde{\mathbf{w}}(n-1)\|^2, \tag{4.120}
\]
which means that for white input regressors we can know exactly the behavior of the final MSD from the asymptotic behavior of $\xi(n)$.
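For the white-input case (4.120), the exact equality (rather than just the bounds) is easy to confirm numerically; this small sketch uses arbitrary illustrative values:

```python
import numpy as np

# Sketch of the white-input special case (4.120): when R_x = sigma_x^2 * I_L,
# the EMSE  xi = E[w~^T R_x w~]  equals  sigma_x^2 * E||w~||^2  exactly,
# so EMSE and MSD differ only by the known constant sigma_x^2.
rng = np.random.default_rng(1)
L, sigma2_x = 8, 2.5                     # arbitrary illustrative values
R_x = sigma2_x * np.eye(L)

W = rng.standard_normal((5000, L))       # samples standing in for w~(n-1)
msd = np.mean(np.sum(W**2, axis=1))      # E||w~||^2
emse = np.mean(np.einsum('ij,jk,ik->i', W, R_x, W))  # E[w~^T R_x w~]

assert np.isclose(emse, sigma2_x * msd)
```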
The interesting thing, however, is that for the final EMSE we do not need to resort
to the full independence assumption between the input regressors that was used for
the MSD stability analysis. In order to provide a unified treatment to the steady state
behavior and make the results valid for a large class of algorithms, we will continue
with the general model assumed in ( 4.55 ), which is reproduced here in the following
equivalent form:
\[
\tilde{\mathbf{w}}(n) = \alpha\, \tilde{\mathbf{w}}(n-1) - \mu\, \mathbf{f}(\mathbf{x}(n))\, e(n) + (1-\alpha)\, \mathbf{w}_T. \tag{4.121}
\]
Squaring both sides we obtain:
\[
\begin{aligned}
\|\tilde{\mathbf{w}}(n)\|^2 ={} & \alpha^2 \|\tilde{\mathbf{w}}(n-1)\|^2 - 2\alpha\mu\, \tilde{\mathbf{w}}^T(n-1)\, \mathbf{f}(\mathbf{x}(n))\, e(n) + 2\alpha(1-\alpha)\, \tilde{\mathbf{w}}^T(n-1)\, \mathbf{w}_T \\
& + \mu^2\, \mathbf{f}^T(\mathbf{x}(n))\, \mathbf{f}(\mathbf{x}(n))\, e^2(n) - 2(1-\alpha)\mu\, \mathbf{f}^T(\mathbf{x}(n))\, \mathbf{w}_T\, e(n) \\
& + (1-\alpha)^2 \|\mathbf{w}_T\|^2. 
\end{aligned} \tag{4.122}
\]
If we define $\eta_{\mu f}(n) = \mu\, \tilde{\mathbf{w}}^T(n-1)\, \mathbf{f}(\mathbf{x}(n))$ and apply the expectation operator on both sides, we obtain:
\[
\begin{aligned}
E\|\tilde{\mathbf{w}}(n)\|^2 ={} & \alpha^2 E\|\tilde{\mathbf{w}}(n-1)\|^2 - 2\alpha\, E\left[\eta_{\mu f}(n)\, e(n)\right] + 2\alpha(1-\alpha)\, E\left[\tilde{\mathbf{w}}^T(n-1)\right] \mathbf{w}_T \\
& + \mu^2\, E\left[\mathbf{f}^T(\mathbf{x}(n))\, \mathbf{f}(\mathbf{x}(n))\, e^2(n)\right] - 2(1-\alpha)\mu\, E\left[\mathbf{f}^T(\mathbf{x}(n))\, e(n)\right] \mathbf{w}_T \\
& + (1-\alpha)^2 \|\mathbf{w}_T\|^2,
\end{aligned} \tag{4.123}
\]
¹⁶ For this, we need $\mathbf{R}_x$ to be strictly positive definite, which was assumed in Sect. 4.5.1.
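The term-by-term expansion in (4.122), and hence the structure of (4.123), can be sanity-checked numerically. In this sketch all quantities are randomly drawn placeholders rather than any particular algorithm, since the check is purely algebraic:

```python
import numpy as np

# Sanity check of the expansion (4.122): squaring the general model (4.121)
#   w~(n) = alpha*w~(n-1) - mu*f(x(n))*e(n) + (1-alpha)*w_T
# term by term. All quantities below are randomly drawn placeholders.
rng = np.random.default_rng(2)
L, alpha, mu = 6, 0.95, 0.01
w_prev = rng.standard_normal(L)          # w~(n-1)
f = rng.standard_normal(L)               # f(x(n))
e = rng.standard_normal()                # e(n)
w_T = rng.standard_normal(L)

w_new = alpha * w_prev - mu * f * e + (1 - alpha) * w_T   # model (4.121)
lhs = w_new @ w_new                                       # ||w~(n)||^2

rhs = (alpha**2 * (w_prev @ w_prev)                       # alpha^2 ||w~(n-1)||^2
       - 2 * alpha * mu * (w_prev @ f) * e                # -2*alpha*eta_{mu f}(n)*e(n)
       + 2 * alpha * (1 - alpha) * (w_prev @ w_T)         # cross term with w_T
       + mu**2 * (f @ f) * e**2                           # mu^2 f^T f e^2
       - 2 * (1 - alpha) * mu * (f @ w_T) * e             # -2(1-alpha) mu f^T w_T e
       + (1 - alpha)**2 * (w_T @ w_T))                    # (1-alpha)^2 ||w_T||^2

assert np.isclose(lhs, rhs)
```

Since the identity holds for every realization, taking expectations of both sides reproduces (4.123) directly.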