$$
E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2 e^2(n)\right]
= E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2 \left(\eta^2(n) + 2\eta(n)v(n) + v^2(n)\right)\right]
$$
$$
= E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2 \eta^2(n)\right] + \sigma_v^2\, E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2\right], \qquad (4.128)
$$
where we used $E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2 \eta(n) v(n)\right] = 0$. With the reasoning used above we can write $E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2 \eta^2(n)\right] \approx E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2\right] E\left[\eta^2(n)\right]$, so (4.124) can be put as:
$$
\lim_{n\to\infty}\left\{ (1-\alpha^2)\, E\left[\|\tilde{\mathbf{w}}(n)\|^2\right] + 2\alpha\, E\left[\eta_{\mu f}(n)\, e(n)\right] + 2\alpha(1-\alpha)\, \mathbf{w}_T^T E\left[\tilde{\mathbf{w}}(n-1)\right] \right\}
$$
$$
= \lim_{n\to\infty}\left\{ E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2\right] \xi(n) - 2(1-\alpha)\, \mathbf{w}_T^T E\left[\mu\, \mathbf{f}(\mathbf{x}(n))\right] E\left[\eta(n)\right] + \sigma_v^2\, E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2\right] \right\} + (1-\alpha)^2 \|\mathbf{w}_T\|^2. \qquad (4.129)
$$
From this equation we can obtain steady state results for almost all the algorithms
presented throughout this chapter.
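As a sketch of the specialization used below, set $\alpha = 1$ in (4.129) (consistent with the fact that no $\alpha$ or $\mathbf{w}_T$ terms survive in (4.130)); then every term containing $(1-\alpha)$ vanishes and the relation reduces to:

$$
2 \lim_{n\to\infty} E\left[\eta_{\mu f}(n)\, e(n)\right] = \lim_{n\to\infty} E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2\right] \xi(n) + \sigma_v^2 \lim_{n\to\infty} E\left[\mu^2 \|\mathbf{f}(\mathbf{x}(n))\|^2\right].
$$

Since $e(n) = \eta(n) + v(n)$ with $v(n)$ zero mean and independent of $\eta(n)$, we have $E[\eta(n)e(n)] = E[\eta^2(n)] = \xi(n)$, which is the step linking the left-hand side to the EMSE in the LMS case.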
LMS algorithm: In this case $\eta_{\mu f}(n) = \mu\,\eta(n)$, so (4.129) takes the form¹⁷:
$$
0 = -2\mu\xi + \mu^2 E\left[\|\mathbf{x}(n)\|^2\right]\xi + \mu^2 \sigma_v^2\, E\left[\|\mathbf{x}(n)\|^2\right], \qquad (4.130)
$$
where we denoted $\xi = \lim_{n\to\infty} \xi(n)$. Rearranging terms in (4.130) we obtain:
$$
\xi = \frac{\mu\, \mathrm{tr}\left[\mathbf{R}_x\right] \sigma_v^2}{2 - \mu\, \mathrm{tr}\left[\mathbf{R}_x\right]}, \qquad (4.131)
$$
where we have used that $E\left[\|\mathbf{x}(n)\|^2\right] = \mathrm{tr}\left[\mathbf{R}_x\right]$. Notice that the final EMSE is increasing with $\mu$ and the variance of the noise. This means that in order to have a small EMSE the value of $\mu$ should be small. In fact, if $\mu$ is small the final EMSE can be approximated by:
$$
\xi \approx \frac{\mu\, \mathrm{tr}\left[\mathbf{R}_x\right] \sigma_v^2}{2}. \qquad (4.132)
$$
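To get a feel for the size of these quantities, consider an illustrative set of values (not taken from the text): $\mathrm{tr}[\mathbf{R}_x] = 10$, $\sigma_v^2 = 10^{-3}$ and $\mu = 0.01$, so that $\mu\,\mathrm{tr}[\mathbf{R}_x] = 0.1$. Then

$$
\xi = \frac{0.01 \cdot 10 \cdot 10^{-3}}{2 - 0.1} \approx 5.3 \times 10^{-5},
\qquad
\xi \approx \frac{0.01 \cdot 10 \cdot 10^{-3}}{2} = 5 \times 10^{-5},
$$

so for step sizes well below the stability limit the simplified expression (4.132) stays within a few percent of (4.131), while the gap widens as $\mu\,\mathrm{tr}[\mathbf{R}_x]$ approaches 2.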
From (4.131) we can also see that there is a maximum value of $\mu$ before $\xi$ becomes infinite, and after which it would take negative values, which is meaningless. We see that this value of $\mu$ coincides with the approximate upper bound we derived in (4.106), which is satisfactory. Obviously, from (4.117) and (4.131) we also have:
$$
J = \lim_{n\to\infty} E\left[|e(n)|^2\right] = \sigma_v^2 + \frac{\mu\, \mathrm{tr}\left[\mathbf{R}_x\right] \sigma_v^2}{2 - \mu\, \mathrm{tr}\left[\mathbf{R}_x\right]}. \qquad (4.133)
$$
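These closed-form predictions are easy to check numerically. The following is a minimal simulation sketch (not from the text): it runs LMS on a system-identification setup with white Gaussian input, so that $\mathrm{tr}[\mathbf{R}_x]$ equals the filter length, and compares the time-averaged steady-state EMSE and MSE with (4.131) and (4.133). All parameter values and variable names are illustrative choices.

# Minimal LMS steady-state check (illustrative sketch; parameter values are arbitrary).
import numpy as np

rng = np.random.default_rng(0)

L = 8                         # filter length; tr[R_x] = L for unit-variance white input
sigma_v2 = 1e-3               # noise variance sigma_v^2
mu = 0.01                     # step size
w_T = rng.standard_normal(L)  # "true" system to be identified (arbitrary)

n_iter = 200000               # total iterations
n_avg = 50000                 # last samples used to average the steady-state errors

w = np.zeros(L)
eta2_sum = 0.0                # accumulates eta^2(n) over the averaging window
e2_sum = 0.0                  # accumulates e^2(n) over the averaging window
for n in range(n_iter):
    x = rng.standard_normal(L)
    v = np.sqrt(sigma_v2) * rng.standard_normal()
    eta = (w_T - w) @ x       # a priori error eta(n)
    e = eta + v               # output error e(n) = eta(n) + v(n)
    w = w + mu * e * x        # LMS update
    if n >= n_iter - n_avg:
        eta2_sum += eta * eta
        e2_sum += e * e

tr_Rx = L
xi_theory = mu * sigma_v2 * tr_Rx / (2.0 - mu * tr_Rx)   # EMSE, Eq. (4.131)
J_theory = sigma_v2 + xi_theory                          # MSE,  Eq. (4.133)
print("EMSE: simulated %.3e, theory %.3e" % (eta2_sum / n_avg, xi_theory))
print("MSE : simulated %.3e, theory %.3e" % (e2_sum / n_avg, J_theory))

With these values (4.131) predicts an EMSE of about $4.2 \times 10^{-5}$, and the simulated averages should land close to that figure and to $\sigma_v^2$ plus that amount for the MSE.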
¹⁷ Although in (4.130) we should put $\approx$, in an abuse of notation we state it as an equality.
 
Custom Search