subtracting the conditional mean $\mathbf{b}_l$ from $\boldsymbol{\mu}_l$ in (5.14) as follows:

$$
\mathbf{S}_g(l)\,\boldsymbol{\gamma}_l + \mathbf{S}_m(l)\,\boldsymbol{\mu}_l - \alpha(\omega)\,\mathbf{a}(\omega)\,e^{j\omega l}
= \mathbf{S}_m(l)\bigl(\boldsymbol{\mu}_l - \mathbf{b}_l\bigr)
+ \Bigl[\mathbf{S}_g(l)\,\boldsymbol{\gamma}_l + \mathbf{S}_m(l)\,\mathbf{b}_l - \alpha(\omega)\,\mathbf{a}(\omega)\,e^{j\omega l}\Bigr].
\tag{5.21}
$$
The cross-terms that result from the expansion of the quadratic term in (5.14)
vanish when we take the conditional expectation. Therefore the expectation step
yields
$$
\begin{aligned}
\mathrm{E}&\left\{\frac{1}{L}\ln p\bigl(\{\boldsymbol{\gamma}_l, \boldsymbol{\mu}_l\} \,\big|\, \alpha(\omega), \mathbf{Q}(\omega)\bigr) \,\Big|\, \{\boldsymbol{\gamma}_l\}, \hat{\alpha}_{i-1}(\omega), \hat{\mathbf{Q}}_{i-1}(\omega)\right\} \\
&= -M\ln\pi - \ln\bigl|\mathbf{Q}(\omega)\bigr|
- \frac{1}{L}\sum_{l=0}^{L-1}\operatorname{tr}\Bigl\{\mathbf{Q}^{-1}(\omega)\Bigl[\mathbf{S}_m(l)\,\mathbf{K}_l\,\mathbf{S}_m^T(l) \\
&\qquad + \bigl(\mathbf{S}_g(l)\boldsymbol{\gamma}_l + \mathbf{S}_m(l)\mathbf{b}_l - \alpha(\omega)\mathbf{a}(\omega)e^{j\omega l}\bigr)
\bigl(\mathbf{S}_g(l)\boldsymbol{\gamma}_l + \mathbf{S}_m(l)\mathbf{b}_l - \alpha(\omega)\mathbf{a}(\omega)e^{j\omega l}\bigr)^H\Bigr]\Bigr\}.
\end{aligned}
\tag{5.22}
$$
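The first term inside the trace in (5.22) comes directly from the decomposition (5.21). A brief sketch of this step, assuming (as the notation suggests) that $\mathbf{b}_l$ and $\mathbf{K}_l$ are the conditional mean and covariance of $\boldsymbol{\mu}_l$ given the observed data and the previous parameter estimates:

$$
\mathrm{E}\Bigl\{\mathbf{S}_m(l)\bigl(\boldsymbol{\mu}_l - \mathbf{b}_l\bigr) \,\Big|\, \{\boldsymbol{\gamma}_l\}, \hat{\alpha}_{i-1}(\omega), \hat{\mathbf{Q}}_{i-1}(\omega)\Bigr\} = \mathbf{0},
$$

so the cross-terms between the two brackets of (5.21) have zero conditional expectation, while the squared first bracket contributes its conditional covariance,

$$
\mathrm{E}\Bigl\{\mathbf{S}_m(l)\bigl(\boldsymbol{\mu}_l - \mathbf{b}_l\bigr)\bigl(\boldsymbol{\mu}_l - \mathbf{b}_l\bigr)^H\mathbf{S}_m^T(l) \,\Big|\, \cdot\Bigr\} = \mathbf{S}_m(l)\,\mathbf{K}_l\,\mathbf{S}_m^T(l),
$$

which is the first term inside the trace in (5.22).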
Maximization: The maximization part of the EM algorithm produces updated estimates for $\alpha(\omega)$ and $\mathbf{Q}(\omega)$. The normalized expected surrogate log-likelihood (5.22) can be rewritten as
$$
-M\ln\pi - \ln\bigl|\mathbf{Q}(\omega)\bigr|
- \frac{1}{L}\sum_{l=0}^{L-1}\operatorname{tr}\Bigl\{\mathbf{Q}^{-1}(\omega)\Bigl[\boldsymbol{\Gamma}_l
+ \bigl(\mathbf{z}_l - \alpha(\omega)\mathbf{a}(\omega)e^{j\omega l}\bigr)\bigl(\mathbf{z}_l - \alpha(\omega)\mathbf{a}(\omega)e^{j\omega l}\bigr)^H\Bigr]\Bigr\},
\tag{5.23}
$$
where we have defined

$$
\boldsymbol{\Gamma}_l \triangleq \mathbf{S}_m(l)\,\mathbf{K}_l\,\mathbf{S}_m^T(l)
\tag{5.24}
$$

and

$$
\mathbf{z}_l \triangleq \mathbf{S}_g(l)\,\boldsymbol{\gamma}_l + \mathbf{S}_m(l)\,\mathbf{b}_l.
\tag{5.25}
$$
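To make the bookkeeping in (5.23)-(5.25) concrete, the following is a minimal numerical sketch (hypothetical NumPy code, not from the text) that evaluates the surrogate criterion (5.23) for a candidate pair $\alpha(\omega)$, $\mathbf{Q}(\omega)$, taking the per-snapshot quantities $\mathbf{z}_l$ and $\boldsymbol{\Gamma}_l$ as already computed:

    import numpy as np

    def surrogate_loglik(alpha, Q, a, omega, z, Gamma):
        """Evaluate the surrogate criterion (5.23).

        Hypothetical helper, not from the text.  Inputs:
          alpha : complex scalar, candidate amplitude alpha(omega)
          Q     : (M, M) candidate noise covariance Q(omega)
          a     : (M,) steering vector a(omega)
          omega : frequency in radians per sample
          z     : (L, M) array of z_l = S_g(l) gamma_l + S_m(l) b_l, eq. (5.25)
          Gamma : (L, M, M) array of Gamma_l = S_m(l) K_l S_m(l)^T, eq. (5.24)
        """
        L, M = z.shape
        Qinv = np.linalg.inv(Q)
        _, logdet = np.linalg.slogdet(Q)
        total = 0.0
        for l in range(L):
            # residual z_l - alpha(omega) a(omega) e^{j omega l}
            r = z[l] - alpha * a * np.exp(1j * omega * l)
            # tr{ Q^{-1}(omega) [ Gamma_l + r r^H ] }
            total += np.real(np.trace(Qinv @ (Gamma[l] + np.outer(r, r.conj()))))
        return -M * np.log(np.pi) - logdet - total / L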
According to the derivation in Chapter 4, maximizing (5.23) with respect to $\alpha(\omega)$ and $\mathbf{Q}(\omega)$ gives
$$
\hat{\alpha}(\omega) = \frac{\mathbf{a}^H(\omega)\,\mathbf{S}^{-1}(\omega)\,\mathbf{Z}(\omega)}{\mathbf{a}^H(\omega)\,\mathbf{S}^{-1}(\omega)\,\mathbf{a}(\omega)}
\tag{5.26}
$$
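The update (5.26) is simply a ratio of a bilinear and a quadratic form in $\mathbf{a}(\omega)$. A minimal sketch in the same spirit (hypothetical NumPy code; $\mathbf{S}(\omega)$ and $\mathbf{Z}(\omega)$ are supplied by the caller, with whatever definitions the Chapter 4 derivation gives them):

    import numpy as np

    def alpha_hat(a, S, Z):
        """Evaluate (5.26): a^H S^{-1} Z / (a^H S^{-1} a).

        Hypothetical helper, not from the text.  a is the M x 1 steering
        vector a(omega), S is the M x M matrix S(omega), and Z is the
        M x 1 vector Z(omega).
        """
        Sinv_Z = np.linalg.solve(S, Z)   # S^{-1}(omega) Z(omega)
        Sinv_a = np.linalg.solve(S, a)   # S^{-1}(omega) a(omega)
        return (a.conj() @ Sinv_Z) / (a.conj() @ Sinv_a)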