and

$$
\operatorname{cov}\left\{\mu \,\middle|\, \gamma, \hat{\theta}_{i-1}\right\} \triangleq K
= S_m^T D_{i-1}(\omega) S_m
- S_m^T D_{i-1}(\omega) S_g \left[ S_g^T D_{i-1}(\omega) S_g \right]^{-1} S_g^T D_{i-1}(\omega) S_m .
\tag{5.43}
$$
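Equation (5.43) is the standard conditioning formula for a jointly Gaussian vector: the covariance of the missing samples $\mu$ given the available samples $\gamma$ is the Schur complement of the observed block of $D_{i-1}(\omega)$. The following is a minimal NumPy sketch of this conditioning step; the function name and arguments are illustrative only (index sets stand in for the selection matrices $S_g$ and $S_m$, and `mean_full` stands in for the model mean at the previous iterate), not an interface from the text.

```python
import numpy as np

def conditional_moments(gamma, D_prev, mean_full, obs_idx, mis_idx):
    """Gaussian conditioning of the missing samples on the observed ones.

    gamma     : observed samples (gamma = S_g^T y)
    D_prev    : model covariance D_{i-1}(omega) of the full data vector
    mean_full : model mean of the full data vector at the previous iterate
    obs_idx   : indices selected by S_g;  mis_idx : indices selected by S_m
    Returns the conditional mean b and the conditional covariance K of (5.43).
    """
    D_mm = D_prev[np.ix_(mis_idx, mis_idx)]   # S_m^T D_{i-1} S_m
    D_mg = D_prev[np.ix_(mis_idx, obs_idx)]   # S_m^T D_{i-1} S_g
    D_gg = D_prev[np.ix_(obs_idx, obs_idx)]   # S_g^T D_{i-1} S_g

    # Solve against D_gg rather than forming its inverse explicitly.
    b = mean_full[mis_idx] + D_mg @ np.linalg.solve(D_gg, gamma - mean_full[obs_idx])
    K = D_mm - D_mg @ np.linalg.solve(D_gg, D_mg.conj().T)   # eq. (5.43)
    return b, K
```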
Expectation: Following the same steps as in (5.21) and (5.22), we obtain the
conditional expectation of the surrogate log-likelihood function in (5.40):
$$
\begin{split}
E&\left\{ \frac{1}{L} \ln p\left(\gamma, \mu \,\middle|\, \alpha(\omega), Q(\omega)\right) \,\middle|\, \gamma, \hat{\alpha}_{i-1}(\omega), \hat{Q}_{i-1}(\omega) \right\} \\
&= -M \ln \pi - \frac{1}{L} \ln \left| D(\omega) \right|
- \frac{1}{L} \left[ S_g \gamma + S_m b - \alpha(\omega) \rho(\omega) \right]^H D^{-1}(\omega) \left[ S_g \gamma + S_m b - \alpha(\omega) \rho(\omega) \right] \\
&\quad - \frac{1}{L} \operatorname{tr}\left[ D^{-1}(\omega)\, S_m K S_m^T \right] + C ,
\end{split}
\tag{5.44}
$$

where $C$ is a constant that does not depend on $\alpha(\omega)$ or $Q(\omega)$.
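For reference, the expected surrogate log-likelihood in (5.44) can be evaluated numerically once $b$ and $K$ are available. The sketch below is a direct transcription of (5.44) up to the additive constant $C$; all names are illustrative, and the data vector is assumed to stack $L$ snapshots of length $M$.

```python
import numpy as np

def expected_surrogate_loglik(D, z_full, alpha_rho, K, mis_idx, L):
    """Evaluate eq. (5.44) up to the additive constant C.

    D         : model covariance D(omega) of the full (M*L)-long data vector
    z_full    : completed data vector S_g gamma + S_m b
    alpha_rho : model mean alpha(omega) * rho(omega)
    K         : conditional covariance of the missing samples, eq. (5.43)
    mis_idx   : indices of the missing samples (columns of S_m)
    """
    M = D.shape[0] // L                       # snapshot length, assuming N = M*L
    r = z_full - alpha_rho                    # residual of the completed data
    quad = (r.conj() @ np.linalg.solve(D, r)).real   # r^H D^{-1}(omega) r
    # tr[D^{-1} S_m K S_m^T] = tr[(D^{-1})_{mm} K]: only the missing
    # rows/columns of D^{-1} contribute.
    D_inv = np.linalg.inv(D)
    tr_term = np.trace(D_inv[np.ix_(mis_idx, mis_idx)] @ K).real
    _, logdet = np.linalg.slogdet(D)
    return -M * np.log(np.pi) - logdet / L - quad / L - tr_term / L
```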
Maximization: To maximize the expected surrogate log-likelihood function in (5.44), we need to exploit the known structure of $D(\omega)$ and $\rho(\omega)$. Let
$$
S_g \gamma + S_m b \triangleq
\begin{bmatrix}
z_0 \\ \vdots \\ z_{L-1}
\end{bmatrix}
\tag{5.45}
$$
denote the data snapshots made up of the available and estimated data samples, where each $z_l$, $l = 0, \ldots, L-1$, is an $M \times 1$ vector. Also let $\Gamma_0, \ldots, \Gamma_{L-1}$ be the $M \times M$ blocks on the block diagonal of $S_m K S_m^T$. Then the expected surrogate log-likelihood function we need to maximize with respect to $\alpha(\omega)$ and $Q(\omega)$ becomes (to within an additive constant)
$$
-\ln \left| Q(\omega) \right|
- \frac{1}{L} \sum_{l=0}^{L-1} \operatorname{tr}\left\{ Q^{-1}(\omega) \left[ \left( z_l - \alpha(\omega) a(\omega) e^{j\omega l} \right) \left( z_l - \alpha(\omega) a(\omega) e^{j\omega l} \right)^H + \Gamma_l \right] \right\} .
\tag{5.46}
$$
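Forming the snapshots $z_l$ of (5.45) and the blocks $\Gamma_l$ is mechanical; here is a short sketch under the same stacking assumption as above (snapshot $l$ occupies rows $lM, \ldots, lM+M-1$ of the completed vector):

```python
import numpy as np

def snapshots_and_blocks(z_full, SmKSmT, M, L):
    """Split the completed vector of (5.45) into L snapshots z_l (each M x 1)
    and extract the M x M diagonal blocks Gamma_l of S_m K S_m^T
    (passed in here already embedded in the full N x N frame)."""
    z = z_full.reshape(L, M)                                   # row l is z_l
    Gamma = [SmKSmT[l*M:(l+1)*M, l*M:(l+1)*M] for l in range(L)]
    return z, Gamma
```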
The solution can be readily obtained by a derivation similar to that in Section 5.3:
$$
\hat{\alpha}(\omega) = \frac{a^H(\omega)\, S^{-1}(\omega)\, Z(\omega)}{a^H(\omega)\, S^{-1}(\omega)\, a(\omega)} .
\tag{5.47}
$$
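As a final illustration, (5.47) can be applied directly once $a(\omega)$, $S(\omega)$, and $Z(\omega)$ are in hand; since $S(\omega)$ and $Z(\omega)$ are defined in Section 5.3, which is not part of this excerpt, the sketch below simply takes them as inputs and assumes $S(\omega)$ is Hermitian positive definite.

```python
import numpy as np

def alpha_hat(a, S, Z):
    """Closed-form amplitude estimate of eq. (5.47).

    a : steering vector a(omega), length M
    S : the matrix S(omega) from Section 5.3 (assumed Hermitian,
        so that (S^{-1} a)^H = a^H S^{-1})
    Z : the vector Z(omega) from Section 5.3
    """
    w = np.linalg.solve(S, a)                 # S^{-1}(omega) a(omega)
    return (w.conj() @ Z) / (w.conj() @ a)    # eq. (5.47)
```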