$$
= -M_1 M_2 \ln \pi \;-\; \ln\bigl|\mathbf{Q}(\omega_1,\omega_2)\bigr|
\;-\; \operatorname{tr}\Biggl\{ \mathbf{Q}^{-1}(\omega_1,\omega_2)\,
\frac{1}{L_1 L_2} \sum_{l_1=0}^{L_1-1} \sum_{l_2=0}^{L_2-1}
\bigl[\mathbf{y}_{l_1,l_2} - \alpha(\omega_1,\omega_2)\,\mathbf{a}(\omega_1,\omega_2)\,e^{j(\omega_1 l_1 + \omega_2 l_2)}\bigr]
\bigl[\mathbf{y}_{l_1,l_2} - \alpha(\omega_1,\omega_2)\,\mathbf{a}(\omega_1,\omega_2)\,e^{j(\omega_1 l_1 + \omega_2 l_2)}\bigr]^H \Biggr\}.
\tag{6.9}
$$
Just as in the 1-D case, the maximization of the above surrogate likelihood
function gives the APES estimator
$$
\hat{\alpha}(\omega_1,\omega_2) \;=\; \frac{\mathbf{a}^H(\omega_1,\omega_2)\,\mathbf{S}^{-1}(\omega_1,\omega_2)\,\mathbf{g}(\omega_1,\omega_2)}{\mathbf{a}^H(\omega_1,\omega_2)\,\mathbf{S}^{-1}(\omega_1,\omega_2)\,\mathbf{a}(\omega_1,\omega_2)}
\tag{6.10}
$$
and
$$
\mathbf{Q}(\omega_1,\omega_2) \;=\; \mathbf{S}(\omega_1,\omega_2) \;+\; \bigl[\hat{\alpha}_{\mathrm{ML}}(\omega_1,\omega_2)\,\mathbf{a}(\omega_1,\omega_2) - \mathbf{g}(\omega_1,\omega_2)\bigr]
\bigl[\hat{\alpha}_{\mathrm{ML}}(\omega_1,\omega_2)\,\mathbf{a}(\omega_1,\omega_2) - \mathbf{g}(\omega_1,\omega_2)\bigr]^H,
\tag{6.11}
$$
where $\mathbf{R}$, $\mathbf{g}(\omega_1,\omega_2)$, and $\mathbf{S}(\omega_1,\omega_2)$ are as defined in Section 3.3.1.
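As a concrete illustration, (6.10) is a ratio of two Hermitian quadratic forms and can be evaluated with two linear solves once $\mathbf{S}$, $\mathbf{g}$, and $\mathbf{a}$ are in hand. The sketch below is not the book's code: the function names and the Kronecker construction of the 2-D steering vector are our own assumptions, and $\mathbf{S}$ and $\mathbf{g}$ are taken as given inputs (they would come from the data as in Section 3.3.1).

```python
import numpy as np

def steering_2d(w1, w2, M1, M2):
    """Hypothetical 2-D steering vector a(w1, w2), built here as the
    Kronecker product of two 1-D steering vectors so that element
    (l1*M2 + l2) equals exp(j*(w1*l1 + w2*l2))."""
    a1 = np.exp(1j * w1 * np.arange(M1))
    a2 = np.exp(1j * w2 * np.arange(M2))
    return np.kron(a1, a2)

def apes_amplitude(S, g, a):
    """Evaluate (6.10): a^H S^{-1} g / (a^H S^{-1} a).
    Uses linear solves instead of forming S^{-1} explicitly."""
    Sinv_g = np.linalg.solve(S, g)
    Sinv_a = np.linalg.solve(S, a)
    return (a.conj() @ Sinv_g) / (a.conj() @ Sinv_a)
```

As a sanity check on the formula, if $\mathbf{S} = \mathbf{I}$ and $\mathbf{g} = \alpha\,\mathbf{a}$, the ratio collapses to $\alpha$ exactly, independent of $(\omega_1,\omega_2)$.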
6.3 TWO-DIMENSIONAL MAPES VIA EM
Assume that some arbitrary elements of the data matrix $\mathbf{Y}$ are missing. Because of
these missing data samples, which can be treated as unknowns, the log-likelihood
function (6.8) cannot be maximized directly. In this section, we will show how
to tackle this missing-data problem, in the ML context, using the EM and CM
algorithms. A comparison of these two approaches is also provided.
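To make the E-step/M-step alternation concrete before specializing it to MAPES, consider a deliberately simplified toy problem: i.i.d. real Gaussian snapshots whose mean and covariance are re-estimated after conditionally imputing the missing entries. This sketch is our own illustration, not the MAPES algorithm (the book's data are complex-valued, and a full EM M-step would also carry a conditional-covariance correction term, omitted here for brevity).

```python
import numpy as np

def em_gaussian_missing(Y, mask, n_iter=50):
    """Toy EM-style iteration for i.i.d. real Gaussian snapshots (rows of Y)
    with missing entries where mask is False. Illustrative only.
    E-step: replace each missing block with its conditional mean given the
            observed block and the current (mu, C).
    M-step: re-estimate mu and C from the completed data."""
    Y = Y.copy()
    n, d = Y.shape
    # Initialize with per-column means over the observed entries.
    mu = np.array([Y[mask[:, j], j].mean() for j in range(d)])
    for j in range(d):
        Y[~mask[:, j], j] = mu[j]
    C = np.cov(Y, rowvar=False) + 1e-6 * np.eye(d)  # small ridge for stability
    for _ in range(n_iter):
        # E-step: conditional mean of missing entries, row by row.
        for i in range(n):
            m = ~mask[i]
            if not m.any():
                continue
            o = mask[i]
            Coo = C[np.ix_(o, o)]
            Cmo = C[np.ix_(m, o)]
            Y[i, m] = mu[m] + Cmo @ np.linalg.solve(Coo, Y[i, o] - mu[o])
        # M-step: parameter update from the completed data.
        mu = Y.mean(axis=0)
        C = np.cov(Y, rowvar=False) + 1e-6 * np.eye(d)
    return mu, C
```

The MAPES-EM algorithms developed below follow the same pattern, except that the "parameters" are the spectral quantities of the APES formulation and the imputation is carried out per data snapshot.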
6.3.1 Two-Dimensional MAPES-EM1
We assume that the data snapshots $\{\mathbf{Y}(l_1, l_2)\}$ (or $\{\mathbf{y}_{l_1,l_2}\}$) are independent of each other, and we estimate the missing data separately for different data snapshots. For each data snapshot $\mathbf{y}_{l_1,l_2}$, let $\boldsymbol{\gamma}_{l_1,l_2}$ and $\boldsymbol{\mu}_{l_1,l_2}$ denote the vectors containing the available and missing elements of $\mathbf{y}_{l_1,l_2}$, respectively. Assume that $\boldsymbol{\gamma}_{l_1,l_2}$ has dimension $g_{l_1,l_2} \times 1$, where $1 \le g_{l_1,l_2} \le M_1 M_2$ is the number of available elements in the