where $L = N - M + 1$. Then, according to the data model in (2.1), the $l$th data snapshot $\mathbf{y}_l$ can be written as

$$ \mathbf{y}_l = \alpha(\omega)\,\mathbf{a}(\omega)\, e^{j\omega l} + \mathbf{e}_l(\omega), \tag{4.3} $$
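As a concrete illustration (not part of the original derivation), the snapshot model (4.3) can be checked numerically with a short NumPy sketch; the helper names `steering_vector` and `snapshots` and all parameter values below are hypothetical:

```python
import numpy as np

def steering_vector(omega, M):
    """Hypothetical stand-in for the M x 1 vector a(omega) of (2.5):
    a(omega) = [1, e^{j omega}, ..., e^{j omega (M-1)}]^T."""
    return np.exp(1j * omega * np.arange(M))

def snapshots(y, M):
    """Form the L = N - M + 1 overlapping length-M snapshots
    y_l = [y_l, y_{l+1}, ..., y_{l+M-1}]^T from a length-N sequence."""
    L = len(y) - M + 1
    return np.stack([y[l:l + M] for l in range(L)])  # shape (L, M)

# Data following the single-sinusoid model: y_n = alpha e^{j omega n} + noise.
rng = np.random.default_rng(0)
N, M, omega, alpha = 64, 8, 0.9, 2.0 + 1.0j
n = np.arange(N)
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = alpha * np.exp(1j * omega * n) + noise

Y = snapshots(y, M)            # rows are y_0, ..., y_{L-1}
a = steering_vector(omega, M)

# For any l, y_l - alpha a(omega) e^{j omega l} recovers exactly the noise
# snapshot e_l(omega) of (4.3), so the residual stays at the noise level.
l = 5
residual = Y[l] - alpha * a * np.exp(1j * omega * l)
```

Because $y_{l+m} = \alpha e^{j\omega m} e^{j\omega l} + \text{noise}$ and the $m$th entry of $\mathbf{a}(\omega)$ is $e^{j\omega m}$, the residual above equals the noise snapshot $\mathbf{e}_l(\omega)$.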
where $\mathbf{a}(\omega)$ is an $M \times 1$ vector given by (2.5) and $\mathbf{e}_l(\omega) = [\, e_l(\omega) \;\; e_{l+1}(\omega) \;\; \cdots \;\; e_{l+M-1}(\omega) \,]^T$. The APES algorithm mimics an ML approach to estimate $\alpha(\omega)$ by assuming that $\mathbf{e}_l(\omega)$, $l = 0, 1, \ldots, L-1$, are zero-mean circularly symmetric complex Gaussian random vectors that are statistically independent of each other and have the same unknown covariance matrix

$$ \mathbf{Q}(\omega) = E\left\{ \mathbf{e}_l(\omega)\, \mathbf{e}_l^H(\omega) \right\}. \tag{4.4} $$
Then the covariance matrix of $\mathbf{y}_l$ can be written as

$$ \mathbf{R}(\omega) = |\alpha(\omega)|^2\, \mathbf{a}(\omega)\, \mathbf{a}^H(\omega) + \mathbf{Q}(\omega). \tag{4.5} $$
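The structure of (4.5) can be sanity-checked by Monte Carlo: when the noise is drawn independently across trials with covariance $\mathbf{Q} = \sigma^2 \mathbf{I}$, the average of $\mathbf{y}_l \mathbf{y}_l^H$ converges to $|\alpha|^2 \mathbf{a}\mathbf{a}^H + \mathbf{Q}$. This is an illustrative sketch only; all parameter values are made up:

```python
import numpy as np

# Monte Carlo check of (4.5) for one snapshot index l: the average of
# y_l y_l^H over independent noise draws approaches
# |alpha|^2 a(omega) a^H(omega) + Q(omega), here with Q = sigma^2 I.
rng = np.random.default_rng(1)
M, omega, alpha, sigma, trials, l = 6, 0.7, 1.5 - 0.5j, 0.3, 20000, 3
a = np.exp(1j * omega * np.arange(M))            # a(omega), as in (2.5)

# Circularly symmetric complex Gaussian noise with E[e e^H] = sigma^2 I.
E = sigma / np.sqrt(2) * (rng.standard_normal((trials, M))
                          + 1j * rng.standard_normal((trials, M)))
Y = alpha * a * np.exp(1j * omega * l) + E       # independent copies of y_l

R_hat = Y.T @ Y.conj() / trials                  # sample average of y_l y_l^H
R_theory = abs(alpha) ** 2 * np.outer(a, a.conj()) + sigma ** 2 * np.eye(M)
```

Note that the deterministic phase factor $e^{j\omega l}$ cancels in $\mathbf{y}_l \mathbf{y}_l^H$, which is why (4.5) does not depend on $l$.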
Since the vectors $\{\mathbf{e}_l(\omega)\}_{l=0}^{L-1}$ in our case are overlapping, they are not statistically independent of each other. Consequently, APES is not an exact ML estimator. Using the above assumptions, we get the normalized surrogate log-likelihood function of the data snapshots $\{\mathbf{y}_l\}$ as follows:
$$ \frac{1}{L} \ln p\left( \{\mathbf{y}_l\} \,\middle|\, \alpha(\omega), \mathbf{Q}(\omega) \right) = -M \ln \pi - \ln \left| \mathbf{Q}(\omega) \right| - \frac{1}{L} \sum_{l=0}^{L-1} \left[ \mathbf{y}_l - \alpha(\omega)\, \mathbf{a}(\omega)\, e^{j\omega l} \right]^H \mathbf{Q}^{-1}(\omega) \left[ \mathbf{y}_l - \alpha(\omega)\, \mathbf{a}(\omega)\, e^{j\omega l} \right] \tag{4.6} $$
$$ = -M \ln \pi - \ln \left| \mathbf{Q}(\omega) \right| - \operatorname{tr}\left\{ \mathbf{Q}^{-1}(\omega)\, \frac{1}{L} \sum_{l=0}^{L-1} \left[ \mathbf{y}_l - \alpha(\omega)\, \mathbf{a}(\omega)\, e^{j\omega l} \right] \left[ \mathbf{y}_l - \alpha(\omega)\, \mathbf{a}(\omega)\, e^{j\omega l} \right]^H \right\}, \tag{4.7} $$
where $\operatorname{tr}\{\cdot\}$ and $|\cdot|$ denote the trace and the determinant of a matrix, respectively.
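The step from (4.6) to (4.7) uses the identity $\mathbf{u}^H \mathbf{Q}^{-1} \mathbf{u} = \operatorname{tr}\{ \mathbf{Q}^{-1} \mathbf{u}\mathbf{u}^H \}$ together with the linearity of the trace. A quick numerical confirmation of the identity (illustrative values only):

```python
import numpy as np

# Verify u^H Q^{-1} u = tr{ Q^{-1} u u^H } for a random Hermitian
# positive-definite Q and a random complex vector u.
rng = np.random.default_rng(2)
M = 5
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Q = A @ A.conj().T + M * np.eye(M)                   # Hermitian positive definite
u = rng.standard_normal(M) + 1j * rng.standard_normal(M)

Qinv = np.linalg.inv(Q)
quadratic_form = u.conj() @ Qinv @ u                 # u^H Q^{-1} u
trace_form = np.trace(Qinv @ np.outer(u, u.conj()))  # tr{ Q^{-1} u u^H }
```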
For any given $\alpha(\omega)$, maximizing (4.7) with respect to $\mathbf{Q}(\omega)$ gives

$$ \hat{\mathbf{Q}}_{\alpha}(\omega) = \frac{1}{L} \sum_{l=0}^{L-1} \left[ \mathbf{y}_l - \alpha(\omega)\, \mathbf{a}(\omega)\, e^{j\omega l} \right] \left[ \mathbf{y}_l - \alpha(\omega)\, \mathbf{a}(\omega)\, e^{j\omega l} \right]^H. \tag{4.8} $$
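For fixed $\alpha(\omega)$, the maximizer of (4.7) over $\mathbf{Q}$ is the sample covariance of the model residuals, by the standard Gaussian ML result that $-\ln|\mathbf{Q}| - \operatorname{tr}\{\mathbf{Q}^{-1}\mathbf{S}\}$ is maximized at $\mathbf{Q} = \mathbf{S}$. A minimal sketch of this estimate, with hypothetical names and synthetic data:

```python
import numpy as np

def Q_hat(Y, a, alpha, omega):
    """Sample covariance of the residuals r_l = y_l - alpha a(omega) e^{j omega l},
    i.e. the maximizing covariance of (4.8). Y holds the snapshots y_l as rows."""
    L = Y.shape[0]
    ell = np.arange(L)
    R = Y - alpha * np.outer(np.exp(1j * omega * ell), a)  # residuals r_l as rows
    return R.T @ R.conj() / L                              # (1/L) sum_l r_l r_l^H

# On noise-free data the residuals vanish, so Q_hat must be the zero matrix.
M, omega, alpha, L = 4, 0.6, 1.0 + 2.0j, 10
a = np.exp(1j * omega * np.arange(M))
Y = alpha * np.outer(np.exp(1j * omega * np.arange(L)), a)  # exact model, e_l = 0
Q0 = Q_hat(Y, a, alpha, omega)
```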