7.4. Link with a maximum likelihood estimator
By comparing the form of the equations, Capon [CAP 69] and Lacoss [LAC 71] concluded that the minimum variance method was equivalent to a maximum likelihood method. As early as 1977, Lacoss acknowledged the "strange" use of this terminology, but wrote that it was "too late" to change it [LAC 77]. Eleven years later, Kay examined the difference between these two estimators again [KAY 88].
Let us consider the simple case where the signal x(k) is a complex exponential at the frequency f_exp, embedded in additive zero-mean complex white Gaussian noise b(k) of variance σ². Let C_c be the complex amplitude of this exponential; the signal is then described in the following vector form:
$$X = C_c\, E_{f_{\mathrm{exp}}} + B \qquad [7.30]$$

where X, E_{f_exp} and B are defined in equation [7.7].
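As an illustration, here is a minimal numerical sketch of the signal model [7.30], assuming NumPy; the sample size N, the frequency f_exp = 0.2, the amplitude C_c and the noise variance σ² = 0.5 are arbitrary choices for the example, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64           # number of samples (illustrative choice)
f_exp = 0.2      # normalized frequency of the exponential (illustrative)
C_c = 2.0 * np.exp(1j * np.pi / 4)   # complex amplitude (illustrative)
sigma2 = 0.5     # noise variance (illustrative)

k = np.arange(N)
# Vector E_{f_exp}: the sampled complex exponential at frequency f_exp
E = np.exp(2j * np.pi * f_exp * k)

# Zero-mean complex white Gaussian noise of variance sigma2
# (variance split equally between real and imaginary parts)
B = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Signal model [7.30]: X = C_c * E_{f_exp} + B
X = C_c * E + B
```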
If the frequency f_exp is assumed to be known, an estimate Ĉ_c of the complex amplitude C_c can be computed by the maximum likelihood method. Because the noise b(k) is Gaussian, maximizing the probability density of X amounts to minimizing the quantity

$$\left(X - C_c\, E_{f_{\mathrm{exp}}}\right)^H R_b^{-1} \left(X - C_c\, E_{f_{\mathrm{exp}}}\right)$$
with R_b the noise correlation matrix. The minimization leads to the following result [KAY 88]:
$$\hat{C}_c = H^H X \quad \text{with} \quad H = \frac{R_b^{-1} E_{f_{\mathrm{exp}}}}{E_{f_{\mathrm{exp}}}^H R_b^{-1} E_{f_{\mathrm{exp}}}} \qquad [7.31]$$
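Continuing the sketch above, the estimator [7.31] takes a few lines of NumPy. For white noise R_b is simply σ²I, but the expression is kept general so that any positive definite noise correlation matrix could be substituted:

```python
# Noise correlation matrix R_b; white noise here, but the estimator
# below is valid for any positive definite R_b.
R_b = sigma2 * np.eye(N)
R_b_inv = np.linalg.inv(R_b)

# Filter H of equation [7.31]
H = R_b_inv @ E / (E.conj() @ R_b_inv @ E)

# Maximum likelihood estimate of the complex amplitude: C_hat = H^H X
C_c_hat = H.conj() @ X
print(C_c_hat, C_c)   # the estimate should be close to the true amplitude
```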
The bias and the variance of Ĉ_c are written:
$$E\!\left(\hat{C}_c\right) = H^H E_{f_{\mathrm{exp}}}\, C_c = C_c \qquad [7.32]$$

$$\operatorname{var}\!\left(\hat{C}_c\right) = E\!\left(\left|\hat{C}_c - E\!\left(\hat{C}_c\right)\right|^2\right) = H^H R_b H = \frac{1}{E_{f_{\mathrm{exp}}}^H R_b^{-1} E_{f_{\mathrm{exp}}}} \qquad [7.33]$$
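A quick Monte Carlo check of the unbiasedness [7.32] and the variance [7.33], again continuing the snippet above (the trial count is an arbitrary choice; for white noise the theoretical variance reduces to σ²/N):

```python
# Monte Carlo check of the bias [7.32] and variance [7.33]
n_trials = 10000
estimates = np.empty(n_trials, dtype=complex)
for t in range(n_trials):
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N)
                                   + 1j * rng.standard_normal(N))
    estimates[t] = H.conj() @ (C_c * E + noise)

print("empirical mean:   ", estimates.mean())   # should approach C_c (no bias)
print("empirical var:    ", estimates.var())    # should approach the value below
print("theoretical var:  ", 1 / np.real(E.conj() @ R_b_inv @ E))  # = sigma2/N here
```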
The maximum likelihood estimator can thus be interpreted as the output of a filter whose input is the signal x(k) and whose impulse response is H. Another approach leads to the same expression for the estimator Ĉ_c: minimizing the filter output variance, given by the first part of equation [7.33], while constraining the filter response at the frequency f_exp to be equal to 1.
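For completeness, here is a sketch of that constrained minimization (a standard Lagrange-multiplier argument, not spelled out in this excerpt):

$$\min_{H}\; H^H R_b H \quad \text{subject to} \quad H^H E_{f_{\mathrm{exp}}} = 1$$

Setting the gradient of the Lagrangian $H^H R_b H + \lambda\,(1 - H^H E_{f_{\mathrm{exp}}})$ with respect to $H^H$ to zero gives $R_b H = \lambda E_{f_{\mathrm{exp}}}$, i.e. $H = \lambda R_b^{-1} E_{f_{\mathrm{exp}}}$; the constraint then fixes $\lambda = 1 / (E_{f_{\mathrm{exp}}}^H R_b^{-1} E_{f_{\mathrm{exp}}})$, which recovers exactly the filter H of equation [7.31].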