If f(.) is not reversible, then $\hat{\beta}^{ML}_N$ is obtained as:

$$\hat{\beta}^{ML}_N = \arg\max_{\beta}\; p_N(x_N;\beta)$$

where

$$p_N(x_N;\beta) = \max_{\theta\,:\,\beta = f(\theta)}\; p_N(x_N;\theta)$$
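As a minimal numerical sketch, suppose for illustration that $x_1,\dots,x_N$ are i.i.d. $\mathcal{N}(\theta,1)$ and that $\beta = f(\theta) = \theta^2$, which is not reversible (θ and −θ give the same β). Maximizing $p_N(x_N;\beta)$ as defined above over a grid of β values and simply evaluating $f$ at the usual ML estimate of θ (the sample mean) produce the same result:

```python
import numpy as np

# Illustrative assumptions: x_1..x_N i.i.d. N(theta, 1) and beta = f(theta) = theta**2,
# which is not reversible (theta and -theta give the same beta).
rng = np.random.default_rng(0)
theta_true = 1.3
N = 500
x = rng.normal(theta_true, 1.0, size=N)

def log_lik(theta):
    # Gaussian log-likelihood with unit variance (constants dropped)
    return -0.5 * np.sum((x - theta) ** 2)

def profile_log_lik(beta):
    # maximize the likelihood over all theta such that f(theta) = beta
    root = np.sqrt(beta)
    return max(log_lik(root), log_lik(-root))

# direct maximization of p_N(x_N; beta) over a grid of beta values
betas = np.linspace(0.0, 5.0, 20001)
beta_hat_direct = betas[np.argmax([profile_log_lik(b) for b in betas])]

# plug-in estimate: beta_hat = f(theta_hat), theta_hat being the sample mean here
beta_hat_plugin = x.mean() ** 2

print(beta_hat_direct, beta_hat_plugin)   # the two values coincide up to the grid step
```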
Theorem 3.2 is a direct consequence of theorem 3.1; it stipulates that the ML estimator achieves the Cramér-Rao bounds (∀ N) whenever an efficient estimator exists. For example, this is the case when the signal follows a linear model $x_N = H_N\theta + b_N$, where $b_N$ is a random Gaussian vector of known covariance matrix. Moreover, theorem 3.3 shows that the ML estimator is asymptotically optimal. For example, in the case of $d(N) = \sqrt{N}$, this signifies that:
$$\sqrt{N}\,\bigl(\hat{\theta}^{ML}_N - \theta_0\bigr) \;\xrightarrow{\;dist\;}\; \mathcal{N}\bigl(0,\, F^{-1}(\theta_0)\bigr)$$
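As a minimal Monte Carlo sketch of this convergence, suppose for illustration that $x_1,\dots,x_N$ are i.i.d. exponential with rate $\theta_0$; then $\hat{\theta}_N = 1/\bar{x}_N$ and the per-sample Fisher information is $F(\theta) = 1/\theta^2$, so the empirical variance of $\sqrt{N}(\hat{\theta}_N - \theta_0)$ should approach $\theta_0^2$:

```python
import numpy as np

# Illustrative assumptions: x_1..x_N i.i.d. exponential with rate theta0.
# ML estimator: theta_hat = 1/mean(x); per-sample Fisher information: 1/theta**2,
# so sqrt(N)*(theta_hat - theta0) should be close to N(0, theta0**2) for large N.
rng = np.random.default_rng(1)
theta0 = 2.0
N = 2000
trials = 5000

errors = np.empty(trials)
for k in range(trials):
    x = rng.exponential(scale=1.0 / theta0, size=N)   # mean of the law is 1/theta0
    theta_hat = 1.0 / x.mean()                        # ML estimator of the rate
    errors[k] = np.sqrt(N) * (theta_hat - theta0)

print("empirical variance:", errors.var())            # should approach theta0**2 = 4
print("F^{-1}(theta0)    :", theta0 ** 2)
```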
Finally, theorem 3.4 makes it possible to obtain the maximum likelihood estimator of a function of θ. This theorem can also be applied when it is difficult to find or directly implement $\hat{\theta}^{ML}_N$, but it turns out to be simpler to obtain $\hat{\beta}^{ML}_N$.

To sum up, the ML estimator possesses optimality properties for finite N as well as asymptotically. It can be implemented for a wide range of problems; in particular, it is systematically used in the case of deterministic signals buried in additive Gaussian noise of known covariance matrix. In the latter case, the ML estimator corresponds to a non-linear least squares estimator. In fact, let us suppose that the signal is distributed as:
$$x_N \sim \mathcal{N}\bigl(s_N(\theta),\, R_N\bigr)$$
with $R_N$ known. Then, up to an additive constant independent of θ, the log-likelihood may be written as:
$$L_N(x_N;\theta) = -\tfrac{1}{2}\,\bigl(x_N - s_N(\theta)\bigr)^T R_N^{-1}\,\bigl(x_N - s_N(\theta)\bigr)$$
As a result, the maximum likelihood estimator of θ takes the following form:
$$\hat{\theta}^{ML}_N = \arg\min_{\theta}\; \bigl(x_N - s_N(\theta)\bigr)^T R_N^{-1}\,\bigl(x_N - s_N(\theta)\bigr)$$
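As a minimal numerical sketch of this minimization, suppose for illustration a scalar frequency parameter with model $s_N(\theta)[n] = \cos(\theta n)$ and known covariance $R_N = \sigma^2 I$; the criterion can then be minimized by a coarse grid search followed by a bounded local refinement (such cost functions typically have many local minima in θ):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative assumptions: scalar parameter theta = angular frequency of a cosine,
# s_N(theta)[n] = cos(theta*n), observed in Gaussian noise with known R_N = sigma2*I.
rng = np.random.default_rng(2)
N = 200
n = np.arange(N)
theta_true = 0.7
sigma2 = 0.5
R_inv = np.eye(N) / sigma2                      # R_N^{-1}, known by assumption

x = np.cos(theta_true * n) + rng.normal(0.0, np.sqrt(sigma2), size=N)

def s(theta):
    return np.cos(theta * n)

def criterion(theta):
    # (x_N - s_N(theta))^T R_N^{-1} (x_N - s_N(theta))
    r = x - s(theta)
    return r @ R_inv @ r

# coarse grid search to locate the global minimum, then a bounded local refinement
grid = np.linspace(0.01, np.pi - 0.01, 2000)
theta0 = grid[np.argmin([criterion(t) for t in grid])]
res = minimize_scalar(criterion, bounds=(theta0 - 0.01, theta0 + 0.01), method="bounded")
print("theta_hat_ML =", res.x)                  # close to theta_true = 0.7
```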
This minimization corresponds to the best approximation, in the least squares sense, of the data $x_N$ by the model $s_N(\theta)$. In this case we end up with a relatively simple expression. On the contrary, when the covariance matrix is unknown and depends on θ, that is to
 