and the derivative is evaluated at the true parameter θ. Furthermore, we can find an estimator which reaches the bound, that is to say $C_{\hat{\theta}} = F_N^{-1}(\theta)$, if and only if:

$$\frac{\partial \ln p(x_N;\theta)}{\partial \theta} = F_N(\theta)\,\big(g(x_N) - \theta\big) \qquad [3.5]$$

where g is a vector function. This estimator, which is the unbiased estimator with minimum variance, is given by:

$$\hat{\theta} = g(x_N) \qquad [3.6]$$

and its covariance matrix is then $F_N^{-1}(\theta)$. This estimator is called efficient.
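As an illustration (not taken from the text), consider N i.i.d. samples $x_i \sim \mathcal{N}(\theta, \sigma^2)$ with $\sigma^2$ known. The score factors as $\partial \ln p(x_N;\theta)/\partial \theta = (N/\sigma^2)(\bar{x} - \theta)$, so condition [3.5] holds with $F_N(\theta) = N/\sigma^2$ and $g(x_N) = \bar{x}$: the sample mean is the efficient estimator [3.6], with variance equal to the bound $\sigma^2/N$. The following Python sketch checks this numerically; all numerical values are illustrative choices.

```python
# Illustrative sketch: the factorization condition [3.5] for i.i.d. Gaussian
# samples with known variance.  The score is (N / sigma^2) * (mean(x) - theta),
# so g(x_N) = mean(x) and the sample mean attains the bound sigma^2 / N.
import numpy as np

rng = np.random.default_rng(0)
theta_true, sigma2, N, trials = 2.0, 4.0, 100, 20_000

# g(x_N) = sample mean, computed over many independent realizations
estimates = rng.normal(theta_true, np.sqrt(sigma2), size=(trials, N)).mean(axis=1)

crb = sigma2 / N                       # F_N(theta)^{-1} = sigma^2 / N
print(f"empirical variance : {estimates.var():.5f}")
print(f"Cramer-Rao bound   : {crb:.5f}")   # the two should agree closely
```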
COROLLARY 3.1. Suppose we want to estimate β such that β = f(θ) from $x_N$. Then, we have:

$$C_{\hat{\beta}} - \frac{\partial \beta}{\partial \theta^T}\, F_N^{-1}(\theta)\, \left(\frac{\partial \beta}{\partial \theta^T}\right)^T \geq 0$$

where $\partial \beta / \partial \theta^T$ is the Jacobian matrix whose $(i,k)$ element is $\partial \beta_i / \partial \theta_k$. If $\hat{\theta}_N$ is an efficient estimator and if the transformation f(θ) is linear, then $\hat{\beta}_N = f(\hat{\theta}_N)$ is also efficient. However, if the transformation is non-linear, the efficiency is generally not retained.
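To make the corollary concrete, here is a small Python sketch; the quadratic transformation $f(\theta) = \theta^2$ and the numerical values are illustrative assumptions, not taken from the text. θ is estimated by the efficient sample mean of N i.i.d. $\mathcal{N}(\theta, \sigma^2)$ samples, and β is estimated by plug-in, $\hat{\beta} = \hat{\theta}^2$. The Jacobian-transformed bound is $(2\theta)^2 \sigma^2 / N$; because f is non-linear, the plug-in estimator does not attain it at finite N, but the gap closes as N grows.

```python
# Illustrative sketch of Corollary 3.1 with the non-linear map beta = theta^2.
# The sample mean of N i.i.d. N(theta, sigma^2) samples is exactly
# N(theta, sigma^2 / N), so we sample its distribution directly.
import numpy as np

rng = np.random.default_rng(1)
theta_true, sigma2, trials = 1.0, 1.0, 200_000

for N in (5, 50, 500):
    # sampling distribution of the efficient estimator theta_hat (sample mean)
    theta_hat = rng.normal(theta_true, np.sqrt(sigma2 / N), size=trials)
    beta_hat = theta_hat ** 2                      # plug-in f(theta_hat), non-linear
    bound = (2 * theta_true) ** 2 * sigma2 / N     # Jacobian-transformed CRB
    print(f"N={N:4d}  var(beta_hat)={beta_hat.var():.5f}  transformed bound={bound:.5f}")
```

The printed variance exceeds the transformed bound for small N and approaches it as N increases, illustrating that efficiency is lost under a non-linear transformation but recovered asymptotically.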
The previous theorem indicates that, under a “generally verified” regularity
condition, there exists a lower bound for the covariance matrix. Furthermore, it
provides a necessary and sufficient condition for the existence of an efficient
estimator (thus with minimum variance). Nevertheless, this factorization condition is rarely fulfilled, and most often the Cramér-Rao bound cannot be reached except asymptotically, that is to say when the number of observations tends towards infinity.
To illustrate the use of the previous theorem, we now consider the case of Gaussian signals. In numerous applications, the signal is distributed according to a normal law:

$$x_N \sim \mathcal{N}\big(s(\theta),\, R_N(\theta)\big)$$
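As a minimal sketch of this observation model (the particular mean $s(\theta)$ and covariance $R_N(\theta)$ below are hypothetical choices, not from the text), the following Python code draws one realization of $x_N \sim \mathcal{N}(s(\theta), R_N(\theta))$ with a sinusoidal mean of amplitude θ and a white-noise covariance $\sigma^2 I$.

```python
# Hypothetical instance of the Gaussian observation model x_N ~ N(s(theta), R_N(theta)).
import numpy as np

rng = np.random.default_rng(2)
N, theta, sigma2, f0 = 64, 1.5, 0.5, 0.1

n = np.arange(N)
s = theta * np.cos(2 * np.pi * f0 * n)   # assumed mean s(theta): sinusoid of amplitude theta
R = sigma2 * np.eye(N)                   # assumed covariance R_N(theta) = sigma^2 * I

x = rng.multivariate_normal(s, R)        # one realization of x_N
print(x[:5])
```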
 