Digital Signal Processing Reference
that is to say:

$$
p(\mathbf{x};\theta) = \frac{1}{(2\pi)^{N/2}\,\big|\mathbf{R}_N(\theta)\big|^{1/2}}
\exp\left\{ -\frac{1}{2}\,\big[\mathbf{x}-\mathbf{s}_N(\theta)\big]^T \mathbf{R}_N^{-1}(\theta)\,\big[\mathbf{x}-\mathbf{s}_N(\theta)\big] \right\}
$$
with $\big|\mathbf{R}_N(\theta)\big|$ denoting the determinant of the matrix $\mathbf{R}_N(\theta)$. $\mathbf{s}_N(\theta)$ denotes the mean (which can be a useful deterministic signal) and $\mathbf{R}_N(\theta)$ the covariance matrix. Thus,
by applying theorem 3.1, we can obtain the following simplified expression:
$$
F(\theta)_{ij} = \left[\frac{\partial \mathbf{s}_N(\theta)}{\partial \theta_i}\right]^T \mathbf{R}_N^{-1}(\theta) \left[\frac{\partial \mathbf{s}_N(\theta)}{\partial \theta_j}\right]
+ \frac{1}{2}\,\mathrm{Tr}\left\{ \mathbf{R}_N^{-1}(\theta)\,\frac{\partial \mathbf{R}_N(\theta)}{\partial \theta_i}\; \mathbf{R}_N^{-1}(\theta)\,\frac{\partial \mathbf{R}_N(\theta)}{\partial \theta_j} \right\}
$$
where $\mathrm{Tr}\{\mathbf{A}\} = \sum_{k=1}^{N} A_{kk}$ stands for the trace of the matrix $\mathbf{A}$.
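As a quick numerical sketch of this expression, the mean term and the trace term can be evaluated directly for a scalar parameter. The model below (mean $\theta\,\mathbf{a}$ with $\mathbf{a}$ known, covariance $\theta^2 \mathbf{I}_N$) is an illustrative assumption, not taken from the text; its closed-form Fisher information $\mathbf{a}^T\mathbf{a}/\theta^2 + 2N/\theta^2$ serves as a cross-check.

```python
import numpy as np

def fisher_slepian_bangs(ds, R, dR):
    """Scalar-parameter Fisher information:
    F = ds^T R^{-1} ds + 0.5 * Tr{R^{-1} dR R^{-1} dR}."""
    Rinv = np.linalg.inv(R)
    mean_term = ds @ Rinv @ ds
    cov_term = 0.5 * np.trace(Rinv @ dR @ Rinv @ dR)
    return mean_term + cov_term

# Hypothetical model (illustration only): x ~ N(theta * a, theta^2 * I_N)
N, theta = 4, 2.0
a = np.arange(1.0, N + 1)      # known direction of the mean
ds = a                          # d s_N(theta) / d theta
R = theta**2 * np.eye(N)        # covariance also depends on theta
dR = 2.0 * theta * np.eye(N)    # d R_N(theta) / d theta

F = fisher_slepian_bangs(ds, R, dR)
print(F, (a @ a + 2 * N) / theta**2)  # the two values agree
```

Both the mean-derivative term and the covariance-derivative term contribute here, since the parameter enters the mean and the covariance simultaneously.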
We now consider the two following cases.
Case 1

The signal is the sum of a useful deterministic signal $\mathbf{s}_N(\theta)$ and a noise whose covariance matrix $\mathbf{R}_N(\theta) = \mathbf{R}_N$ does not depend on $\theta$.
Case 2

The signal is a random process with zero mean [$\mathbf{s}_N(\theta) \equiv \mathbf{0}$] and covariance matrix $\mathbf{R}_N(\theta)$.
Let us consider the first case. Thus, from the previous formula, we obtain:
$$
F(\theta) = \left[\frac{\partial \mathbf{s}_N(\theta)}{\partial \theta}\right]^T \mathbf{R}_N^{-1}\, \frac{\partial \mathbf{s}_N(\theta)}{\partial \theta}
$$
as the covariance matrix does not depend on $\theta$. Now let us concentrate on the existence of an efficient estimator. We can write:
$$
\frac{\partial \ln p(\mathbf{x};\theta)}{\partial \theta} = \left[\frac{\partial \mathbf{s}_N(\theta)}{\partial \theta}\right]^T \mathbf{R}_N^{-1}\,\mathbf{x} - \left[\frac{\partial \mathbf{s}_N(\theta)}{\partial \theta}\right]^T \mathbf{R}_N^{-1}\,\mathbf{s}_N(\theta)
$$
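For a mean that is linear in the parameter, $\mathbf{s}_N(\theta) = \theta\,\mathbf{a}$ (a hypothetical choice for illustration, not the text's model), the score above factors as $F(\theta)\,(\hat{\theta}(\mathbf{x}) - \theta)$ with $\hat{\theta}(\mathbf{x}) = \mathbf{a}^T\mathbf{R}_N^{-1}\mathbf{x} / \mathbf{a}^T\mathbf{R}_N^{-1}\mathbf{a}$, so an efficient estimator exists in that case. A Monte Carlo sketch checks that this estimator's variance indeed reaches $1/F(\theta)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Case-1 model (illustration only):
# x = theta * a + n,  n ~ N(0, R),  R independent of theta.
N, theta = 8, 1.5
a = np.linspace(1.0, 2.0, N)                   # known signal direction
R = 0.5 * np.eye(N) + 0.1 * np.ones((N, N))    # known noise covariance
Rinv = np.linalg.inv(R)

F = a @ Rinv @ a        # Fisher information from the Case-1 formula
crlb = 1.0 / F          # Cramer-Rao lower bound

# The score a^T R^{-1} x - (a^T R^{-1} a) theta vanishes at theta_hat:
L = np.linalg.cholesky(R)
trials = 20000
est = np.empty(trials)
for t in range(trials):
    x = theta * a + L @ rng.standard_normal(N)   # one noisy observation
    est[t] = (a @ Rinv @ x) / F                  # efficient estimator

print(est.var(), crlb)  # sample variance close to the CRLB
```

The factorization fails when $\mathbf{s}_N(\theta)$ is nonlinear in $\theta$; then no estimator attains the bound exactly, although maximum likelihood typically does so asymptotically.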