Digital Signal Processing Reference
In-Depth Information
process. Thus, when $N \to \infty$, the first term of $F_N(\theta)$ becomes negligible compared to the second term and we can write:

$$C(\hat{\theta}) \geq \frac{1}{N} \begin{pmatrix} 2\sigma_0^4 & 0 \\ 0 & \sigma_0^2 \, R_p^{-1} \end{pmatrix}$$
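The diagonal entry $2\sigma_0^4/N$ of this bound is the Cramér-Rao bound on the estimated variance of a zero-mean Gaussian white noise, and it can be checked by simulation. A minimal sketch, assuming a white-noise setting with the maximum-likelihood variance estimator $\frac{1}{N}\sum_n x_n^2$ (the sample sizes and parameter values below are illustrative, not from the text):

```python
import numpy as np

# Sketch: Monte Carlo check that the variance of the ML variance estimator of
# zero-mean Gaussian white noise approaches the bound 2*sigma0^4/N.
# (Assumed setting: i.i.d. samples, known zero mean; values are illustrative.)
rng = np.random.default_rng(0)
sigma0_sq = 2.0        # true noise variance sigma_0^2
N = 200                # samples per realization
trials = 20000         # Monte Carlo realizations

x = rng.normal(0.0, np.sqrt(sigma0_sq), size=(trials, N))
sigma_hat = (x ** 2).mean(axis=1)      # ML estimate of sigma_0^2 per realization
empirical_var = sigma_hat.var()        # spread of the estimator across trials

crb = 2.0 * sigma0_sq ** 2 / N         # the 2*sigma0^4/N entry of the bound
print(empirical_var, crb)              # the two values should be close
```

Here the ML estimator attains the bound, which is why the empirical variance matches it rather than merely exceeding it.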
Before concluding this first part, here is an interesting result by Whittle which
expresses the normalized asymptotic Fisher information matrix in certain cases. Let
us consider a stationary Gaussian random process of zero mean and power spectral
density $S_x(f)$. Then [DZH 86, POR 94]:
$$\left[F(\theta_0)\right]_{k,l} = \lim_{N \to \infty} N^{-1} \left[F_N(\theta)\right]_{k,l} = \frac{1}{2} \int_{-1/2}^{1/2} \frac{1}{S_x^2(f)} \, \frac{\partial S_x(f)}{\partial \theta_k} \, \frac{\partial S_x(f)}{\partial \theta_l} \, df \qquad [3.7]$$
This formula helps us, in particular, to obtain extremely simple expressions in the case of ARMA processes (see [FRI 84a, FRI 84b] for example).
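As an illustration, for a real AR(1) process $x_n + a\,x_{n-1} = e_n$ (an assumed example; the coefficient value below is arbitrary), $S_x(f) = \sigma^2 / |1 + a\,e^{-2i\pi f}|^2$, and formula [3.7] evaluated for the coefficient $a$ gives the classical per-sample information $1/(1-a^2)$. A numerical sketch of the integral:

```python
import numpy as np

# Sketch: evaluate Whittle's integral [3.7] numerically for an AR(1) process
# x_n + a*x_{n-1} = e_n (illustrative choice) and compare with the known
# per-sample Fisher information 1/(1 - a^2) for the coefficient a.
a = 0.5
f = np.linspace(-0.5, 0.5, 200001)          # normalized frequency grid
w = 2.0 * np.pi * f
denom = 1.0 + 2.0 * a * np.cos(w) + a * a   # |1 + a e^{-iw}|^2
S = 1.0 / denom                             # S_x(f), taking sigma^2 = 1
dS_da = -(2.0 * np.cos(w) + 2.0 * a) / denom ** 2   # dS_x(f)/da

integrand = dS_da ** 2 / S ** 2             # (1/S^2) (dS/da)^2
step = f[1] - f[0]
fisher = 0.5 * np.sum(integrand[:-1]) * step  # Riemann sum (integrand periodic)
print(fisher, 1.0 / (1.0 - a * a))          # both close to 1.3333
```

Note that $\sigma^2$ cancels in [3.7] for this entry, so the choice $\sigma^2 = 1$ is without loss of generality; this is consistent with the corresponding $\sigma_0^2 R_p^{-1}$ block of the Cramér-Rao bound, since $R_1 = \sigma^2/(1-a^2)$ for an AR(1) process.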
3.1.3. Sequence of estimators
The theoretical elements which have just been given relate to a data vector of fixed dimension $N$. In the context of random processes, we often study the asymptotic behavior of estimators, that is to say when the dimension $N$ of the data vector increases³. This gives rise to a sequence of estimators $\hat{\theta}_N$, and we study the asymptotic behavior of this sequence, that is to say when $N \to \infty$. Before this, we define the types of convergence considered.
Let $\xi_N$ be a sequence of random variables and $a_N$ a sequence of strictly positive real numbers. We say that $\xi_N$ converges in probability to 0 if, whatever $\delta > 0$ may be:

$$\lim_{N \to \infty} P\left\{ \left|\xi_N\right| \geq \delta \right\} = 0$$
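To make the definition concrete, take $\xi_N$ to be the sample mean of $N$ i.i.d. zero-mean, unit-variance Gaussian variables (a hypothetical example, not from the text); by the law of large numbers $\xi_N$ converges in probability to 0, and the probability $P\{|\xi_N| \geq \delta\}$ can be estimated by Monte Carlo:

```python
import numpy as np

# Sketch: illustrate convergence in probability with xi_N = the sample mean
# of N i.i.d. N(0, 1) variables (an assumed example). The estimated
# probability P(|xi_N| >= delta) should shrink toward 0 as N grows.
rng = np.random.default_rng(1)
delta = 0.1
trials = 5000

probs = []
for N in (10, 100, 1000):
    xi = rng.normal(0.0, 1.0, size=(trials, N)).mean(axis=1)
    probs.append(float(np.mean(np.abs(xi) >= delta)))
print(probs)   # a decreasing sequence approaching 0
```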
3. Another, more pragmatic, reason is that we rarely know how to carry out a statistical analysis with finite N, and most of the results require the hypothesis of a large number of samples [STO 98].