$$x_N = \begin{bmatrix} x(0) \\ x(1) \\ \vdots \\ x(N-1) \end{bmatrix}$$
The vector x_N is considered random; for example, it may come from sampling a random process, or it may be written as a deterministic signal to which random additive noise is added. The natural description of x_N is its probability density function (PDF), p(x_N; θ), which is parameterized by θ = [θ_1, θ_2, …, θ_p]^T, an unknown vector whose p components we want to estimate. This means that we have a class of different PDFs, one for each value of θ. Since θ influences p(x_N; θ), we should be able to infer the value of θ from x_N. We therefore consider an estimator θ̂_N of θ built from x_N, that is to say:
$$\hat{\theta}_N = f(x_N) \qquad [3.1]$$
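To make [3.1] concrete, here is a minimal sketch; the signal model x(n) = A + w(n) and the sample-mean estimator are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative model: x(n) = A + w(n), with w(n) white Gaussian noise,
# so theta reduces to the single unknown parameter A (p = 1).
A, sigma, N = 2.0, 0.5, 100
x_N = A + sigma * rng.standard_normal(N)  # one realization of the random vector x_N

# An estimator is just a function f of the data, cf. [3.1]; here, the sample mean.
def f(x):
    return x.mean()

theta_hat_N = f(x_N)
print(f"theta_hat_N = {theta_hat_N:.3f}  (true theta = {A})")
```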
Being a function of x_N, θ̂_N is thus a random vector itself. While the distribution of θ̂_N provides a complete description of this vector, for estimation purposes we are mainly interested in the following two quantities.
The bias represents the average error and is given by:

$$b(\hat{\theta}_N) = E\{\hat{\theta}_N\} - \theta \qquad [3.2]$$
It is desirable to have unbiased estimators, that is to say estimators whose mean value equals the true value of θ.
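As a rough numerical check of [3.2], the bias can be approximated by Monte Carlo, averaging the estimation error over many independent realizations of x_N; the sketch below reuses the assumed model and sample-mean estimator from the previous example:

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma, N, trials = 2.0, 0.5, 100, 10_000

# Draw many independent realizations of x_N and apply the estimator to each.
X = A + sigma * rng.standard_normal((trials, N))
theta_hat = X.mean(axis=1)  # sample-mean estimate for each realization

# Empirical bias b(theta_hat_N) = E{theta_hat_N} - theta, cf. [3.2].
bias = theta_hat.mean() - A
print(f"empirical bias ~= {bias:.4f}")  # close to 0: the sample mean is unbiased
```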
The covariance matrix represents the dispersion around the mean value and is defined by:

$$C_{\hat{\theta}_N} = E\left\{\left[\hat{\theta}_N - E\{\hat{\theta}_N\}\right]\left[\hat{\theta}_N - E\{\hat{\theta}_N\}\right]^T\right\} \qquad [3.3]$$
The diagonal elements of the covariance matrix correspond to the respective variances of the different elements constituting θ̂_N:

$$\left[C_{\hat{\theta}_N}\right]_{k,k} = \mathrm{var}\left(\hat{\theta}_N(k)\right)$$

The off-diagonal terms give an indication of the degree of correlation between the estimates of the different elements. It goes without saying that the smaller the variance, the better the estimation.
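The covariance matrix [3.3] can be approximated empirically in the same way. The sketch below assumes, purely for illustration, a two-component parameter θ = [μ, σ²]^T estimated by the sample mean and the unbiased sample variance, so the diagonal of the resulting matrix gives the variance of each component's estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, var, N, trials = 2.0, 0.25, 100, 10_000

# Assumed illustrative case: theta = [mu, var]^T, estimated per realization
# by the sample mean and the unbiased sample variance.
X = mu + np.sqrt(var) * rng.standard_normal((trials, N))
theta_hat = np.column_stack([X.mean(axis=1), X.var(axis=1, ddof=1)])

# Empirical covariance matrix of the estimates, cf. [3.3].
C = np.cov(theta_hat, rowvar=False)
print("C =\n", C)
print("diag(C):", np.diag(C))  # [C]_{k,k} = var(theta_hat_N(k))
```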