Basic Property.
The conditional law of a gaussian vector given a linear statistic is gaussian. Therefore, the MAP estimate is equal to the mean-square estimate (namely, the conditional expectation), and it is linear.
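As a reminder (a standard result, not spelled out in the text above), for a jointly gaussian pair the conditional expectation has the following closed form, which makes the linearity explicit:
$$
\begin{pmatrix} X \\ Y \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} m_X \\ m_Y \end{pmatrix},\; \begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{pmatrix} \right)
\;\Longrightarrow\;
\mathbb{E}[X \mid Y = y] = m_X + \Sigma_{XY}\,\Sigma_{YY}^{-1}\,(y - m_Y),
$$
an affine function of $y$; the conditional law is gaussian with covariance $\Sigma_{XX} - \Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}$, so its mode (the MAP estimate) and its mean coincide.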
In that framework, let us write the state equation (Markov stochastic process)
$$
X(k+1) = A\,X(k) + B\,u(k) + V(k+1)
$$
and the measurement equation
$$
Y(k) = H\,X(k) + W(k).
$$
Note that the state and observation variables are written with capital letters because that is the usual notation for random variables. The sequence of random vectors $[V(k)]$ is a vector discrete-time white gaussian noise, i.e., a sequence of centered, independent, identically distributed gaussian random vectors. Their common covariance matrix is $Q$. That sequence stands for the state noise, i.e., the model uncertainty. The sequence of random vectors $[W(k)]$ is also a discrete-time gaussian white noise; its covariance matrix is $R$. It models the measurement noise. The state noise and the measurement noise are independent.
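As a minimal illustration of this model, the following sketch simulates one trajectory of the state and measurement equations. The particular matrices $A$, $B$, $H$, $Q$, $R$ and the input sequence are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model matrices (assumptions, not from the text).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # state transition matrix
B = np.array([[0.0],
              [0.1]])        # input matrix
H = np.array([[1.0, 0.0]])   # observation matrix
Q = 0.01 * np.eye(2)         # state-noise covariance
R = np.array([[0.1]])        # measurement-noise covariance

n_steps = 50
x = np.zeros(2)              # initial state X(0)
xs, ys = [], []
for k in range(n_steps):
    u = np.array([1.0])                              # arbitrary input u(k)
    v = rng.multivariate_normal(np.zeros(2), Q)      # state noise V(k+1)
    w = rng.multivariate_normal(np.zeros(1), R)      # measurement noise W(k+1)
    x = A @ x + B @ u + v                            # X(k+1) = A X(k) + B u(k) + V(k+1)
    y = H @ x + w                                    # Y(k+1) = H X(k+1) + W(k+1)
    xs.append(x)
    ys.append(y)
```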
The filtering problem consists in reconstructing, at time $k+1$, the current state given the past and present measurements. The available information is gathered in the vector $y(k+1) = [y(1), \ldots, y(k+1)]$. The criterion is the quadratic difference between the estimate $\hat{X}(k+1)$ and the true value of the state $X(k+1)$.
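In symbols, the mean-square criterion just described reads
$$
J(k+1) = \mathbb{E}\!\left[ \left\| \hat{X}(k+1) - X(k+1) \right\|^2 \right],
$$
to be minimized over all estimators $\hat{X}(k+1)$ that are functions of $y(1), \ldots, y(k+1)$.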
It is a classical estimation problem in the linear gaussian model. It has been stated that the optimal solution $\hat{X}(k+1)$ is the linear regression of the random state $X(k+1)$ onto the random vector $Y(k+1) = [Y(1); \ldots; Y(k+1)]$, which stands for the available information.
In order to compute the linear regression, let us split the vector $Y(k+1)$ into the sum of two uncorrelated random vectors: the vector $Y(k)$ and the residual of $Y(k+1)$ onto $Y(k)$. Then the linear regression onto the vector $Y(k+1)$ will be the sum of the two linear regressions onto its uncorrelated components (from the orthogonal projection theorem). Therefore, we can first compute the regression of the current measurement $Y(k+1)$ onto $Y(k)$. We start from
$$
Y(k+1) = H\,X(k+1) + W(k+1) = H\,A\,X(k) + H\,B\,u(k) + H\,V(k+1) + W(k+1).
$$
Because $V(k+1)$ and $W(k+1)$ are independent of the past (from the white noise assumption), the regression is equal to
$$
H\,A\,\hat{X}(k) + H\,B\,u(k),
$$
where $\hat{X}(k)$ is the optimal estimate of $X(k)$ given $Y(k)$.