We define $\hat{X}(k)$ as the least squares optimal estimate, i.e., the linear regression of the random state vector $X(k)$ onto the random vector of past measurements up to time $k$: $Y(k) = [Y(1); \ldots; Y(k)]$. Let $\vartheta(k+1)$ be the innovation at time $k+1$. It is defined by
$$\vartheta(k+1) = Y(k+1) - HA\hat{X}(k) - HBu(k).$$
The innovation filter recursive equation is
$$\hat{X}(k+1) = A\hat{X}(k) + Bu(k) + K_{k+1}\,\vartheta(k+1),$$
where the innovation gain is inferred from the computation formula of linear regression:
$$K_{k+1} = \operatorname{Cov}[X(k+1), \vartheta(k+1)]\,\operatorname{Var}[\vartheta(k+1)]^{-1}.$$
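As a concrete illustration, the recursion maps directly onto a few lines of linear algebra. The sketch below (Python with NumPy; the model matrices and the gain value are placeholders chosen for illustration, since the gain is only derived from the covariances computed below) forms the innovation $\vartheta(k+1)$ and the updated estimate $\hat{X}(k+1)$:

```python
import numpy as np

# Hypothetical 2-state / 1-measurement model (illustrative values only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # state transition matrix
B = np.array([[0.0],
              [0.1]])        # control matrix
H = np.array([[1.0, 0.0]])   # measurement matrix

x_hat  = np.array([0.0, 1.0])   # current estimate X_hat(k)
u      = np.array([0.5])        # control u(k)
y_next = np.array([0.12])       # new measurement Y(k+1)
K      = np.array([[0.6],       # placeholder gain K_{k+1}; its actual value
                   [0.3]])      # comes from the covariance formulas below

# Innovation: the part of Y(k+1) not predicted from the current estimate.
innovation = y_next - H @ (A @ x_hat + B @ u)

# Innovation filter recursion: predict, then correct along the innovation.
x_hat_next = A @ x_hat + B @ u + K @ innovation
```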
$P_k$ stands for the covariance matrix of the estimation error $X(k) - \hat{X}(k)$, and $P_{k+1}$ stands for the covariance matrix of the prediction error
$$X(k+1) - A\hat{X}(k) - Bu(k).$$
Let us compute the covariance of the prediction error. One obtains
$$X(k+1) - A\hat{X}(k) - Bu(k) = A[X(k) - \hat{X}(k)] + V(k+1).$$
Because $V(k+1)$ is uncorrelated with $X(k) - \hat{X}(k)$, the prediction error covariance propagation equation is easily computed using a quadratic expansion:
$$P_{k+1} = A P_k A^T + Q.$$
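To make the quadratic expansion explicit, write the estimation error as $e(k) = X(k) - \hat{X}(k)$, so that the prediction error is $Ae(k) + V(k+1)$. Taking the variance and dropping the cross terms, which vanish because $V(k+1)$ is uncorrelated with $e(k)$, gives
$$P_{k+1} = \operatorname{Var}[Ae(k) + V(k+1)] = A\operatorname{Var}[e(k)]A^T + \operatorname{Var}[V(k+1)] = A P_k A^T + Q.$$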
From the definition of the innovation,
$$\vartheta(k+1) = Y(k+1) - HA\hat{X}(k) - HBu(k) = H\bigl\{A[X(k) - \hat{X}(k)] + V(k+1)\bigr\} + W(k+1).$$
The value of its covariance matrix is deduced in a similar way, expressed as a function of the prediction error covariance at time $k+1$:
$$\operatorname{Var}[\vartheta(k+1)] = H P_{k+1} H^T + R.$$
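The intermediate step, under the same uncorrelatedness assumptions as above, is
$$\operatorname{Var}[\vartheta(k+1)] = H\operatorname{Var}[Ae(k) + V(k+1)]\,H^T + \operatorname{Var}[W(k+1)],$$
where the first term equals $HP_{k+1}H^T$ by the propagation equation and the second equals $R$, since $W(k+1)$ is uncorrelated with the prediction error.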
Let us compute the covariance between the state $X(k+1)$ and the innovation $\vartheta(k+1)$:
$$\operatorname{Cov}[X(k+1), \vartheta(k+1)] = \operatorname{Cov}\bigl[X(k+1),\, Y(k+1) - HA\hat{X}(k) - HBu(k)\bigr]$$
$$= \operatorname{Cov}\bigl\{AX(k) + V(k+1),\; HA[X(k) - \hat{X}(k)] + HV(k+1) + W(k+1)\bigr\}$$
$$= \operatorname{Cov}\bigl\{AX(k),\; HA[X(k) - \hat{X}(k)]\bigr\} + QH^T,$$
since $W(k+1)$ is uncorrelated with all the other terms and $V(k+1)$ is uncorrelated with both $X(k)$ and $\hat{X}(k)$, so that only $\operatorname{Cov}[V(k+1), V(k+1)]H^T = QH^T$ survives among the cross terms.
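Putting the pieces together: by the orthogonality of the least squares estimation error, $\operatorname{Cov}[X(k), X(k) - \hat{X}(k)] = P_k$, so the covariance above reduces to $AP_kA^TH^T + QH^T = P_{k+1}H^T$, and the gain takes the standard form $K_{k+1} = P_{k+1}H^T\operatorname{Var}[\vartheta(k+1)]^{-1}$. Below is a minimal sketch of one complete filter step under that standard form (Python with NumPy; the function name, model matrices, and numerical values are hypothetical, and the final covariance update is the standard one, not derived in this excerpt):

```python
import numpy as np

def innovation_filter_step(x_hat, P, u, y_next, A, B, H, Q, R):
    """One recursion of the innovation filter.

    x_hat : current estimate X_hat(k);  P : estimation error covariance P_k.
    Returns X_hat(k+1) and the updated estimation error covariance.
    (Hypothetical helper, assembled from the covariance formulas above.)
    """
    # Prediction error covariance: P_{k+1} = A P_k A^T + Q.
    P_pred = A @ P @ A.T + Q

    # Innovation and its variance: Var[theta(k+1)] = H P_{k+1} H^T + R.
    innovation = y_next - H @ (A @ x_hat + B @ u)
    S = H @ P_pred @ H.T + R

    # Innovation gain: Cov[X(k+1), theta(k+1)] Var[theta(k+1)]^{-1},
    # with Cov[X(k+1), theta(k+1)] = P_{k+1} H^T as derived above.
    K = P_pred @ H.T @ np.linalg.inv(S)

    # Recursive update of the estimate and of its error covariance
    # (standard covariance update, not derived in this excerpt).
    x_hat_next = A @ x_hat + B @ u + K @ innovation
    P_next = P_pred - K @ H @ P_pred
    return x_hat_next, P_next

# Minimal usage example with the illustrative model from earlier.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.04]])

x_hat = np.array([0.0, 1.0])
P = np.eye(2)
x_hat, P = innovation_filter_step(x_hat, P, np.array([0.5]),
                                  np.array([0.12]), A, B, H, Q, R)
```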