An iteration of the covariance and Kalman gain updates is
P_{k+1} = A(k) P_k A(k)^T + Q(k+1)
K_{k+1} = P_{k+1} H(k+1)^T [H(k+1) P_{k+1} H(k+1)^T + R(k+1)]^{-1}
P_{k+1} = [I - K_{k+1} H(k+1)] [A(k) P_k A(k)^T + Q(k+1)] [I - K_{k+1} H(k+1)]^T + K_{k+1} R(k+1) K_{k+1}^T.
The innovation sequence is uncorrelated, as before. On the other hand, there is no
steady-state regime, and the stability of the filter is no longer guaranteed.
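As a concrete illustration of the recursion above, here is a minimal NumPy sketch of one covariance and gain update; the function and variable names are ours and purely illustrative. The covariance update is written in the Joseph form given above, which is less sensitive to the numerical loss of symmetry and positivity mentioned below.

```python
import numpy as np

def covariance_gain_update(P_k, A_k, H_k1, Q_k1, R_k1):
    """One iteration of the covariance and Kalman gain updates above.

    P_k              : filtered state covariance at step k
    A_k              : state-transition matrix A(k)
    H_k1, Q_k1, R_k1 : H(k+1), Q(k+1), R(k+1)
    Returns (K_{k+1}, P_{k+1}).
    """
    # Predicted covariance: A(k) P_k A(k)^T + Q(k+1)
    P_pred = A_k @ P_k @ A_k.T + Q_k1

    # Kalman gain: P_pred H^T [H P_pred H^T + R]^{-1}
    S = H_k1 @ P_pred @ H_k1.T + R_k1            # innovation covariance
    K = P_pred @ H_k1.T @ np.linalg.inv(S)

    # Updated covariance in Joseph form, which preserves symmetry and
    # positive semi-definiteness better than the short-form update
    I_KH = np.eye(P_pred.shape[0]) - K @ H_k1
    P_new = I_KH @ P_pred @ I_KH.T + K @ R_k1 @ K.T
    return K, P_new
```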
We give here only the principle of the algorithm. In practice, difficulties arise
when the dimension of the state space is too large: the computation becomes too
expensive, the inversion of the covariance matrix may fail, or the positivity
constraint on the covariance matrix may be violated. With some special care these
difficulties can be overcome. For more details, see [Anderson 1979; Haykin 1996].
4.4.3 Extension of the Kalman Filter
4.4.3.1 Case of Nonlinear Systems
Filtering nonlinear dynamic systems is a difficult issue and a field of active
research. Neural networks are one of the tools that make this task tractable.
For an introduction to nonlinear filtering that is both rigorous and
application-oriented, one may consult the older textbook [Jazwinsky 1970]; it is
clear and well written, but it does not deal with numerical filtering. The paper
[Levin 1997] gives a much shorter introduction and, moreover, was written to
introduce neural filtering. We will not address the general subject of nonlinear
filtering here. Specifically, we will not address observability problems, which
deserve a separate development.
The scope of this section is limited to presenting a convenient formal framework
for extended Kalman filtering, which is a common technique. That technique will
be used below for the training of neural networks. Consider a time-invariant,
nonlinear, controlled dynamical system with additive state noise and measurement
noise. Its state equation is
X(k+1) = f[X(k), u(k)] + V(k+1)
and its measurement equation is
Y(k) = h[X(k)] + W(k),
where the covariance matrices of the Gaussian white noise are denoted by Q(x)
and R(x) for the state noise and the measurement noise, respectively. That means
that the noise laws are defined as conditional Gaussian probability distributions
given the current state; this is a Markov model.
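To show how the extended filter handles such a model, the following sketch performs one prediction/correction step using a first-order linearization of f and h around the current estimate. This is only a sketch under the assumption that the Jacobians of f and h are available; the names (f, h, jac_f, jac_h) are hypothetical and not part of the text.

```python
import numpy as np

def ekf_step(x_est, P, u, y, f, h, jac_f, jac_h, Q, R):
    """One extended Kalman filter step for
       X(k+1) = f[X(k), u(k)] + V(k+1),   Y(k) = h[X(k)] + W(k).

    f, h         : state-transition and measurement functions
    jac_f, jac_h : their Jacobians with respect to the state
    Q, R         : state-noise and measurement-noise covariances
    """
    # Prediction: propagate the estimate through f and linearize around it
    x_pred = f(x_est, u)
    A = jac_f(x_est, u)                  # plays the role of A(k)
    P_pred = A @ P @ A.T + Q

    # Correction: linearize the measurement function around the prediction
    H = jac_h(x_pred)                    # plays the role of H(k+1)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)

    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```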