or may change slowly. Several methods have been proposed to estimate on-line both the current state X(k) and the parameter θ. In the extended Kalman filter approach, the parameter θ is incorporated into the state. The extended model equations then become
\[
\begin{aligned}
X(k+1) &= A[\theta(k)]\,X(k) + B(k)\,u(k) + V_1(k+1),\\
\theta(k+1) &= \theta(k) + V_2(k+1),\\
Y(k) &= H[\theta(k)]\,X(k) + W(k).
\end{aligned}
\]
The state noise [V2(k)], which allows the parameter to vary, is artificial for a model with constant parameters; nevertheless, it improves the operation of the filter because it helps to stabilize the algorithm [Haykin 1999]. Independence and stationarity of [V1(k)] and [V2(k)] are assumed here for the sake of simplicity; those assumptions must sometimes be dropped. Applying the linearization technique of the extended Kalman filter, described in the previous paragraph, yields the following equations:
\[
\begin{aligned}
\hat{X}(k+1) &= A[\hat{\theta}(k)]\,\hat{X}(k) + B(k)\,u(k) + K_{1,k+1}\,\vartheta(k+1),\\
\hat{\theta}(k+1) &= \hat{\theta}(k) + K_{2,k+1}\,\vartheta(k+1),
\end{aligned}
\]
with the same notation for innovation as in the linear case,
\[
\vartheta(k+1) = Y(k+1) - H[\hat{\theta}(k)]\left\{ A[\hat{\theta}(k)]\,\hat{X}(k) + B(k)\,u(k) \right\}.
\]
Note that the parameter and state estimates are updated simultaneously, using the same innovation but different Kalman gains. The recursions for the covariance updates and the Kalman gain computations are derived as in the previous section.
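To make the recursion concrete, the following minimal sketch performs one time step of the joint filter on the augmented state z = [X; θ]. It is written in the standard predict/update form of the extended Kalman filter, which is equivalent to the combined equations above; the scalar parameterization (A[θ] = θ, H[θ] = 1), the finite-difference Jacobians, and all function names are illustrative assumptions, not part of the method described in the text.

import numpy as np

# Toy parameterization (assumed for illustration): scalar state x and scalar
# parameter theta, with A[theta] = theta and H[theta] = 1.
def f(z, u):
    """Augmented transition for z = [x, theta]: x' = theta*x + u, theta' = theta."""
    x, theta = z
    return np.array([theta * x + u, theta])

def h(z):
    """Augmented observation: y = x."""
    x, theta = z
    return np.array([x])

def jacobian(func, z, *args, eps=1e-6):
    """Finite-difference Jacobian of func with respect to z."""
    f0 = func(z, *args)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (func(z + dz, *args) - f0) / eps
    return J

def ekf_step(z, P, u, y, Q, R):
    """One predict/update cycle of the joint extended Kalman filter."""
    # Prediction (time update), linearized around the current estimates
    F = jacobian(f, z, u)
    z_pred = f(z, u)
    P_pred = F @ P @ F.T + Q               # Q holds cov(V1) and the artificial cov(V2)

    # Innovation, gain and correction (measurement update)
    Hj = jacobian(h, z_pred)
    innovation = y - h(z_pred)
    S = Hj @ P_pred @ Hj.T + R
    K = P_pred @ Hj.T @ np.linalg.inv(S)   # rows of K play the roles of K_1 and K_2
    z_new = z_pred + K @ innovation
    P_new = (np.eye(z.size) - K @ Hj) @ P_pred
    return z_new, P_new

# Example: one update from a prior estimate z = [x_hat, theta_hat].
z, P = np.array([0.0, 0.5]), np.eye(2)
Q = np.diag([1e-2, 1e-5])                  # small, nonzero cov(V2) for the parameter
z, P = ekf_step(z, P, u=1.0, y=np.array([0.3]), Q=Q, R=1e-2 * np.eye(1))

Setting the parameter block of Q to zero tends to freeze the parameter estimate; keeping it small but nonzero is the stabilizing role of the artificial noise V2 mentioned above.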
Though its computer implementation is quite simple when the state dimension is not too large, using the extended Kalman filter to estimate both the parameter and the state has major drawbacks: stability issues and strong dependence on initialization. Therefore, whenever possible, more sophisticated (though trickier to implement) methods are preferred because they are more reliable. Such methods generally combine Kalman filtering techniques for state estimation with Bayesian or maximum-likelihood techniques for parameter estimation.
4.4.3.3 Adaptive Training of Neural Networks Using Kalman Filtering
Figure 4.13 provides the diagram of a neural network training algorithm using
the extended Kalman filter.
The system under estimation is the neural network itself, which is supposed to be a model of the process that generated the training set. In fact, its state is the configuration of the network (i.e., the set of all the parameters of the network).
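A minimal sketch of such a training scheme is given below, under simple assumptions: a one-hidden-layer network whose weight vector plays the role of the state, a random-walk model for the weights (the small artificial noise discussed above), and finite-difference Jacobians of the network output. The architecture, the toy data-generating process, and the names (mlp_output, ekf_train_step, etc.) are illustrative assumptions, not a reproduction of the algorithm of Fig. 4.13.

import numpy as np

def mlp_output(w, x, n_hidden=3):
    """Single-output network with one tanh hidden layer; w stacks all weights and biases."""
    n_in = x.size
    k = 0
    W1 = w[k:k + n_hidden * n_in].reshape(n_hidden, n_in); k += n_hidden * n_in
    b1 = w[k:k + n_hidden]; k += n_hidden
    W2 = w[k:k + n_hidden]; k += n_hidden
    b2 = w[k]
    return np.array([W2 @ np.tanh(W1 @ x + b1) + b2])

def output_jacobian(w, x, eps=1e-6):
    """Finite-difference Jacobian of the network output with respect to the weights."""
    y0 = mlp_output(w, x)
    H = np.zeros((y0.size, w.size))
    for i in range(w.size):
        dw = np.zeros_like(w)
        dw[i] = eps
        H[:, i] = (mlp_output(w + dw, x) - y0) / eps
    return H

def ekf_train_step(w, P, x, y, q=1e-5, r=1e-2):
    """One extended Kalman filter update of the weight vector from a single example (x, y)."""
    P = P + q * np.eye(w.size)          # random-walk model: artificial parameter noise
    H = output_jacobian(w, x)           # linearization of the network around the current weights
    innovation = np.atleast_1d(y) - mlp_output(w, x)
    S = H @ P @ H.T + r * np.eye(innovation.size)
    K = P @ H.T @ np.linalg.inv(S)
    w = w + K @ innovation
    P = (np.eye(w.size) - K @ H) @ P
    return w, P

# Minimal usage: adapt the weights on-line, one example at a time.
rng = np.random.default_rng(0)
n_in, n_hidden = 2, 3
n_w = n_hidden * n_in + n_hidden + n_hidden + 1
w, P = 0.1 * rng.standard_normal(n_w), np.eye(n_w)
for _ in range(200):
    x = rng.standard_normal(n_in)
    y = np.sin(x[0]) + 0.5 * x[1] + 0.05 * rng.standard_normal()   # assumed toy process
    w, P = ekf_train_step(w, P, x, y)

In this reading, the innovation is the modelling error on the current example, and the Kalman gain converts it into a weight update, which is what makes the procedure suitable for adaptive (on-line) training.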