$$
\operatorname{Cov}[X(k+1),\, Y(k+1) - HA\hat{X}(k) - HBu(k)]
= A \operatorname{Cov}[X(k),\, X(k)-\hat{X}(k)]\, A^T H^T + \operatorname{Cov}[V(k+1),\, HV(k+1)+W(k+1)]
$$
$$
= A \operatorname{Cov}[X(k),\, X(k)-\hat{X}(k)]\, A^T H^T + \operatorname{Var}[V(k+1)]\, H^T .
$$
Yet, since $\hat{X}(k)$ is uncorrelated with the estimation error $X(k)-\hat{X}(k)$, one gets
$$
\operatorname{Cov}[X(k),\, X(k)-\hat{X}(k)] = \operatorname{Var}[X(k)-\hat{X}(k)] = P_k .
$$
Therefore,
$$
\operatorname{Cov}[X(k+1),\, Y(k+1) - HA\hat{X}(k) - HBu(k)] = (A P_k A^T + Q)\, H^T = P_{k+1}^{-} H^T ,
$$
where $P_{k+1}^{-} = A P_k A^T + Q$ denotes the covariance of the prediction error at time $k+1$. Finally, we obtain
$$
K_{k+1} = P_{k+1}^{-} H^T \left[ H P_{k+1}^{-} H^T + R \right]^{-1} .
$$
In order to iterate this recursive algorithm, let us finally compute the covariance matrix of the estimation error at time $k+1$. From its value,
$$
X(k+1) - \hat{X}(k+1) = A[X(k) - \hat{X}(k)] + V(k+1) - K_{k+1}\left[ Y(k+1) - HA\hat{X}(k) - HBu(k) \right]
$$
$$
= (I - K_{k+1}H)\left\{ A[X(k) - \hat{X}(k)] + V(k+1) \right\} - K_{k+1} W(k+1) ,
$$
the expression of the covariance matrix can be derived:
$$
P_{k+1} = (I - K_{k+1}H)(A P_k A^T + Q)(I - K_{k+1}H)^T + K_{k+1} R K_{k+1}^T .
$$
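The update equations above can be collected into a single filter iteration. The following is a minimal numpy sketch, not code from the text; the matrix values used to exercise it below are illustrative assumptions.

```python
import numpy as np

# One iteration of the Kalman filter for the linear model
#   X(k+1) = A X(k) + B u(k) + V(k+1),   Var[V] = Q
#   Y(k+1) = H X(k+1) + W(k+1),          Var[W] = R

def kalman_step(x_hat, P, u, y_next, A, B, H, Q, R):
    """Return (x_hat(k+1), P_{k+1}) given x_hat(k), P_k, u(k), Y(k+1)."""
    # Prediction of the state and of the prediction-error covariance:
    # P- = A P_k A^T + Q
    x_pred = A @ x_hat + B @ u
    P_pred = A @ P @ A.T + Q
    # Gain: K = P- H^T [H P- H^T + R]^{-1}
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Update with the innovation Y(k+1) - H x_pred
    x_next = x_pred + K @ (y_next - H @ x_pred)
    # Covariance in the (Joseph) form derived above:
    # P = (I - KH) P- (I - KH)^T + K R K^T
    I = np.eye(P.shape[0])
    P_next = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T
    return x_next, P_next
```

Note that the quadratic form for $P_{k+1}$ keeps the computed covariance symmetric and positive semidefinite even when the gain is computed with rounding errors, which is why it is often preferred in practice over the algebraically equivalent $(I - K_{k+1}H)P_{k+1}^{-}$.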
4.7.2 The Delay Distribution Is Crucial for Recurrent Network
Dynamics
In this chapter, we provided examples of recurrent neural networks. Most of
them were of the input-output type, i.e., they were built from a feedforward
neural network whose outputs are fed back to the input with a unit time delay.
Other recurrent network models have been shown in this chapter (Hopfield
and Elman models) and in Chap. 2 (“gray box” modeling taking into account
algebraic and differential equations from prior knowledge for the network architecture).
Let us emphasize that, for a recurrent neural network, the delay distribution has to be specified. If it is neglected, the network behavior is not properly defined. To illustrate this, Fig. 4.21 shows a comparison of the delay specification for a network without any closed circuit in the connection graph (a feedforward network) and for a recurrent network with circuits in the connection graph.
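The point can be made concrete with a minimal sketch (weights and delays are illustrative assumptions, not taken from the text): the same connection graph, a single neuron whose output is fed back to its input, produces different trajectories depending on the delay assigned to the feedback connection.

```python
import numpy as np

def simulate(u, feedback_delay, w_in=0.8, w_fb=0.9):
    """Simulate y(k) = tanh(w_in * u(k) + w_fb * y(k - feedback_delay)),
    with zero initial conditions; returns the output sequence."""
    y = np.zeros(len(u) + feedback_delay)
    for k in range(len(u)):
        # the output is fed back with the chosen delay
        y[k + feedback_delay] = np.tanh(w_in * u[k] + w_fb * y[k])
    return y[feedback_delay:]

u = np.ones(10)                      # constant input sequence
y1 = simulate(u, feedback_delay=1)   # unit delay on the feedback
y2 = simulate(u, feedback_delay=2)   # two-step delay, same graph
```

The two runs share the same connection graph and weights, yet the output sequences diverge after the first step: until every feedback delay is specified, the dynamics of the network are not defined.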
Pictures (a) and (b) show the graph of an elementary feedforward network with four connections. In pictures (c) and (d), feedback was added. The