$$
\begin{aligned}
x_1(k+1) &= \psi_1\left[x_1(k),\, x_1(k-1),\, x_2(k-1),\, x_3(k-1),\, u(k-1)\right],\\
x_2(k+1) &= \psi_2\left[x_1(k+1),\, x_3(k+1)\right],\\
x_3(k+1) &= \psi_3\left[x_3(k),\, x_3(k-1),\, x_1(k-1),\, x_2(k),\, x_2(k-1)\right],\\
y(k+1) &= x_3(k+1).
\end{aligned}
$$
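For concreteness, the sketch below iterates these equations in Python. The functions ψ1, ψ2 and ψ3 are not specified in the text, so the tanh parameterizations and weights used here are purely illustrative.

```python
import numpy as np

# Hypothetical smooth parameterizations of psi_1, psi_2, psi_3; the text
# leaves these functions unspecified, so the weights below are arbitrary.
def psi1(x1, x1d, x2d, x3d, ud):
    return np.tanh(0.5 * x1 + 0.2 * x1d - 0.3 * x2d + 0.1 * x3d + 0.7 * ud)

def psi2(x1, x3):
    return np.tanh(0.8 * x1 - 0.4 * x3)

def psi3(x3, x3d, x1d, x2, x2d):
    return np.tanh(0.6 * x3 - 0.1 * x3d + 0.2 * x1d + 0.3 * x2 - 0.2 * x2d)

def simulate(u):
    """Iterate the model equations; the 'd' suffix marks values at k-1."""
    x1 = x1d = x2 = x2d = x3 = x3d = ud = 0.0
    y = []
    for uk in u:
        x1_next = psi1(x1, x1d, x2d, x3d, ud)   # uses values at k and k-1
        x3_next = psi3(x3, x3d, x1d, x2, x2d)
        x2_next = psi2(x1_next, x3_next)        # static map of the new state
        y.append(x3_next)                       # y(k+1) = x3(k+1)
        x1d, x1 = x1, x1_next                   # shift the delay lines
        x2d, x2 = x2, x2_next
        x3d, x3 = x3, x3_next
        ud = uk                                 # becomes u(k-1) at the next step
    return y

print(simulate([1.0] * 5))
```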
The explicit Euler discretization method consists in approximating the time derivative of a function $f(t)$ at time $kT$ (where $T$ is the sampling period, or integration step, and $k$ is a positive integer) by
$$
\frac{f[(k+1)T] - f(kT)}{T}.
$$
The question of the discretization of continuous-time differential equations
is discussed in more detail in the section devoted to semiphysical modeling.
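As a minimal illustration of the formula above, the sketch below applies explicit Euler to the first-order equation dx/dt = −x + u; this particular equation is chosen here for the example and is not taken from the text.

```python
import numpy as np

# Explicit Euler replaces the derivative at time kT by {x[(k+1)T] - x(kT)}/T,
# i.e. it steps the state forward as x(k+1) = x(k) + T * dx/dt evaluated at kT.
T = 0.1                      # sampling period / integration step
u = 1.0                      # constant input, for illustration
x = 0.0
for k in range(50):
    x = x + T * (-x + u)     # x(k+1) = x(k) + T * f[x(k), u(k)]

# Exact solution of dx/dt = -x + u from x(0) = 0, for comparison:
print(x, u * (1.0 - np.exp(-T * 50)))
```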
Clearly, the above equations are not in canonical form. For a sound analysis of the model, and for training it if the functions are parameterized, it is very desirable to know the minimum number of variables necessary for a complete description of the model (its order), and to put the model in canonical form. Note that a given recurrent neural network does not have a unique canonical form: in general, several different canonical forms can be derived; obviously, they all have the same number of state variables.
The network graph is useful for deriving the canonical form. Its nodes are the neurons, and its edges are the connections between neurons; each edge is assigned a length, which is the delay (possibly equal to zero) expressed as an integer multiple of the sampling period, and a direction, which is the direction of information flow along the edge. The length of a path in the graph is the sum of the lengths of the edges that belong to the path.
A cycle in a graph is a path that starts and ends at the same node, with-
out going through the same node more than once, and complying with the
directions of the edges. The length of a cycle is the sum of the lengths of its
edges.
For a discrete-time neural network to be causal, its graph must have no cycle of length zero: if such a cycle existed, the output of some neuron would depend on the value of that same output at the same time step.
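A small sketch of this causality test: since delays are non-negative integers, a cycle of total length zero can only consist of zero-delay edges, so it suffices to check that the subgraph of zero-delay edges is acyclic. The edge-list encoding below is an assumption of this sketch, not the book's notation.

```python
def is_causal(edges):
    """edges: (source, destination, delay) triples of the network graph."""
    # Keep only the zero-delay edges: the network is causal if and only if
    # this subgraph contains no cycle.
    adj = {}
    for src, dst, delay in edges:
        if delay == 0:
            adj.setdefault(src, []).append(dst)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(node):
        color[node] = GRAY
        for nxt in adj.get(node, ()):
            c = color.get(nxt, WHITE)
            if c == GRAY:                 # back edge: zero-length cycle
                return False
            if c == WHITE and not dfs(nxt):
                return False
        color[node] = BLACK
        return True

    return all(dfs(n) for n in list(adj) if color.get(n, WHITE) == WHITE)

# Hypothetical three-neuron graph: a zero-delay connection inside a cycle is
# harmless as long as the cycle's total delay is nonzero.
print(is_causal([(1, 2, 1), (2, 3, 0), (3, 1, 1)]))  # True
print(is_causal([(2, 3, 0), (3, 2, 0)]))             # False: zero-length cycle
```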
Figure 2.49 shows a representation of the equations of the model as the graph of a recurrent neural network; nodes 1, 2 and 3 represent neurons whose activation functions are $\psi_1$, $\psi_2$ and $\psi_3$, respectively. The numbers in squares are the delays associated with each connection, expressed as numbers of sampling periods.
The vector $z(k) = \left[x_1(k),\, x_1(k-1),\, x_3(k),\, x_3(k-1)\right]^{T}$ can be chosen as a state vector.
The corresponding canonical form is shown in Fig. 2.49. It consists of a feedforward neural network with three hidden neurons (neuron 1, plus neuron 2, which is duplicated in the canonical form with shared weights) and an output neuron (neuron 3), which is also a state neuron. Since the order of the model is 4, there are four state outputs, which are connected back to the state inputs through unit delays, denoted by the conventional delay operator symbol $q^{-1}$.
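Reusing the hypothetical ψ parameterizations from the first sketch, a one-step update of this canonical form might look as follows; note how neuron 2 is evaluated twice with the same weights, once on the current state components and once on the delayed ones.

```python
def canonical_step(z, ud):
    """One step of the canonical form z(k+1) = phi[z(k), u(k-1)], with
    z(k) = [x1(k), x1(k-1), x3(k), x3(k-1)] and psi1..psi3 as defined above."""
    x1, x1d, x3, x3d = z
    x2  = psi2(x1,  x3)     # neuron 2, first copy: x2(k)
    x2d = psi2(x1d, x3d)    # neuron 2, second copy (shared weights): x2(k-1)
    x1_next = psi1(x1, x1d, x2d, x3d, ud)
    x3_next = psi3(x3, x3d, x1d, x2, x2d)
    # The unit delays q^{-1} shift x1(k) and x3(k) into the delayed slots.
    return [x1_next, x1, x3_next, x3], x3_next   # (z(k+1), y(k+1))
```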