Fig. 1.7. The canonical form (right-hand side) of the network shown on Fig. 1.5 (left-hand side). That network has a single state variable x(kT) (output of neuron 3): it is a first-order network. The gray part of the canonical form is a feedforward neural network with inputs y3(kT), u1(kT) and y4[(k−1)T]; therefore, its output is g(kT), which is the output of the network. Hence, both networks are functionally equivalent.
Recurrent neural networks (and their canonical form) will be investigated
in detail in Chaps. 2, 4 and 8.
1.1.1.5 Summary
In the present section, we stated the basic definitions that are relevant to the
neural networks investigated in the present book. We made a specific distinction
between:
• Feedforward (or static) neural networks, which implement nonlinear functions of their inputs,
• Recurrent (or dynamic) neural networks, which are governed by nonlinear discrete-time recurrent equations.
In addition, we showed that any recurrent neural network can be cast into a
canonical form, which is made of a feedforward neural network whose outputs
are fed back to its inputs with a unit time delay.
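The canonical form described above can be illustrated with a minimal sketch: a feedforward block computes the new state from the delayed state and the current input, and the simulation loop provides the unit time delay. The single tanh neuron, the weight values, and the function names below are illustrative assumptions, not taken from Fig. 1.5 or Fig. 1.7.

```python
import math

def feedforward(x_prev, u):
    """Feedforward part of a hypothetical canonical form: computes the
    state x(kT) from the delayed state x[(k-1)T] and the input u(kT).
    Weights are illustrative only."""
    w_x, w_u, b = 0.5, 1.0, 0.1
    # One neuron, one state variable: a first-order network.
    return math.tanh(w_x * x_prev + w_u * u + b)

def run_canonical_form(inputs, x0=0.0):
    """Simulate the recurrent network: the output of the feedforward
    block is fed back to its input with a unit time delay."""
    x = x0
    outputs = []
    for u in inputs:
        x = feedforward(x, u)   # x(kT) depends on x[(k-1)T] and u(kT)
        outputs.append(x)       # here the state is also the output g(kT)
    return outputs
```

The loop makes the recurrence explicit: at each discrete time kT, the feedforward block sees only the current input and the state computed at the previous step, which is exactly the structure of the canonical form.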
Thus, the basic element of any neural network is a feedforward neural
network. Therefore, we will first study feedforward neural networks in detail.
Before investigating their properties and applications, we will consider the
concept of training.