exponential functions, logistic sigmoid functions, etc. Furthermore,
different activation functions can be applied in different layers of the
same network. Activation functions in neural networks should be
differentiable. The activation function of the input layer neurons is
usually linear, since it should preferably pass the initial (scaled)
values on without any change.
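For illustration only (a sketch of our own, not from the source), the
logistic sigmoid and linear activations mentioned above, together with
the derivatives that gradient-based training requires, could be written
in Python as follows; all function names here are illustrative:

    import numpy as np

    # Logistic sigmoid: smooth and differentiable, squashes any
    # input into the interval (0, 1).
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Its derivative, needed for gradient-based weight adjustment.
    def sigmoid_derivative(x):
        s = sigmoid(x)
        return s * (1.0 - s)

    # Linear activation, typical for input layer neurons: passes the
    # (scaled) values through unchanged; its derivative is constant.
    def linear(x):
        return x

    def linear_derivative(x):
        return np.ones_like(x)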
The organization of an ANN is usually referred to as its topology or
architecture. In a typical ANN, network connections are of the feed-
forward type, which means that once the signal is transferred to
subsequent neurons in the network, it is not sent back to neurons in the
previous layer(s). Feed-forward architecture does not have a connection
back from the output to the input neurons and therefore does not keep a
record of its previous output values (Agatonovic-Kustrin and Beresford,
2000). Feedback connections are somewhat more complex, since they form
cycles through which information is sent back and elaborated over time.
Each neuron has one additional weighted input, which allows recurrence
of the signal.
These kinds of networks are also called recurrent or dynamic.
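As a hedged sketch of this distinction (our own example; the names W_in
and W_rec are illustrative), a single feed-forward step and a single
recurrent step might look like this in Python:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)           # input signal
    W_in = rng.normal(size=(4, 3))   # input-to-hidden weights
    W_rec = rng.normal(size=(4, 4))  # recurrent (feedback) weights

    # Feed-forward: the signal moves strictly toward the output;
    # no record of previous output values is kept.
    h_ff = np.tanh(W_in @ x)

    # Recurrent: each neuron receives an additional weighted input,
    # its own previous output, forming a cycle elaborated over time.
    h_prev = np.zeros(4)
    h_rec = np.tanh(W_in @ x + W_rec @ h_prev)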
Furthermore, neurons in a network can be fully or partially connected.
If a neuron in one layer is connected to all neurons in the subsequent
layer, then there is a full connection. A partially connected neuron,
by contrast, is connected to only some of the neurons in the subsequent
layer. Similar to biological neurons, artificial neurons can also receive
either excitatory or inhibitory inputs. Inhibition of neurons occurs in
the case of competitive learning, when the network chooses the output
with the highest probability and inhibits all the other neurons
(Agatonovic-Kustrin and Beresford, 2000).
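A minimal sketch of these two ideas (our own illustration, assuming a
simple winner-take-all rule as the form of competitive inhibition; the
mask-based partial connectivity is likewise an assumption):

    import numpy as np

    rng = np.random.default_rng(1)

    # Full connection: every neuron in one layer feeds every neuron
    # in the subsequent layer, so the weight matrix is dense.
    W_full = rng.normal(size=(4, 3))

    # Partial connection: a mask zeroes out the missing links, so
    # each neuron reaches only some neurons in the subsequent layer.
    mask = rng.random(size=(4, 3)) < 0.5
    W_partial = W_full * mask

    # Competitive (winner-take-all) inhibition: the neuron with the
    # highest activation keeps its output; all others are suppressed.
    def winner_take_all(activations):
        out = np.zeros_like(activations)
        winner = np.argmax(activations)
        out[winner] = activations[winner]
        return out

    print(winner_take_all(W_partial @ rng.normal(size=3)))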
Training of an ANN is done by adjustment of weight values to obtain
the best nonlinear relationship between parameters used as inputs and
outputs of the network. At the beginning of the training process, weights
between the neurons have random values. During the training phase,
input/output data pairs are presented to the network, and the network
searches for the input-output relationships. One cycle of input-output
presentation to the network is called an iteration (epoch). The process
of network training can be thought of as a search for the optimal weight
values that successfully convert inputs to outputs through sometimes
numerous iterations. This process is often called convergence (Sun et al.,
2003). During the process of weight adjustment (i.e. network
training), some of the interconnections are strengthened and some are
weakened, so that the neural network outputs a more accurate answer
(Agatonovic-Kustrin and Beresford, 2000).
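The source does not name a specific learning rule; as an illustrative
sketch only, a minimal gradient-descent loop in Python shows the random
initial weights, the repeated epochs, and the gradual strengthening and
weakening of connections described above (the toy data and all names
are our own):

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy input/output data pairs presented to the network.
    X = rng.normal(size=(20, 3))
    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # At the beginning of training, the weights have random values.
    W = rng.normal(size=(3, 1))

    # Each pass over the data is one iteration (epoch); the weights
    # are nudged so the network's answers become more accurate,
    # until the search converges.
    learning_rate = 0.5
    for epoch in range(500):
        out = sigmoid(X @ W)              # forward pass
        error = out - y                   # how wrong the answers are
        gradient = X.T @ (error * out * (1.0 - out)) / len(X)
        W -= learning_rate * gradient     # strengthen/weaken connections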
Once the optimal set of