Figure 5.1: Schematic representation of an artificial neuron that receives inputs x_i with varying weights w_i and, after summation, applies an activation function that produces its output y_k.
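In the caption's notation, the computation of a single neuron can be written as the following equation (the symbol f for the activation function is an assumption introduced here, as the caption does not name it):

\[
y_k = f\left(\sum_{i} w_i x_i\right)
\]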
Connections between neurons, corresponding to biological synapses,
are called weights and are especially important for learning and other
adaptation processes that occur in neural networks (Figure 5.2).
According to biological principles, a neuron in the ANN receives one or
more inputs and transmits one output that is copied and forwarded to
further units. The process of training an ANN corresponds to the learning process in the human brain: the network draws conclusions about future outcomes based on previous experience.
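As a minimal sketch of this neuron model (the function names, the example input and weight values, and the use of NumPy are illustrative assumptions, not taken from the text), a single unit can be written as a weighted sum of its inputs followed by an activation function:

```python
import numpy as np

def sigmoid(s):
    """A common activation function mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-s))

def artificial_neuron(x, w, activation=sigmoid):
    """Weighted sum of inputs x_i with weights w_i, followed by the activation."""
    s = np.dot(w, x)        # summation of the weighted inputs
    return activation(s)    # output y_k

# Illustrative example: three inputs with different weights
x = np.array([0.2, 0.7, 0.1])
w = np.array([0.4, -0.3, 0.9])
y = artificial_neuron(x, w)   # single output that would be forwarded to further units
```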
Neurons (units) in the artificial network are organized into layers. A
basic ANN consists of an input, hidden, and output layer of neurons.
Increasing the number of hidden layers and changing the way that
neurons are interconnected can lead to greater complexity of neural
networks. Input and output layers in the network serve to represent the
data that is fed into the network and received as the network result,
respectively, whereas the units in hidden layers apply functions and
perform calculations on the presented data.
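To make the layered arrangement concrete, here is a hedged sketch of a single forward pass through a basic three-layer network; the layer sizes, weight matrices, and random initialization below are illustrative assumptions rather than values from the text:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def forward(x, w_hidden, w_output):
    """One forward pass: input layer -> hidden layer -> output layer."""
    hidden = sigmoid(w_hidden @ x)      # hidden units apply functions to the presented data
    return sigmoid(w_output @ hidden)   # output layer returns the network result

# Illustrative shapes: 3 input units, 4 hidden units, 1 output unit
rng = np.random.default_rng(0)
x = rng.random(3)                       # data fed into the network
w_hidden = rng.normal(size=(4, 3))      # weights between input and hidden layer
w_output = rng.normal(size=(1, 4))      # weights between hidden and output layer
y = forward(x, w_hidden, w_output)
```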
It has already been emphasized that ANNs are nonlinear computational techniques. Prior to analysis, the data are scaled (most software tools scale the data to the range 0 to 1) so that different input parameters become comparable in terms of their influence on the outputs. Signals from the input and hidden layers are transferred forward to the subsequently positioned hidden and output layers. Neurons in the hidden layer apply an activation function, most commonly the sigmoid function, to the transmitted signals they receive. In each neuron, the input signals are multiplied by the corresponding weights and summed before the activation function is applied.
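As a sketch of the 0-to-1 scaling step and of the sigmoid activation mentioned above (column-wise min-max scaling is one common way such scaling is done; the exact procedure used by any particular software tool may differ, and the example data are invented):

```python
import numpy as np

def min_max_scale(data):
    """Scale each input parameter (column) to the 0-1 range."""
    lo = data.min(axis=0)
    hi = data.max(axis=0)
    return (data - lo) / (hi - lo)

def sigmoid(s):
    """Sigmoid activation applied by hidden-layer neurons to their weighted sums."""
    return 1.0 / (1.0 + np.exp(-s))

# Illustrative data: five samples of two input parameters on very different scales
data = np.array([[1.0, 200.0],
                 [2.0, 450.0],
                 [3.0, 300.0],
                 [4.0, 100.0],
                 [5.0, 500.0]])
scaled = min_max_scale(data)   # both columns now lie between 0 and 1
```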