Artificial Neural Networks (ANNs). Artificial Neural Networks (ANNs) are non-linear statistical modeling tools able to model complex relationships between inputs and outputs by means of a training procedure during which they adapt themselves to the data. They consist of massively parallel, interconnected, adaptive processing elements. From this perspective they are very attractive in olfactory signal analysis, since, to some extent, they mimic the olfactory system: the processing elements represent the biological olfactory cells or neurons, while their interconnections correspond to the synaptic links. In an ANN, the processing elements are organized in three distinct groups: input, hidden and output layer neurons. The input layer corresponds to the input data (the feature matrix), while the output neurons correspond to each of the considered classes; the hidden layers perform the computation, and the number of hidden layers, as well as the number of neurons in each layer, must be determined experimentally. Each neuron computes a weighted sum of its inputs and applies a nonlinear transfer function to it; the result is then propagated towards the output layer. The learning process in ANNs starts by providing them with a number of sample inputs together with their corresponding outputs (supervised learning). During the learning phase, the weights are adjusted so as to minimize the difference between the output produced by the network and the target one. Once the network is trained, it can be used to predict the class membership of new samples.
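As an illustration of the computation just described, the following Python/NumPy sketch implements one layer of sigmoidal processing elements; the variable names and dimensions are illustrative assumptions, not taken from this work:

    import numpy as np

    def sigmoid(z):
        # Sigmoidal transfer function: squashes the weighted sum into (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def layer_output(x, W, b):
        # Each row of W holds the weights of one neuron; every neuron computes
        # the weighted sum of its inputs plus a bias, then applies the
        # nonlinear transfer function.
        return sigmoid(W @ x + b)

    x = np.array([0.2, 0.7, 0.1])        # feature vector of one sample
    W = np.random.randn(4, 3) * 0.1      # 4 neurons, 3 inputs each
    b = np.zeros(4)
    print(layer_output(x, W, b))         # activations passed to the next layer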
It can be demonstrated that an ANN with a sufficient number of sigmoidal neurons in the hidden layers is able to approximate any nonlinear function on a compact set. Moreover, ANNs asymptotically (i.e., with an infinite number of examples) approximate the a-posteriori class probabilities, as Bayesian classifiers do [13]. The issue of generalization must be addressed, and early stopping and weight decay are two useful remedies [13].
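These two remedies can be sketched as follows; for brevity a linear model stands in for the network, since the mechanics of weight decay and early stopping are the same, and the data, rates and patience threshold are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    X_tr = rng.normal(size=(80, 3)); X_val = rng.normal(size=(20, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y_tr = X_tr @ w_true + rng.normal(scale=0.1, size=80)
    y_val = X_val @ w_true + rng.normal(scale=0.1, size=20)

    w = np.zeros(3)
    lr, decay, patience = 0.01, 1e-3, 10
    best_val, best_w, wait = np.inf, w.copy(), 0

    for epoch in range(1000):
        grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
        w -= lr * (grad + decay * w)          # weight decay shrinks the weights
        val_loss = np.mean((X_val @ w - y_val) ** 2)
        if val_loss < best_val:               # track the best validation epoch
            best_val, best_w, wait = val_loss, w.copy(), 0
        else:
            wait += 1
            if wait >= patience:              # early stopping: validation stalled
                break

    w = best_w                                # restore the best weights found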
One of the main drawbacks of ANNs is that the best architecture and parameters cannot be chosen a priori: they must be determined experimentally, and they strongly affect the success of the training process, both in terms of rate of convergence and of generalization. A possible solution, discussed in detail later, consists of using a Genetic Algorithm (GA) to automatically determine a suitable network architecture and the best set of parameters to be used.
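The idea can be sketched as a simple GA evolving, say, the hidden-layer size and the learning rate; the fitness function below is a toy placeholder standing in for the actual training-and-validation run, and all ranges and rates are illustrative assumptions:

    import random

    def fitness(genome):
        # Placeholder: in practice, train an MLP with these settings and
        # return its validation accuracy. Here a toy function with a known
        # optimum (12 hidden neurons, learning rate 0.05) is used instead.
        n_hidden, lr = genome
        return -(n_hidden - 12) ** 2 - 100 * (lr - 0.05) ** 2

    def mutate(genome):
        n_hidden, lr = genome
        if random.random() < 0.5:
            n_hidden = max(1, n_hidden + random.choice([-2, -1, 1, 2]))
        else:
            lr = min(1.0, max(1e-4, lr * random.uniform(0.5, 2.0)))
        return (n_hidden, lr)

    def crossover(a, b):
        return (a[0], b[1])              # child takes one gene from each parent

    pop = [(random.randint(1, 40), 10 ** random.uniform(-4, 0))
           for _ in range(20)]
    for generation in range(30):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]               # selection: keep the fittest half
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(10)]

    print("best configuration found:", max(pop, key=fitness))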
The interconnection topology and the learning rules of the neurons determine the type of a particular network and its performance. The ANN topology most widely used in electronic noses is the feed-forward neural network, in which the information moves only in the forward direction, from the input nodes, through the hidden nodes (if any), to the output nodes; there are no cycles or loops in the network. Depending on the specific architecture, several types of feed-forward neural networks can be distinguished:
- Multi-Layer Perceptrons (MLP): MLPs are the most popular and simplest type of feed-forward neural network, with one or more layers of neurons between the input and output neurons. The hidden units are connected to either the inputs or the outputs by weighted connections. An MLP is able to learn complex nonlinear mappings by adjusting the weights in the network with a gradient-descent technique known as back-propagation. This is the model used in this work; in particular, the implemented MLP has one hidden layer (a minimal sketch of such a network follows the list).
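The following Python/NumPy sketch shows a one-hidden-layer MLP trained by back-propagation; the toy data, layer sizes and learning rate are illustrative assumptions and do not reproduce the network actually implemented in this work:

    import numpy as np

    rng = np.random.default_rng(42)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy two-class problem (XOR-like): 2 features, labels in {0, 1}.
    X = rng.uniform(-1, 1, size=(200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

    n_in, n_hid, n_out = 2, 8, 1
    W1 = rng.normal(scale=0.5, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.5, size=(n_hid, n_out)); b2 = np.zeros(n_out)
    lr = 0.5

    for epoch in range(5000):
        # Forward pass: input -> hidden -> output, sigmoid at each layer.
        H = sigmoid(X @ W1 + b1)
        out = sigmoid(H @ W2 + b2)
        # Backward pass: propagate the output error back through the network.
        d_out = (out - y) * out * (1 - out)   # squared-error gradient at output
        d_hid = (d_out @ W2.T) * H * (1 - H)  # error assigned to hidden units
        W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(axis=0)

    pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5
    print("training accuracy:", (pred == (y > 0.5)).mean())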