networks and parallel distributed processing are also called ANNs.
ANNs, as defined by Schalkoff (1997), are networks comprising a number of interconnected units, each unit having input/output characteristics that implement a local computation or function. They are also a functional abstraction of the biological neural structures of the central nervous system (Anderson, 1983; Akkurt et al., 2003), and they can exhibit a surprising number of the characteristics of the human brain, for example, learning from experience and generalizing from previous examples to solve new problems (Oztas et al., 2006).
An overview of ANN models has been provided over the years by various authors (Rumelhart et al., 1986a,b; Lippmann, 1987; Fausett, 1994; Taylor, 1999), who have conducted research on the mathematical description of ANN models and on training algorithms, such as supervised and unsupervised learning. The literature on ANN models also shows that the development and application of ANNs is not limited to any specific area.
Among artificial neural structures, the multilayer perceptron neural network stands out. The growing number of research applications of ANNs in recent years has motivated us to describe this model, with the aim of promoting its application in animal science. It can be used for curve fitting, as in regression and classification methods, and for prediction in time series.
[Fig. 7.1. The first neuron (perceptron): inputs x_1, x_2, ..., x_n are weighted by w_1, w_2, ..., w_n and summed, v(i) = Σ w_i x_i; the output is s(i) = f(v(i)).]
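The computation in Fig. 7.1 can be sketched in a few lines of Python (a minimal illustration; the step activation and the numeric values below are assumptions for demonstration, not values from the text):

```python
def neuron_output(inputs, weights, f):
    """Weighted sum v(i) = sum of w_i * x_i, then activation s(i) = f(v(i))."""
    v = sum(w * x for w, x in zip(weights, inputs))
    return f(v)

# Step activation, as used in the original perceptron
def step(v):
    return 1 if v >= 0 else 0

# Example: three inputs, three weights; weighted sum is 0.5 + 0 + 0.3 = 0.8
print(neuron_output([1, 0, 1], [0.5, -0.2, 0.3], step))  # -> 1
```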
After several discouraging years, ANNs re-emerged in the 1980s with the progress of computer technology. New key procedures were also discovered, allowing the advancement of ANNs. They are now acknowledged as a modern approach that can be applied in all fields of knowledge and are consequently the subject of intensive theoretical and applied development.
Multilayer Perceptron Neural Networks
The topology of a multilayer perceptron neural network consists of an input layer, one or more hidden layers and an output layer. When the error between the estimated and the actual values does not satisfy a minimum acceptable criterion, it is back-propagated and distributed to the estimated values of the parameters as many times as necessary until the error becomes acceptable. An example of a multilayer perceptron network topology is shown in Fig. 7.2; it consists of an input layer with four input variables, two hidden layers with three neurons each, and an output layer.
As a network training method, the back-propagation algorithm (Rumelhart et al., 1986a) can effectively train the network for non-linear problems, and this has stimulated a torrent of research on and applications for neural networks.
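The training loop described above can be sketched with NumPy (a minimal illustration of back-propagation on the XOR problem; the layer sizes, learning rate, random seed and epoch count are illustrative assumptions, not values from the text):

```python
import numpy as np

# XOR inputs and targets: a classic non-linear problem that a single
# perceptron cannot solve but a multilayer network can
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)   # hidden layer (4 neurons)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)   # output layer

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass: compute estimated outputs
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: the error is back-propagated and distributed
    # to every weight, which is then adjusted
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(f"error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The loop repeats the forward and backward passes "as many times as necessary", here a fixed number of epochs; in practice training stops once the error falls below the acceptable criterion.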
Much progress has been achieved in improving the performance and understanding of neural networks (Hopfield, 1982; Hinton and Sejnowski, 1986; Lippmann, 1987; Peterson and Anderson, 1987; Chen, 1991; Galan-Marin and Perez, 2001; Manry et al., 2001; Oh and Pedrycz, 2002; Panchapakesan et al., 2002).
The First Neuron - Perceptron
The first structure built in an attempt to mimic the brain was described by McCulloch and Pitts (1943) as a very simple artificial neuron called the perceptron (Fig. 7.1). It consisted of multiple inputs (dendrites) and a single output (axon). Although the first results seemed promising, perceptrons had many limitations: Minsky and Papert (1969) showed that a perceptron is able to learn to differentiate only between two linearly separable classes. Nevertheless, this simple but important structure marked the birth of neural networks and artificial intelligence.
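The limitation noted by Minsky and Papert can be seen directly with the perceptron learning rule, sketched below (the AND training data and the learning rate are illustrative assumptions): the rule converges on the linearly separable AND function, whereas no choice of weights would let the same single-neuron structure separate XOR.

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Perceptron rule: nudge each weight by the prediction error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND is linearly separable, so the rule converges to a correct boundary
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0 for x, _ in and_data]
print(preds)  # -> [0, 0, 0, 1]
```

Running the same rule on XOR data would cycle forever without separating the classes, which is precisely the limitation that motivated the multilayer networks described above.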
 
 