In order to obtain relevant and reliable results from ANN models, it is recommended that the number of experimental runs be at least 10 times the number of inputs or, if this is not feasible, at least 2 to 3 times the number of inputs (Sun et al., 2003). It is, therefore, practical to first conduct a screening experimental design (Section 3.2.1) to select the most significant input variables that influence the output properties, since this reduces the number of ANN inputs (and, at the same time, the number of examples needed for network training).
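As a simple illustration (not from the source), the sizing rule of thumb above can be written as a small helper; the function name and defaults are our own:

```python
def recommended_runs(n_inputs, ideal_factor=10, minimum_factor=2):
    """Heuristic training-set sizing (Sun et al., 2003): ideally 10x
    the number of ANN inputs, or at least 2-3x if that many
    experimental runs are not feasible."""
    return {"ideal": ideal_factor * n_inputs,
            "minimum": minimum_factor * n_inputs}

# A screening design that narrows 8 candidate factors down to
# 3 significant inputs cuts the ideal number of runs from 80 to 30.
print(recommended_runs(8))   # {'ideal': 80, 'minimum': 16}
print(recommended_runs(3))   # {'ideal': 30, 'minimum': 6}
```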
There are several ways to optimize an ANN. One possible approach is the application of GAs. GAs (described in more detail in Section 5.4) provide an intelligent exploitation of a random search within a defined search space to discover good solutions rapidly for difficult high-dimensional problems (Sun et al., 2003). GAs can be employed in two ways: to optimize the ANN topology, and/or to optimize the input parameters so as to obtain the desired output parameters. Other optimizers that can be used include Monte Carlo simulations, particle swarm optimization, simulated annealing, etc.
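The second use of a GA, searching the input space for values that drive a trained model toward a desired output, can be sketched as below. The stand-in model, the target value, and all parameter settings are illustrative assumptions, not taken from the source:

```python
import random

def model(x):
    # Stand-in for a trained ANN: maps two inputs to one response.
    return 3.0 * x[0] - 2.0 * x[1] ** 2

TARGET = 1.0  # desired output value (hypothetical)

def fitness(x):
    # Higher is better: penalize distance from the desired output.
    return -abs(model(x) - TARGET)

def ga(pop_size=40, generations=60, bounds=(-2.0, 2.0)):
    rnd = random.Random(0)
    pop = [[rnd.uniform(*bounds) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                       # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rnd.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
            if rnd.random() < 0.3:                           # mutation
                i = rnd.randrange(2)
                child[i] += rnd.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print(model(best))  # close to TARGET
```

Because the best individual is always retained (elitism), the best fitness never degrades from one generation to the next.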
Unsupervised learning
In contrast to supervised learning, where both input and output data are presented to the network, in unsupervised learning the whole data set is presented to the network, which then searches for relationships among the data. This approach is mainly used for classification purposes. Unsupervised learning is based only on the internal structure of the network. During the unsupervised learning process, the neurons may compete, cooperate, or both. In competitive learning, neurons are grouped in such a way that if one neuron responds more strongly to a particular input, it suppresses or inhibits the output of the other neurons in the group. In cooperative learning, the neurons within each group work together to reinforce their output (Agatonovic-Kustrin and Beresford, 2000). Unsupervised learning has been adopted in the Hopfield-type and Boltzmann-machine network structures (Ichikawa, 2003).
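The competitive (winner-take-all) scheme described above can be sketched in a few lines. This one-dimensional toy example, including its function name and parameter values, is our own illustration rather than an algorithm from the source:

```python
import random

def competitive_learning(data, n_neurons=2, lr=0.1, epochs=50, seed=0):
    """Winner-take-all learning: for each sample, only the neuron whose
    weight is closest responds, and only its weight is moved toward the
    input; the other neurons in the group remain inhibited."""
    rnd = random.Random(seed)
    weights = [rnd.uniform(min(data), max(data)) for _ in range(n_neurons)]
    for _ in range(epochs):
        for x in data:
            winner = min(range(n_neurons), key=lambda i: abs(weights[i] - x))
            weights[winner] += lr * (x - weights[winner])  # move winner only
    return sorted(weights)

# Two well-separated 1-D clusters; each neuron settles near one centre.
samples = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
print(competitive_learning(samples))
```

Because losing neurons are never updated, each weight ends up tracking the mean of the inputs it wins, which is why this scheme behaves like a clustering method.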
Since the main task of unsupervised learning networks is classification, they are based on data clustering methods. All clustering techniques can be roughly classified into two key categories: hierarchical, which partition the data by successively applying the same procedure to clusters formed during previous iterations; or non-hierarchical, which determine