Fig. 3.1. ANN architecture.
of the node. Usually, f_i is nonlinear, e.g., a sigmoid or Gaussian function.
In general, the topological structure, i.e., the way the connections are
made between the nodes of different layers, together with the transfer
functions used, determines the architecture of the ANN. Learning in ANN
can broadly be classified into three groups: supervised, unsupervised, and
reinforcement learning. Supervised learning makes a direct comparison
between the actual output of an ANN and the desired/target output. It is
generally formulated as the minimization of an error function, such as the
total mean square error (MSE) between the actual and desired outputs,
summed over all available data. To minimize the MSE, the gradient-descent-based
backpropagation (BP) algorithm [31] is used to adjust the connection
weights of the ANN iteratively.
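A minimal sketch of this supervised scheme, assuming a small 2-4-1 network with sigmoid transfer functions trained on the XOR task (the layer sizes, learning rate, and task are illustrative assumptions, not taken from the text):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
T = np.array([[0], [1], [1], [0]], dtype=float)              # desired/target outputs

W1 = rng.normal(size=(2, 4))  # input -> hidden connection weights
W2 = rng.normal(size=(4, 1))  # hidden -> output connection weights
lr = 0.5                      # learning rate (illustrative)

losses = []
for epoch in range(5000):
    # Forward pass: compute the actual output of the ANN.
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)

    # Error function: total MSE between actual and desired outputs.
    E = Y - T
    losses.append(float(np.mean(E ** 2)))

    # Backward pass: propagate gradients through the sigmoid derivatives.
    dY = E * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)

    # Gradient-descent updates of the connection weights.
    W2 -= lr * H.T @ dY
    W1 -= lr * X.T @ dH
```

Each iteration compares the network output with the target, backpropagates the resulting error gradient, and nudges the weights downhill on the MSE surface.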
Reinforcement learning may be considered a special case of supervised
learning in which the exact desired output is unknown; learning is instead
based only on a signal indicating whether the estimated output is correct.
Unsupervised learning is mostly based on the correlations among the input
data; no information regarding the correctness of the estimated output is
available for learning. The mechanism used to update the connection weights
is known as the learning rule; popular examples include the delta rule, the
Hebbian rule, the anti-Hebbian rule, and the competitive learning rule [32].
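As an unsupervised example, the Hebbian rule strengthens a weight in proportion to the product of its input and the unit's output, delta_w = eta * x * y, with no target signal involved (the single linear unit, input dimension, and learning rate below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.1                      # learning rate (illustrative)
w = rng.normal(size=3)         # weights of a single linear unit
initial_norm = np.linalg.norm(w)

for _ in range(100):
    x = rng.normal(size=3)     # input pattern (no target output exists)
    y = float(w @ x)           # unit output
    w += eta * y * x           # Hebbian update; anti-Hebbian would subtract
```

Because the update reinforces whatever correlation the unit already detects, the weight vector grows along the dominant input directions; the anti-Hebbian rule simply reverses the sign of the update.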
The ideas and principles of natural evolution have been used to develop
population-based stochastic search methods known as evolutionary
algorithms (EAs), which include evolution strategies (ES) [33,34],
evolutionary programming (EP) [13,35,36], and genetic algorithms
(GAs) [11,37].
Population based search
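A generational GA, one of the EAs listed above, can be sketched as follows; the bit-string encoding, one-max fitness function, tournament selection, and parameter values are illustrative assumptions, not taken from the text:

```python
import random

random.seed(0)
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(ind):
    return sum(ind)  # "one-max" toy objective: count the 1-bits

def tournament(pop):
    # Pick the fittest of three randomly sampled individuals.
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, GENES)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):
    return [g ^ 1 if random.random() < rate else g for g in ind]

# Random initial population of bit strings.
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

# Generational loop: select parents, recombine, mutate.
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

best = max(pop, key=fitness)
```

The population-based search evaluates many candidate solutions in parallel, with selection pressure driving the stochastic variation operators toward fitter regions of the search space.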