Algorithm 3.3. Evolution of architecture
(1) Decode each individual into an architecture.
(2) Train each ANN with the decoded architecture by the learning rule
starting from different sets of random initial connection weights and
learning rule parameters.
(3) Compute the fitness of each individual according to the above training
result and other performance criteria such as the complexity of the
architecture.
(4) Select parents from the population based on their fitness.
(5) Apply search operators to the parents and generate offspring, which
form the next generation.
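The loop in Algorithm 3.3 can be sketched as follows. This is an illustrative toy, not a reference implementation: `decode`, the noisy evaluation in `train_and_evaluate`, the genome length, and the tournament-style selection are all assumptions standing in for the problem-specific choices the text leaves open.

```python
import random

POP_SIZE = 10
GENOME_LEN = 8   # bits encoding an architecture (illustrative)
TRIALS = 3       # random weight initialisations per individual

def decode(genome):
    # Step (1): map the bit string to an architecture; here it is
    # reduced to a single number of hidden nodes.
    return 1 + sum(genome)

def train_and_evaluate(hidden_nodes):
    # Steps (2)-(3): training from several random initial weight sets is
    # simulated by noisy evaluations; the fitness also penalises
    # architectural complexity, as the text suggests.
    errors = [random.random() for _ in range(TRIALS)]
    return -(sum(errors) / TRIALS) - 0.01 * hidden_nodes

def one_generation(population):
    fitness = [train_and_evaluate(decode(g)) for g in population]

    # Step (4): select parents based on fitness (binary tournament here).
    def select():
        a, b = random.sample(range(len(population)), 2)
        return population[a] if fitness[a] >= fitness[b] else population[b]

    # Step (5): a bit-flip mutation operator generates the offspring.
    offspring = []
    for _ in range(len(population)):
        child = list(select())
        i = random.randrange(GENOME_LEN)
        child[i] ^= 1
        offspring.append(child)
    return offspring

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
pop = one_generation(pop)
print(len(pop))  # the population size is preserved across generations
```

Any EA can play this role; only the decoding step and the fitness definition tie the loop to architecture evolution.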
3.2.3. The evolution of node transfer functions
The transfer function is often assumed to be the same for all the nodes in
an ANN, at least for all the nodes in the same layer, and is predefined by
human experts. However, the transfer function may differ from node to
node and can have a significant impact on an ANN's performance. 80-82
Stork et al. 83 first applied EAs to the evolution of both topological
structures and node transfer functions. The transfer function was specified
in the structural genes in their genotypic representation. A simpler approach
for evolving both topological structures and node transfer functions was
adopted by White and Ligomenides, 72 i.e., in the initial population, 80%
of the nodes in the ANN used the sigmoid transfer function and 20%
used the Gaussian transfer function. The optimal mixture between these
two transfer functions evolved automatically, but the parameters of the
two functions did not evolve.
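A minimal sketch of such a mixed initial population, assuming the 80/20 split described above (the node representation and layer size are illustrative, not White and Ligomenides' actual encoding):

```python
import math
import random

def sigmoid(x):
    # Standard logistic transfer function.
    return 1.0 / (1.0 + math.exp(-x))

def gaussian(x):
    # Gaussian transfer function with fixed (non-evolved) parameters,
    # matching the text's remark that the function parameters stay fixed.
    return math.exp(-x * x)

def random_node():
    # 80% of nodes get the sigmoid, 20% the Gaussian transfer function.
    return sigmoid if random.random() < 0.8 else gaussian

random.seed(1)
hidden_layer = [random_node() for _ in range(100)]
n_sigmoid = sum(1 for f in hidden_layer if f is sigmoid)
print(n_sigmoid, 100 - n_sigmoid)
```

Evolution then only reshuffles which function each node carries; the mixture ratio emerges from selection rather than being fixed by hand after initialisation.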
Liu and Yao 77 used EP to evolve ANNs with both sigmoidal and
Gaussian nodes, where the whole ANN grows or shrinks by adding or
deleting a node. Hwang et al. 79 evolved the ANN topology, node transfer
functions, and connection weights for projection neural networks.
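The EP-style structural mutation just described, growing or shrinking the network one node at a time, can be sketched as below. The node representation (a type tag per hidden node), the 50/50 grow/shrink choice, and the size bounds are assumptions for illustration, not Liu and Yao's implementation.

```python
import random

def mutate_structure(nodes, max_nodes=20, min_nodes=1):
    """One structural mutation: add or delete a single hidden node."""
    nodes = list(nodes)
    if random.random() < 0.5 and len(nodes) < max_nodes:
        # Grow: append a new node with a randomly chosen transfer function.
        nodes.append(random.choice(["sigmoid", "gaussian"]))
    elif len(nodes) > min_nodes:
        # Shrink: delete one randomly chosen hidden node.
        nodes.pop(random.randrange(len(nodes)))
    return nodes

random.seed(2)
net = ["sigmoid", "sigmoid", "gaussian"]  # initial hidden layer
for _ in range(5):
    net = mutate_structure(net)
print(len(net))
```

Because EP relies on mutation alone, such add/delete operators are the sole source of architectural variation; there is no crossover between networks.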
3.2.4. Evolution of learning rules
The training algorithm, or learning rule, used to adjust an ANN's
connection weights depends on the type of architecture under investigation.
Different variants of the Hebbian learning rule have been proposed to deal
with different architectures, but designing an optimal learning rule becomes
very difficult with no prior knowledge of the network's architecture.