- Constructive approach, an approach that starts with a minimal network
architecture and grows it stepwise by adding new neurons and new
interconnection links between the neurons, under permanent evaluation of
network performance, until the optimal network structure has been achieved.
- Destructive approach, which starts with a "large enough" architecture and
reduces it stepwise by removing individual neurons and the related links
between them, under continuous evaluation of network performance, until the
optimal network structure has been achieved.
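A constructive search of this kind can be sketched in a few lines. The sketch below is illustrative only: the evaluate function stands in for training a candidate network and measuring its validation error (here it is faked with a simple error curve), and the patience-based stopping rule is an assumption, not part of the approaches described above.

```python
import random

random.seed(0)

def evaluate(n_hidden):
    # Stand-in fitness: in practice this would train the candidate
    # network and return its validation error (lower is better).
    # Here we fake a U-shaped error curve with a minimum near 6 neurons.
    return (n_hidden - 6) ** 2 + random.uniform(0.0, 0.5)

def constructive_search(max_hidden=20, patience=3):
    """Grow the hidden layer one neuron at a time, keeping the best size."""
    best_size, best_error = 1, evaluate(1)
    stall = 0
    for size in range(2, max_hidden + 1):
        error = evaluate(size)
        if error < best_error:
            best_size, best_error = size, error
            stall = 0
        else:
            stall += 1
            if stall >= patience:  # stop once further growth no longer helps
                break
    return best_size, best_error

size, err = constructive_search()
print(size, err)
```

The destructive approach is the mirror image: start from max_hidden and shrink while performance does not degrade. Note how either loop only ever visits neighbouring sizes, which is exactly the limitation discussed next.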
Both approaches, however, explore through their incremental changes of the
network structure only a limited (neighbouring) region of the topological
space, instead of the entire search space of possible network structures. This
deficiency clearly restricts the quality of the network structure that can be
developed.
In the last decade or so, a way out of developing networks by arbitrarily
adding and deleting neurons and connection weights has been found in more
systematic evolutionary approaches. During this period, researchers have
succeeded in elaborating evolutionary methods capable of covering most of the
basic requirements in the development, training, and application of neural
networks. Using the new methods, the following network evolution issues have
been supported:
- evolving optimal interconnection weights
- evolving global network architecture
- evolving pure network architecture
- evolving activation functions
- evolutionary network training.
This is the main subject of the sections that follow.
8.1.1.1 Evolving Connection Weights
Traditionally, optimal values of interconnection weights have from the very
beginning been determined through network training, usually with a gradient-
based parameter-tuning algorithm such as backpropagation. Yet the substantial
risk that all gradient-based algorithms run of becoming trapped in a local
minimum was reason enough to avoid their use in optimization problems and to
look for gradient-free search algorithms.
Jurick (1988) suggested that the network training process be understood -
within the frame of the given network architecture and the objectives of the
learning task - as an evolutionary process through which the optimal values of
the connection weights can be determined. Montana and Davis (1989) decided to
use genetic algorithms, instead of the backpropagation algorithm, to search
for the optimal weight values. Using the new search strategy, they were able
to find globally optimal values of the connection weights, free of the
limitations of gradient-based search. The results achieved were confirmed by
Kitano (1990), who also accelerated the convergence of network training using
an improved version of the genetic approach.
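The idea of evolving the connection weights of a fixed network with a genetic algorithm rather than backpropagation can be sketched as follows. Everything here is an illustrative assumption, not the actual method of Montana and Davis: the 2-2-1 topology, the XOR task, truncation selection, one-point crossover, and Gaussian mutation were chosen merely to make the sketch self-contained.

```python
import math
import random

random.seed(1)

# XOR data: a small task where gradient-free weight search is easy to show.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

N_WEIGHTS = 9  # 2x2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def forward(w, x):
    """A 2-2-1 feedforward net with tanh hidden units and a sigmoid output."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    o = w[6] * h0 + w[7] * h1 + w[8]
    return 1.0 / (1.0 + math.exp(-o))

def fitness(w):
    """Negative sum of squared errors over the XOR set (higher is better)."""
    return -sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(pop_size=60, generations=200, sigma=0.5):
    """Evolve weight vectors directly; no gradients are computed anywhere."""
    pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_WEIGHTS)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_WEIGHTS)       # Gaussian mutation of one gene
            child[i] += random.gauss(0, sigma)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(forward(best, x)) for x, _ in DATA])  # ideally [0, 1, 1, 0]
```

Because fitness is evaluated only by running the network forward, the search cannot get stuck on a gradient plateau; its cost is instead the many fitness evaluations per generation.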