population) though real-valued encoding and representations are more often used in
modern applications. The initial population then evolves by generating new-
generation individuals via crossover of two randomly selected parent strings, and/or
the mutation of some random bits. Whether a new individual is retained is
determined by its fitness, which is linked to the objective function.
The advantages of genetic algorithms over traditional optimization algorithms
are their ability to deal with complex problems and their inherent parallelism. GA
search methods can handle various types of optimization, whether the objective
functions are stationary or non-stationary (time-dependent), linear or nonlinear,
continuous or discontinuous, or even corrupted by random noise. As individuals in
a population act like independent agents, each subset of the population can explore
the search space in many directions simultaneously. This feature makes genetic
algorithms well suited to parallel implementation.
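The evolution loop described above can be sketched in a few lines. The following is a minimal, illustrative binary-encoded GA; the tournament selection, one-point crossover, and the specific parameter values (population size, rates) are assumed choices for the sketch, not prescribed by the text.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02):
    """Minimal binary-encoded GA maximizing `fitness` over bit strings."""
    # Random initial population of binary strings
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]

        def select():
            # Tournament selection: the fitter of two random individuals wins
            i, j = random.randrange(pop_size), random.randrange(pop_size)
            return pop[i] if scores[i] > scores[j] else pop[j]

        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # One-point crossover of two randomly selected parent strings
            if random.random() < crossover_rate:
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Mutation: flip some random bits
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Example: maximize the number of 1-bits (the "one-max" toy problem)
random.seed(0)  # fixed seed so the sketch is reproducible
best = genetic_algorithm(fitness=sum)
```

Because each fitness evaluation is independent, the inner loop over the population is exactly the part that parallelizes naturally.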
3.2.2 Simulated Annealing
Simulated annealing is probably the best example of modern metaheuristic algo-
rithms. It was first developed by Kirkpatrick et al. in 1983 [10], inspired by the an-
nealing process of metals during heat treatment and also by the Metropolis algorithm
for Monte Carlo simulations. The basic idea of the SA algorithm is similar to dropping
a bouncing ball over a landscape. As the ball bounces and loses energy, it will settle
down at a certain local minimum. If the ball is allowed to bounce long enough and lose
energy slowly enough, the ball may eventually fall into the globally lowest location, and
hence the minimum will be reached. Of course, we can use not only a single ball (stan-
dard simulated annealing) but also multiple balls (parallel simulated annealing).
The optimization process typically starts from an initial guess at a high-energy
state. It then moves randomly to another location. The move is accepted if the new
state has lower energy, that is, if the solution improves (a lower value of the objective
function for minimization). However, even if the solution is not improved, it is still
accepted with a probability of
p = exp[-δE/(kT)],                                                    (8)
which is a Boltzmann-type probability distribution. Here, T is the temperature of the
system and k is the Boltzmann constant, which can be set to one for simplicity. The
energy difference δE is often related to the objective function f(x) to be optimized.
The trajectory in simulated annealing is a piecewise path, and it forms a Markov
chain because the new state (new solution) depends solely on the current
state/solution and the transition probability p.
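Putting the acceptance rule of Eq. (8) into a loop gives the basic algorithm. The sketch below minimizes a scalar function; the geometric cooling schedule, step size, and starting temperature are assumed choices for illustration, not part of the original description.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, T0=10.0, cooling=0.95, iters=2000):
    """Minimal SA minimizing f(x), with the Boltzmann constant k set to one."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        # Diversification: propose a random move from the current state
        x_new = x + random.uniform(-step, step)
        delta_E = f(x_new) - fx  # energy difference δE
        # Accept improvements outright; accept worse moves with
        # Boltzmann probability p = exp(-δE/T), as in Eq. (8)
        if delta_E < 0 or random.random() < math.exp(-delta_E / T):
            x, fx = x_new, fx + delta_E
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling  # geometric cooling schedule (an assumed choice)
    return best, fbest

# Example: minimize f(x) = x^2 starting from x = 5
random.seed(0)  # fixed seed so the sketch is reproducible
best, fbest = simulated_annealing(lambda x: x * x, 5.0)
```

Note that the next state is generated from the current state alone, which is exactly the Markov-chain property described above.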
Here the diversification via randomization produces new solutions (locations).
Whether a new solution is accepted or not is determined by the probability. If T is
too low (T → 0), then any move with δE > 0 (a worse solution) will rarely be accepted
as p → 0, and the diversity of the solutions is consequently limited. On the other hand,
if T is too high, the system is at a high-energy state and most new changes will be
accepted. However, the minima are not easily reached. Thus, the temperature T is an
essential controlling parameter for the balance of diversification and intensification.
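The effect of the temperature on this balance is easy to see numerically. For a fixed worse move (δE = 1, k = 1), evaluating Eq. (8) at a few temperatures shows the two regimes; the specific temperature values are chosen only for illustration.

```python
import math

# Acceptance probability p = exp(-δE/T) for a fixed worse move (δE = 1, k = 1)
delta_E = 1.0
for T in (0.01, 1.0, 100.0):
    p = math.exp(-delta_E / T)
    # Low T: p ≈ 0, worse moves rejected (intensification).
    # High T: p ≈ 1, almost everything accepted (diversification).
    print(f"T = {T:6.2f}: p = {p:.4f}")
```

A cooling schedule simply sweeps T from the high-acceptance regime down to the low-acceptance regime, shifting the search from exploration to refinement.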