based on the experience of the whole population. Examples of population-based metaheuristics are Genetic Algorithms, which are an instance of Evolutionary Algorithms, Ant Colony Optimization, and Particle Swarm Optimization. Different kinds of metaheuristics exhibit varying performance on specific types of problems. They differ w.r.t. optimization speed and solution quality. A metaheuristic's performance depends on its configuration. Finding a good parameter set is a non-trivial task and is often based on a priori knowledge about the objective function and the problem. Setting up a metaheuristic with a standard parameter set lets the optimization find a decent solution, but using a parameter set that is adapted to the specific objective function may lead to even better results. In this paper we focus on PSO and try to find features characterizing the objective function in order to select an adequate parameter configuration for this metaheuristic. The optimization behavior of the particles is based on the objective function, and we try to identify relevant information about the function. In the following section we give a brief introduction to particle swarm optimization.
2.1 Particle Swarm Optimization
Particle Swarm Optimization (PSO) is inspired by the social behavior of flocks of birds
and shoals of fish. A number of simple entities, the particles, are placed in the domain of
definition of some function or problem. The fitness (the value of the objective function)
of each particle is evaluated at its current location. The movement of each particle is de-
termined by its own fitness and the fitness of particles in its neighborhood in the swarm.
PSO was first introduced in [8]. The results of one decade of research and improve-
ments to the field of PSO were recently summarized in [3], recommending standards
for comparing different PSO methods. Our definition is based on [3]. We aim at continuous optimization problems in a search space $S$ defined over the finite set of continuous decision variables $X_1, X_2, \ldots, X_n$. Given the set $\Omega$ of conditions on the decision variables and the objective function $f : S \to \mathbb{R}$ (also called fitness function), the goal is to determine an element $s^* \in S$ that satisfies $\Omega$ and for which $f(s^*) \leq f(s)$, $\forall s \in S$, holds. $f(s^*)$ is called a global optimum.
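As a concrete instance of this definition, the following Python sketch is our own illustration (the sphere function and the box bounds are not taken from the paper); it defines a fitness function $f$, a search space $S$, and the corresponding global optimum $s^*$:

```python
# Illustrative example (not from the paper): the sphere function on
# S = [-5, 5]^n, whose global optimum is s* = (0, ..., 0) with f(s*) = 0.

def sphere(s):
    """Objective (fitness) function f: S -> R."""
    return sum(x * x for x in s)

n = 3                           # number of decision variables X_1, ..., X_n
bounds = [(-5.0, 5.0)] * n      # box constraints Omega defining the search space S

s_star = [0.0] * n              # global optimum: f(s*) <= f(s) for all s in S
assert sphere(s_star) <= sphere([1.0, -2.0, 0.5])
```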
Given a fitness function $f$ and a search space $S$, the standard PSO initializes a set of particles, the swarm. In a $D$-dimensional search space $S$ each particle $P_i$ consists of three $D$-dimensional vectors: its position $\vec{x}_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, the best position the particle visited in the past $\vec{p}_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$ (particle best), and a velocity $\vec{v}_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. Usually the position is initialized uniformly distributed over $S$, and the velocity is also uniformly distributed, depending on the size of $S$. The movement of each particle takes place in discrete steps using an update function. In order to calculate the update of a particle we need a supplementary vector $\vec{g} = (g_1, g_2, \ldots, g_D)$ (global best), the best position of a particle in its neighborhood. The update function, here the so-called inertia weight variant, consists of two parts. The new velocity of a particle $P_i$ is calculated for each dimension $d = 1, 2, \ldots, D$:
$$v^{\text{new}}_{id} = w \cdot v_{id} + c_1 \epsilon_{1d} (p_{id} - x_{id}) + c_2 \epsilon_{2d} (g_d - x_{id}) \qquad (1)$$
then the position is updated: $x^{\text{new}}_{id} = x_{id} + v^{\text{new}}_{id}$. The new velocity depends on the global best ($g_d$), the particle best ($p_{id}$), and the old velocity ($v_{id}$), which is weighted by the inertia weight $w$.
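To make the update concrete, the following Python sketch is our own illustration, not the implementation used in the paper. It runs the inertia-weight PSO with a global-best neighborhood, models $\epsilon_{1d}$ and $\epsilon_{2d}$ as uniform random draws from $[0, 1]$, and uses $w = 0.72$, $c_1 = c_2 = 1.49$, which are common defaults rather than values prescribed here:

```python
import random

def pso(f, bounds, num_particles=20, steps=100, w=0.72, c1=1.49, c2=1.49):
    """Sketch of inertia-weight PSO with a global-best neighborhood.

    f: fitness function to minimize; bounds: per-dimension (lo, hi)
    intervals defining the search space S.
    """
    D = len(bounds)
    # Positions initialized uniformly over S; velocities uniformly,
    # scaled by the size of S in each dimension.
    x = [[random.uniform(lo, hi) for lo, hi in bounds]
         for _ in range(num_particles)]
    v = [[random.uniform(-(hi - lo), hi - lo) for lo, hi in bounds]
         for _ in range(num_particles)]
    p = [xi[:] for xi in x]           # particle best positions
    p_val = [f(xi) for xi in x]       # particle best fitness values
    g_val = min(p_val)                # global best fitness
    g = p[p_val.index(g_val)][:]      # global best position

    for _ in range(steps):
        for i in range(num_particles):
            for d in range(D):
                e1, e2 = random.random(), random.random()
                # Equation (1): inertia, cognitive, and social components.
                v[i][d] = (w * v[i][d]
                           + c1 * e1 * (p[i][d] - x[i][d])
                           + c2 * e2 * (g[d] - x[i][d]))
                x[i][d] += v[i][d]    # position update: x_new = x + v_new
            fx = f(x[i])
            if fx < p_val[i]:         # improve particle best
                p_val[i], p[i] = fx, x[i][:]
                if fx < g_val:        # improve global best
                    g_val, g = fx, x[i][:]
    return g, g_val
```

Called with the sphere function defined earlier, `pso(sphere, bounds)` returns an approximation of the global optimum $s^*$ together with its fitness.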