study, we change the inertia weight at every generation via the following
formula:
w = w_0 + r(w_1 − w_0),    (4.10)
where w_0 ∈ [0, 1] and w_1 > w_0 are positive constants, and r is a random number
uniformly distributed in [0, 1]. The suggested range for w_0 is [0, 0.5], which
makes the weight w vary randomly between w_0 and w_1. In this way, a
uniformly distributed random weight is generated at every iteration. The idea
here is to use dynamic weights instead of fixed weights to obtain the Pareto
solutions.
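The update of Eq. (4.10) can be sketched as follows; the function name and the default bounds (0.3 and 0.9) are illustrative choices, not values given in the text:

```python
import random

def random_inertia_weight(w0: float = 0.3, w1: float = 0.9) -> float:
    """Eq. (4.10): w = w0 + r*(w1 - w0), with r ~ U[0, 1],
    so the weight w is uniformly distributed on [w0, w1]."""
    r = random.random()
    return w0 + r * (w1 - w0)
```

Calling this once per generation yields a fresh random weight, rather than a weight that decays along a fixed schedule.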
The third term v m ( t ) in Equation (4.8) is a mutation operator, which is
set proportionally to the maximum allowable velocity V max . If the historic
optimal position, p i , of the particle swarm is not improving with the
increasing number of generations, this may indicate that the whole swarm
is becoming trapped in a local optimum from which it becomes impossible
to escape. Because the global best individual attracts all particles of the
swarm, it is possible to lead the swarm away from a current location by
mutating a single individual. To this end, a particle is selected randomly,
and a random perturbation (the mutation step size) is added, with a given
mutation probability, to a randomly selected component of that particle's
velocity vector. The mutation term is produced as follows:
v_m(t) = sign(2 · rand − 1) · β · V_max,    (4.11)
where β ∈ [0, 1] is a constant, rand is a random number uniformly
distributed in [0, 1], and the sign function, defined as sign(x) = 1 if
x ≥ 0 and sign(x) = −1 if x < 0, decides the particle's
moving direction. It is noted that the mutation rate in this algorithm is not
decreased during a run. On the contrary, the mutation effect is enhanced
at the late stages of search. This special mutation operator can encourage
particles to move away from a local optimum and maintain the diversity of
the population.
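The mutation step of Eqs. (4.11) can be sketched as below; the function names and the mutation-probability parameter `p_m` are illustrative assumptions, since the text does not fix them:

```python
import random

def sign(x: float) -> float:
    # sign(x) = 1 if x >= 0, sign(x) = -1 if x < 0
    return 1.0 if x >= 0 else -1.0

def mutate_velocity(velocity: list, beta: float, v_max: float, p_m: float) -> None:
    """With probability p_m, add the mutation term of Eq. (4.11),
    v_m = sign(2*rand - 1) * beta * V_max, to one randomly chosen
    component of a particle's velocity vector (modified in place)."""
    if random.random() < p_m:
        d = random.randrange(len(velocity))
        velocity[d] += sign(2.0 * random.random() - 1.0) * beta * v_max
```

Note that the perturbation magnitude is always β·V_max; only its direction is random, matching the role of the sign function described above.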
In order to evaluate the performance of individual particles, an
appropriate evaluation function should be defined to select the local best and
global best positions. We simply use a weighted aggregation approach to
construct the evaluation function F for multi-objective optimization:
F = Σ_{i=1}^{m} w_i f_i,    Σ_{i=1}^{m} w_i = 1,    (4.12)
where m is the number of objectives and i = 1, 2, ..., m.
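A minimal sketch of the weighted aggregation of Eq. (4.12); the function name and the equal weights in the usage comment are assumptions for illustration:

```python
def evaluate(objectives, weights):
    """Eq. (4.12): F = sum_i w_i * f_i, with the weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * f for w, f in zip(weights, objectives))

# Example: two objectives aggregated with equal weights.
# evaluate([2.0, 4.0], [0.5, 0.5]) gives 3.0
```

Drawing the weights w_i randomly (as with the inertia weight above) rather than fixing them lets repeated runs trace out different Pareto solutions.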