is exchanged, producing a child. Each variable of the child has the same probability of coming from either parent.
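As a minimal sketch of this uniform recombination step (the function name and the list-based encoding of individuals are illustrative assumptions, not taken from modeFRONTIER or Multicube Explorer):

```python
import random

def uniform_recombination(parent_a, parent_b):
    """Build a child: each variable is copied from one of the
    two parents with equal probability (0.5 each)."""
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]
```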
The standard implementation of ES has been available in modeFRONTIER since its early releases, and it has also been implemented in Multicube Explorer. The enhancements described here were developed for the MULTICUBE project and implemented in modeFRONTIER.
3.3.2.3 Enhanced-MOPSO
Particle Swarm Optimization (PSO) is an optimization methodology that mimics the movements of a flock of birds searching for food [7]. PSO is based on a population of particles moving through a hyper-dimensional search space. Each particle possesses a position and a direction; both are updated to emulate a well-known social-psychological phenomenon: mimicking the success of other individuals in the population (also called the swarm).
More formally, the position $x_i$ of a single particle $i$ is updated by means of a velocity vector $\delta_i$ according to the following equation:

$$x_i(t) = x_i(t-1) + \delta_i(t) \qquad (3.2)$$
while the direction vector is updated with the following equation:

$$\delta_i(t) = W \delta_i(t-1) + C_1 r_1 \big(x_{pbest_i} - x_i(t-1)\big) + C_2 r_2 \big(x_{gbest} - x_i(t-1)\big)$$
where $W$ is called the inertia weight, $C_1$ is the cognitive learning factor, $C_2$ is the social learning factor, $r_1$ and $r_2$ are random numbers in the range $[0, 1]$, $x_{pbest_i}$ is the best position found by particle $i$ with respect to the minimization problem, and $x_{gbest}$ is the global best found up to time $t$. This formulation makes the particles 'follow' the leader's position $x_{gbest}$ while also attracting each particle towards its own personal best solution $x_{pbest_i}$.
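The two update rules can be sketched as follows for a single particle; this is a minimal illustration only, and the default values for $W$, $C_1$ and $C_2$ are common choices from the PSO literature, not the ones used by Enhanced-MOPSO:

```python
import random

def pso_step(x, v, pbest, gbest, W=0.7, C1=1.5, C2=1.5):
    """One update of a single particle following Eq. (3.2) and the
    direction-update equation above. x, v, pbest, gbest are lists of
    floats: current position, current direction, personal best, and
    global best, respectively."""
    r1, r2 = random.random(), random.random()  # r1, r2 in [0, 1]
    # delta_i(t) = W*delta_i(t-1) + C1*r1*(pbest - x) + C2*r2*(gbest - x)
    new_v = [W * vi + C1 * r1 * (pb - xi) + C2 * r2 * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    # x_i(t) = x_i(t-1) + delta_i(t)
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```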
Dealing with Multi-objective problems. Several approaches have been proposed for extending the PSO formulation to the multi-objective domain [2, 13]. The Enhanced-MOPSO technique is based on an "aggregating" approach in which the swarm is equally partitioned into $n$ sub-swarms, each of which uses a different cost function: the product of the objectives raised to a set of randomly chosen exponents, as sketched below.
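A minimal sketch of how such a per-sub-swarm cost function could be built follows. The uniform distribution for the exponents and the positivity of the objective values are assumptions (the text only states that the exponents are chosen randomly):

```python
import math
import random

def make_subswarm_cost(objectives, rng=None):
    """Return the aggregated cost of one sub-swarm: the product of the
    objectives, each raised to a randomly chosen exponent p_{i,j}.
    Objectives are assumed to return positive values so that
    non-integer powers are well defined."""
    rng = rng or random.Random()
    exponents = [rng.random() for _ in objectives]  # the p_{i,j}
    def cost(x):
        return math.prod(f(x) ** p for f, p in zip(objectives, exponents))
    return cost
```

Each sub-swarm then minimizes its own `cost(x)` with the single-objective PSO update sketched above.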
In other words, given the original set of objectives $\{f_1, \ldots, f_m\}$, each sub-swarm $i$ solves the following problem:

$$\min_{x} \prod_{j=1}^{m} f_j^{p_{i,j}}(x) \qquad (3.3)$$
where the $p_{i,j}$ are randomly chosen exponents. It can be shown that solutions to Problem (3.3) lie on the Pareto surface of the original problem. This approach