\[
\begin{aligned}
\text{Minimize} \quad & (f_1(x), f_2(x), \ldots, f_M(x)), \\
\text{subject to} \quad & g_j(x) \geq 0, && j = 1, 2, \ldots, J, \\
& h_k(x) = 0, && k = 1, 2, \ldots, K, \\
& x_i^{(L)} \leq x_i \leq x_i^{(U)}, && i = 1, 2, \ldots, n,
\end{aligned} \tag{1}
\]
where n is the number of variables (the dimension of the problem), J is the number of inequality constraints, K is the number of equality constraints, \( x_i^{(L)} \) is the lower bound of variable i, and \( x_i^{(U)} \) is its upper bound. The only mandatory constraints for the algorithm are the bounds of the search space (\( x_i^{(L)} \) and \( x_i^{(U)} \)).
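As a concrete illustration, the following Python sketch encodes one hypothetical instance of Formulation 1 with n = 2, M = 2, J = 1, and K = 0 (the objectives, constraint, and bounds below are ours, chosen only for illustration):

import numpy as np

# Hypothetical instance of Formulation (1): two objectives, one
# inequality constraint, no equality constraints, two variables.
def f1(x):
    return x[0]**2 + x[1]**2            # first objective

def f2(x):
    return (x[0] - 1.0)**2 + x[1]**2    # second objective

def g1(x):
    return x[0] + x[1]                  # inequality constraint, g1(x) >= 0

lower = np.array([-2.0, -2.0])          # lower bounds x_i^(L)
upper = np.array([ 2.0,  2.0])          # upper bounds x_i^(U)

def is_feasible(x):
    # A point is feasible if it satisfies the constraint and the bounds.
    return g1(x) >= 0 and np.all(lower <= x) and np.all(x <= upper)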
For the problem given in Formulation 1, n-variable solution vectors that satisfy all constraints are called feasible solutions. These solutions will be optimal if they individually satisfy the Karush-Kuhn-Tucker optimality conditions, which involve the gradients of the objective and constraint functions [1].
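For reference, a common statement of these conditions for the single-objective version of Formulation 1 (minimizing a single f subject to \( g_j(x) \geq 0 \) and \( h_k(x) = 0 \), assuming all functions are differentiable; sign conventions vary between texts) is:

\[
\begin{aligned}
&\nabla f(x^*) - \sum_{j=1}^{J} \mu_j \nabla g_j(x^*) + \sum_{k=1}^{K} \lambda_k \nabla h_k(x^*) = 0, \\
&g_j(x^*) \geq 0, \quad \mu_j \geq 0, \quad \mu_j\, g_j(x^*) = 0, \quad j = 1, \ldots, J, \\
&h_k(x^*) = 0, \quad k = 1, \ldots, K.
\end{aligned}
\]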
When we have a single objective f, the optimal solutions correspond to the points with the smallest values of f over the whole search space (in a minimization problem). With several objective functions, however, the notion of an "optimal" solution changes, because the aim is now to find good trade-offs among the objective functions. In this case, the most commonly adopted notion of optimality is the one associated with the Pareto front. A solution \( x \) belongs to the Pareto front if there is no other feasible solution \( x' \) capable of reducing one objective without simultaneously increasing at least one of the others.
Other important concepts that will be frequently used in this work are Pareto dominance and Pareto optimal set. For the Pareto dominance, a vector \( u = (u_1, \ldots, u_k) \) is said to dominate a vector \( v = (v_1, \ldots, v_k) \) (denoted by \( u \preceq v \)) if and only if \( \forall i \in \{1, \ldots, k\} : u_i \leq v_i \) and \( \exists i \in \{1, \ldots, k\} : u_i < v_i \).

The Pareto optimal set \( \mathcal{P} \) for a multi-objective optimization problem \( f(x) \) is given by \( \mathcal{P} := \{ x \in \Omega \mid \neg\exists\, x' \in \Omega : f(x') \preceq f(x) \} \), where \( \Omega \) is the domain of \( x \).

Therefore, the Pareto front \( \mathcal{F} \) for a given \( f(x) \) and \( \mathcal{P} \) is defined as \( \mathcal{F} := \{ u = f(x) = (f_1(x), \ldots, f_M(x)) \mid x \in \mathcal{P} \} \).
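A minimal Python sketch of these two definitions for a minimization problem (the helper names dominates and non_dominated are ours, not from the paper):

import numpy as np

def dominates(u, v):
    # u dominates v iff u is no worse in every objective
    # and strictly better in at least one (minimization).
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def non_dominated(points):
    # Keep only the objective vectors not dominated by any other;
    # applied to f(x) over the feasible set, this yields the Pareto front.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

For example, among the objective vectors (1, 3), (2, 2), and (2, 4), only (2, 4) is dominated; (1, 3) and (2, 2) are mutually non-dominated trade-offs.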
4 The omni-aiNet Algorithm
The omni-aiNet algorithm works with a real-coded population of antibodies that correspond to the candidate solutions of the optimization problem. The concept of a population of antigens is not explicitly used, since only the affinity measures (the values of the objective functions being optimized) are available. The omni-aiNet basically follows the same main steps of the opt-aiNet algorithm [5], as can be seen in Figure 1. However, the essential aspects of each step are different. Additionally, to increase the convergence capability of the algorithm, a variation mechanism known as Gene Duplication was added, which will be described in Section 4.3.
The algorithm starts by randomly generating an initial population of size \( N_i \) (\( N_i \) is defined by the user). Each individual is generated within the bounds of the variables. After the creation of the initial population, the algorithm enters a loop.
 