inertia weight w. The parameters c_1 and c_2 control how strongly a particle is attracted by the global best and by its personal best, respectively. The random vectors r_1 and r_2 are uniformly distributed over [0, 1)^D and produce the random movements of the swarm.
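For concreteness, a minimal sketch of the resulting update rule in Python/NumPy; the default values for w, c_1, and c_2 below are illustrative, not the settings discussed in this paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # x, v, pbest: (n_particles, D) arrays; gbest: (D,) array.
    # w, c1, c2 are illustrative defaults, not the paper's settings.
    n, D = x.shape
    r1 = np.random.uniform(size=(n, D))  # random vectors over [0, 1)^D
    r2 = np.random.uniform(size=(n, D))
    # Velocity update: inertia term plus attraction to the personal
    # and global best positions.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```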
2.2 Algorithm Configuration Problem
The general problem of configuring algorithms (the algorithm configuration problem) is defined by Hutter et al. [7] as finding the best tuple θ out of all possible configurations Θ (θ ∈ Θ), where θ is a concrete assignment of values to the parameters of an algorithm. Applied to metaheuristics, the configuration of the algorithm parameters for a specific problem influences the behavior of the optimization process: different parameter settings exhibit different performances at solving a problem. Configuring metaheuristics is a superordinate problem and has been analyzed for different kinds of metaheuristics. For PSO, the convergence of the optimization under different parameter settings and on different functions is analyzed in [14], [12], and [1]. However, these approaches focus only on the convergence of the PSO, not on function characteristics and the relationship between the parameter configuration and the function landscape.
Different approaches to solving this algorithm configuration problem for metaheuristics have been introduced. One approach is to find sets of adequate parameters that perform well on most types of objective functions. These "standard parameters" are evaluated on a preset of functions in order to find a parameter set that leads to globally good behavior of the metaheuristic. For PSO, standard parameter sets are presented by [4] and [12]. Other approaches do not fix a preset of parameters but change the parameter values during runtime to achieve better performance [10].
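As an illustration of such runtime adaptation, one common scheme is a linearly decreasing inertia weight; this sketch does not claim to reproduce the exact scheme of [10], and the endpoint values are illustrative:

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    # Linearly decrease the inertia weight from w_start to w_end over
    # the run; 0.9 and 0.4 are illustrative values, not taken from [10].
    return w_start - (w_start - w_end) * (t / t_max)
```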
Another approach is introduced by Leyton-Brown et al. They create features that describe the underlying problem [9] and generate a model that predicts suitable parameters based on the resulting classification. They introduce several features, grouped into nine groups, including, among others, problem size statistics (e.g., the number of clauses and variables) and measures based on different graphical representations. Their analysis is restricted to discrete search spaces, because on continuous search spaces it is not possible to set the adequate discrete parameter values their approach requires.
Our problem is to configure an algorithm that works on continuous search spaces and therefore has infinitely many possible parameter sets. To address this challenge we, similar to Leyton-Brown et al., train a classifier on features of the fitness function landscape computed by observing swarm behavior. These features are combined with the best parameter set found for the function to form a training instance (see figure 1).
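A hedged sketch of this training pipeline in Python with scikit-learn; the feature extractor, the example functions, and the parameter values below are placeholders for illustration, not the features or settings used in the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature extractor: sample the objective and summarize
# the landscape. These two statistics stand in for the paper's actual
# swarm-behavior features.
def landscape_features(f, D=2, n=50):
    x = np.random.uniform(-5, 5, size=(n, D))
    fitness = np.apply_along_axis(f, 1, x)
    return [fitness.mean(), fitness.std()]

# Toy training set: each function is paired with the index of the
# parameter set that performed best on it (assumed known from tuning).
sphere = lambda x: np.sum(x ** 2)
rastrigin = lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)
X = [landscape_features(f) for f in (sphere, rastrigin)]
y = [0, 1]

clf = DecisionTreeClassifier().fit(X, y)

# Before a new optimization run: compute the features first, then let
# the decision tree pick the parameter set (values are illustrative).
parameter_sets = [{"w": 0.7, "c1": 1.5, "c2": 1.5},
                  {"w": 0.4, "c1": 2.0, "c2": 2.0}]
chosen = parameter_sets[clf.predict([landscape_features(sphere)])[0]]
```

In the paper the features are derived from observed swarm behavior rather than from random sampling; the sketch only shows the shape of the pipeline.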
With a trained classifier at hand, we compute the features of the objective function before the optimization process starts. The classifier, in our case a decision tree, classifies the function and selects the specific parameter set that is expected to perform better in the optimization process than the standard parameters. In our first experiments, which we regard as a proof of concept, we choose only a few functions that do not represent any specific types of functions. We want to show that our technique is able to identify functions based on the features provided by the swarm behavior and thereby select the specific parameter configuration. In order to learn the classifier
 