Table 3. Confusion matrix of the cross validation

class            | classified as                                                   | Accuracy (%) | Precision (%)
Ackley           | 299 Ackley, 1 other                                             | 99.7         | 99.7
Griewank         | 116 Griewank, 183 Sphere, 1 other                               | 38.7         | 48.1
Gen.Schwe./Rast. | 583 Gen.Schwe./Rast., 1 Ackley, 1 Griewank, 13 Rosen., 2 Schwe. | 97.2         | 97.2
Rosenbrock       | 283 Rosenbrock, 15 Gen.Schwe./Rast., 1 Schwefel, 1 Sphere       | 94.3         | 95.3
Schwefel         | 299 Schwefel, 1 other                                           | 99.7         | 99.0
Sphere           | 176 Sphere, 123 Griewank, 1 other                               | 58.7         | 48.9
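The Accuracy and Precision columns of Table 3 follow directly from the raw counts: accuracy is the share of a row (true class) that was classified correctly, precision the share of a column (predicted class) that truly belongs to it. The following minimal sketch shows this calculation; the small example matrix is hypothetical and not the data of Table 3.

def per_class_accuracy(matrix):
    """Share of each row (true class) that was classified correctly."""
    return [matrix[i][i] / sum(row) for i, row in enumerate(matrix)]

def per_class_precision(matrix):
    """Share of each column (predicted class) that truly belongs to it."""
    n = len(matrix)
    return [matrix[i][i] / sum(matrix[j][i] for j in range(n)) for i in range(n)]

if __name__ == "__main__":
    # Hypothetical 3-class confusion matrix (rows = true class, columns = predicted)
    m = [[299, 1, 0],
         [10, 280, 10],
         [0, 5, 295]]
    print([f"{a:.1%}" for a in per_class_accuracy(m)])
    print([f"{p:.1%}" for p in per_class_precision(m)])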
4.4 Computing Effort
Computing the features is based on evaluating the fitness at specific positions in the search space. We restrict this computation to 3,000 fitness evaluations, which corresponds to one percent of the whole optimization budget in our setting. To stay comparable to the benchmark of Bratton and Kennedy, we run the optimization with the specific parameter configuration for only 9,900 iterations, which leads to 297,000 fitness computations; together with the 3,000 evaluations spent on the features, the total budget matches the benchmark. We compare the results of the optimization with 10,000 iterations to the optimization with 9,900 iterations and obtain nearly the same results, as shown in Table 2 (specific vs. specific). The comparison shows only minor differences, on the order of one percent.
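A small sketch of this budget bookkeeping is given below. The swarm size of 30 is an assumption here, inferred from 297,000 fitness computations divided by 9,900 iterations; it is not stated explicitly in this section.

# Evaluation budget of the feature-based run vs. the benchmark run.
SWARM_SIZE = 30                # assumed: 297_000 / 9_900
FEATURE_EVALUATIONS = 3_000    # fitness evaluations spent on feature computation
BENCHMARK_ITERATIONS = 10_000  # standard-configuration run (benchmark setup)
REDUCED_ITERATIONS = 9_900     # run with the selected ("specific") configuration

benchmark_budget = BENCHMARK_ITERATIONS * SWARM_SIZE   # 300,000 evaluations
reduced_budget = REDUCED_ITERATIONS * SWARM_SIZE       # 297,000 evaluations
total_specific = FEATURE_EVALUATIONS + reduced_budget  # 300,000 evaluations

assert total_specific == benchmark_budget  # feature cost fits into the same budget
print(f"feature share: {FEATURE_EVALUATIONS / benchmark_budget:.1%}")  # -> 1.0%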
5 Discussion and Future Work
In this paper we describe an approach to training a classifier that uses function features in order to select a better parameter configuration for Particle Swarm Optimization. We show how we compute the features for specific functions and describe how we obtain the classes of parameter sets. We integrate the trained classifier into the optimization and evaluate the selected parameter configuration against a Particle Swarm Optimization with the standard configuration. Our experiments demonstrate that we are able to classify different functions on the basis of a few fitness evaluations and obtain a parameter set that leads the PSO to significantly better optimization performance than the standard parameter set. Statistical tests (t-tests with α = 0.05) indicate better results for the functions whose global optimum was not found in either setting.
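The selection step summarized above can be sketched roughly as follows. All names, the feature statistics, the classifier rule, and the parameter values are hypothetical placeholders, not the concrete features, classifier, or parameter sets used in this paper.

import random

# Hypothetical mapping from predicted function class to a tuned PSO parameter
# set (inertia weight w, acceleration coefficients c1, c2); values are illustrative.
PARAMETER_SETS = {
    "ackley-like": {"w": 0.6, "c1": 1.7, "c2": 1.7},
    "sphere-like": {"w": 0.7, "c1": 1.5, "c2": 1.5},
}
# Constriction-based standard configuration (Bratton & Kennedy style) as fallback.
STANDARD_SET = {"w": 0.72984, "c1": 2.05 * 0.72984, "c2": 2.05 * 0.72984}

def compute_features(fitness, dim, budget=3_000):
    """Spend a small evaluation budget on random samples and derive simple
    summary statistics of the fitness values (a stand-in for the paper's features)."""
    samples = [fitness([random.uniform(-5, 5) for _ in range(dim)]) for _ in range(budget)]
    mean = sum(samples) / budget
    var = sum((s - mean) ** 2 for s in samples) / budget
    return {"mean": mean, "variance": var}

def classify(features):
    """Placeholder for the trained classifier; returns a function class label."""
    return "sphere-like" if features["variance"] < 1e3 else "ackley-like"

def select_parameters(fitness, dim):
    """Classify the problem from a few evaluations and pick a parameter set."""
    label = classify(compute_features(fitness, dim))
    return PARAMETER_SETS.get(label, STANDARD_SET)

# Usage: choose a parameter set for the sphere function before starting the PSO run.
params = select_parameters(lambda x: sum(xi * xi for xi in x), dim=10)
print(params)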
The next steps are to include all possible configuration parameters of the PSO, for example the swarm size or the neighborhood topology. These parameters are not covered by our approach because we based this work on the benchmark approach of [3]. The behavior of the swarm changes significantly if another neighborhood topology is chosen. Increasing the size of the swarm is another task we will focus on in the future, since depending on the swarm size, different parameter sets lead to the best optimization process. One idea is to create an abstract class of parameter sets that includes different sets for predefined swarm sizes. In order to obtain more information about the performance of our approach, it would be interesting to allocate a fixed percentage of the whole evaluation budget to the feature computation (e.g., 1%) and to examine the quality of the results if not all features, or only features of minor quality, were computed.
 