of a multiobjective optimization algorithm, and subsequent work of Clark et al.
[Clark et al. 2005] proved analogous results for the B-cell algorithm of Kelsey
and Timmis [Kelsey and Timmis 2003]. In each of these works, the respective
algorithms were described exactly in terms of Markov chains, and the theory of
the latter implied convergence to the optima with probability one, in the limit
when the number of iterations goes to infinity. There is a large amount of liter-
ature concerning the use of genetic algorithms (GAs) as function optimizers (see
e.g. [Dasgupta and McGregor 1992, De Jong 1992]), and in this setting there is
already a precedent for applying Markov chain methods [Vose 1995].
Although the convergence of optimization algorithms like the BCA is a nice
theoretical property, it is not immediately useful from a practical point of view:
one cannot wait for infinitely many iterations! The Markov chain theory applied
to the BCA (see [Clark et al. 2005]) describes the algorithm in terms of a matrix
of probabilities, known as the transition matrix. The transition matrix has 1
as its eigenvalue of largest modulus. In order to get a precise estimate of the
average rate of convergence of the algorithm, in terms of the so-called mixing
time [Dyer et al. 2006, Hunter 2003], one would need to estimate the eigenvalue
of the transition matrix which is second largest in size. For some problems, such
as certain sampling algorithms considered by Jerrum [Jerrum 2005], the mixing
time can be estimated, but for the BCA the transition matrix (and hence its
second largest eigenvalue) is highly problem-dependent, and so it is not clear
that a good universal estimate can be obtained.
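To make the role of the second-largest eigenvalue concrete, the following Python sketch computes the spectral gap of a small, hand-picked transition matrix and translates it into a rough mixing-time scale. The matrix entries are purely illustrative assumptions; they are not derived from the BCA, whose transition matrix is problem-dependent and generally far too large to write down explicitly.

```python
import numpy as np

# Illustrative 3-state row-stochastic transition matrix (rows sum to 1).
# These numbers are assumptions for demonstration only; they are not the
# BCA's transition matrix, which depends on the problem being optimized.
P = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.02, 0.08, 0.90],
])

# For a stochastic matrix the eigenvalue of largest modulus is always 1.
moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
second = moduli[1]  # second-largest modulus, strictly below 1 for this chain

# Rule of thumb: the chain mixes on a time scale of order 1 / (1 - |lambda_2|),
# so the closer |lambda_2| is to 1, the slower the convergence.
print(f"second-largest eigenvalue modulus: {second:.4f}")
print(f"rough mixing-time scale: {1.0 / (1.0 - second):.1f} steps")
```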
The aim of this paper is to get some empirical results on the performance of
the BCA applied to some specific problems, in order to see (on average) what
proportion of trials converge to a solution to these problems. In the original
paper by Kelsey and Timmis [Kelsey and Timmis 2003], the BCA was shown to
perform very well compared with GAs and hybrid GAs when these algorithms
were applied to a standard set of benchmark function optimization problems,
including the problem of finding a global minimum of the “Camelback” function
f(x, y) = (4 − 21x²/10 + x⁴/3) x² + xy + (−4 + 4y²) y².    (1)
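As a quick, self-contained check on Equation (1), the Python snippet below evaluates the Camelback function on a coarse grid and reports the smallest value found. The search box and grid resolution are assumptions chosen for illustration; they are not the experimental settings of [Kelsey and Timmis 2003].

```python
import numpy as np

def camelback(x, y):
    # Six-hump Camelback function of Equation (1).
    return (4 - 21 * x**2 / 10 + x**4 / 3) * x**2 + x * y + (-4 + 4 * y**2) * y**2

# Coarse grid search over an illustrative box (bounds are assumptions).
xs = np.linspace(-3.0, 3.0, 601)
ys = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(xs, ys)
Z = camelback(X, Y)
i, j = np.unravel_index(np.argmin(Z), Z.shape)
print(f"smallest grid value {Z[i, j]:.4f} at (x, y) = ({X[i, j]:.3f}, {Y[i, j]:.3f})")
```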
Kelsey and Timmis found that the BCA outperformed a certain hybrid GA in
the sense that it performed fewer function evaluations to get the same optimum
solutions. All the standard problems considered in [Kelsey and Timmis 2003]
were smooth, real-valued functions of this type (in one or several variables). For
smooth function landscapes like these, various hill-climbing algorithms (even
deterministic ones) and GAs are known to be quite successful at obtaining solu-
tions. An important difference between the BCA and GAs is that the BCA does
not use crossover. Here we propose a more challenging set of benchmark prob-
lems, namely the solution of Diophantine equations, which are likely to provide
a fertile testing ground for new algorithms.
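To indicate why Diophantine equations lend themselves to this kind of benchmark, one natural encoding (used here purely as an illustration; the encoding actually adopted is described in the next section) is to minimize the absolute residual of the equation over integer candidates, so that a fitness of zero certifies an exact solution. The Pythagorean equation x² + y² = z² and the search bounds below are hypothetical choices for this sketch.

```python
import random

def residual(x, y, z):
    # Absolute residual of x^2 + y^2 = z^2; zero means an exact solution.
    return abs(x**2 + y**2 - z**2)

# Naive random search over a small integer box (bounds are assumptions).
random.seed(0)
best_score, best_cand = None, None
for _ in range(100_000):
    cand = tuple(random.randint(1, 50) for _ in range(3))
    score = residual(*cand)
    if best_score is None or score < best_score:
        best_score, best_cand = score, cand
    if score == 0:
        break

print("best residual:", best_score, "at (x, y, z) =", best_cand)
```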
In the next section we briefly describe the B-cell algorithm (BCA), before
explaining how to use it to solve Diophantine problems in the following section.
After presenting our experimental results, we conclude with various suggestions
for ways to modify and improve the BCA.
 