young and is amenable to an engineering mindset. One need not invent a new branch
of mathematics in order to make progress. At the end of the chapter we will see a few
directions that might, for the ambitious reader, be worthy of further pursuit.
Another caveat is that I make no claim as to Differential Evolution being the best
method for the problems we will discuss. Nor do I claim that the approaches to be seen
are the only, let alone best, ways to use it on these problems (if that were the case, this
would be a very short chapter indeed). What I do claim is that Differential Evolution is
quite a versatile tool, one that can be adapted to get reasonable results on a wide range
of combinatorial optimization problems. Even more, this can be done using but a small
amount of code. It is my hope to convey the utility of some of the methods I have used
with success, and to give ideas of ways in which they might be further enhanced.
As I will illustrate the setup and solving attempts using Mathematica [13], I need
to describe in brief how Differential Evolution is built into and accessed within that
program. Now recall the general setup for this method. We have some number of vectors,
or chromosomes, of continuous-valued genes. They mate according to a crossover
probability, mutate by differences of distinct other pairs in the pool, and compete with
a parent chromosome to see who moves on to the next generation. All of this is as
described by Price and Storn in their Dr. Dobb's Journal article from 1997 [10]. In
particular, the crossover and mutation parameters are as described therein. In Mathematica
the relevant options go by the names CrossProbability, ScalingFactor,
and SearchPoints. Each variable corresponds to a gene on every chromosome.
Using the terminology of the article, CrossProbability is the CR parameter,
SearchPoints corresponds to NP (the size of the population, that is, the number of
chromosome vectors), and ScalingFactor is F. Default values for these parameters are
roughly as recommended in that article.
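To make the roles of these parameters concrete, here is a minimal sketch, in Mathematica, of one generation of the classic rand/1/bin scheme described in [10]. The function name deStep and the arguments f (the objective), pop (the list of chromosome vectors), cr, and fscale are mine, chosen only for this illustration; the code also omits the usual guarantee that at least one gene is taken from the mutant, and it is not the implementation used inside NMinimize.

deStep[f_, pop_, cr_, fscale_] := Module[{n = Length[pop]},
  Table[
   Module[{r, mutant, trial},
    (* pick three distinct population members other than the target i *)
    r = RandomSample[Delete[Range[n], i], 3];
    (* mutation: base vector plus a scaled difference of two others *)
    mutant = pop[[r[[1]]]] + fscale (pop[[r[[2]]]] - pop[[r[[3]]]]);
    (* binomial crossover: each gene comes from the mutant with probability cr *)
    trial = MapThread[If[RandomReal[] < cr, #1, #2] &, {mutant, pop[[i]]}];
    (* selection: the trial vector replaces its parent only if it is no worse *)
    If[f[trial] <= f[pop[[i]]], trial, pop[[i]]]],
   {i, n}]]

Here cr plays the role of CrossProbability, fscale that of ScalingFactor, and Length[pop] that of SearchPoints.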
The function that invokes these is called NMinimize. It takes a Method option that
can be set to DifferentialEvolution. It also takes a MaxIterations option
that, for this method, corresponds to the number of generations. Do not be concerned if
this terminology seems confusing. Examples to be shown presently will make it all clear.
One explicitly invokes Differential Evolution in Mathematica as follows.
NMinimize[objective, constraints, variables,
  Method → {"DifferentialEvolution", methodoptions}, otheropts]
Here methodoptions might include setting to nondefault values any or all of the
options indicated below. We will show usage of some of them as we present examples.
Further details about these options may be found in the program documentation, which
is available online; see, for example,
http://reference.wolfram.com/mathematica/ref/NMinimize.html
Here are the options one can use to control behavior of NMinimize. Note that
throughout this chapter, code input is in bold face, and output, just below the input, is
not.
Options[NMinimize`DifferentialEvolution]

{CrossProbability → 1/2, InitialPoints → Automatic,
 PenaltyFunction → Automatic, PostProcess → Automatic, RandomSeed → 0,
 ScalingFactor → 3/5, SearchPoints → Automatic, Tolerance → 0.001}
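As a quick, purely illustrative example of supplying nondefault values (the objective, constraint, and settings below are hypothetical, chosen only to show the syntax), one might write:

NMinimize[{x^2 + y^2, x + y >= 1}, {x, y},
 Method → {"DifferentialEvolution", "CrossProbability" → 0.9,
   "SearchPoints" → 50, "ScalingFactor" → 0.7},
 MaxIterations → 200]

This runs with a population of 50 chromosome vectors, crossover probability 0.9, and scaling factor 0.7, for at most 200 generations. The true minimum here is 1/2, attained at x = y = 1/2, and the result should be a close numerical approximation to that.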