4. Call NMinimize , passing it spfun[vars] as the objective function.
5. Return the minimum value found, and the two complementary subsets of the
original integer set {1, ..., n} that give rise to this value.
getHalfSet[n_, opts___Rule] := Module[
  {vars, xx, ranges, nmin, vals},
  vars = Array[xx, n];
  ranges = Map[{#, 0, 1} &, vars];
  {nmin, vals} = NMinimize[spfun[vars], ranges, opts];
  {nmin, Map[Sort, splitRange[vars /. vals]]}
]
As in previous examples, we explicitly set the method so that we can more readily
pass it nondefault method-specific options. Finally, we set this to run many iterations
with a lot of search points. Also we turn off post-processing. Why do we care about
this? Well, observe that our variables are not explicitly integer valued. We are in
effect fooling NMinimize into doing a discrete (and in fact combinatorial) optimization
problem, without explicit use of discrete variables. Hence default heuristics are likely
to conclude that we should attempt a “local” optimization from the final configuration
produced by the differential evolution code. This will almost always be unproductive,
and can take considerable time. So we explicitly disallow it. Indeed, if we have the
computation time to spend, we are better off increasing our number of generations, or
the size of each generation, or both.
Timing[
 {min, {s1, s2}} =
  getHalfSet[100, MaxIterations → 10000,
   Method → {DifferentialEvolution, CrossProbability → .8,
     SearchPoints → 100, PostProcess → False}]]
{2134.42, {2.006223098760529`*^-7,
  {{1, 2, 4, 6, 7, 11, 13, 15, 16, 17, 19, 21, 23, 25, 26, 27, 31, 34,
    37, 41, 43, 44, 45, 47, 50, 51, 52, 54, 56, 66, 67, 69, 72, 73,
    75, 77, 78, 79, 80, 86, 87, 88, 89, 90, 91, 93, 96, 97, 98, 100},
   {3, 5, 8, 9, 10, 12, 14, 18, 20, 22, 24, 28, 29, 30, 32, 33, 35, 36,
    38, 39, 40, 42, 46, 48, 49, 53, 55, 57, 58, 59, 60, 61, 62, 63, 64,
    65, 68, 70, 71, 74, 76, 81, 82, 83, 84, 85, 92, 94, 95, 99}}}}
We obtain a fairly small value for our objective function. I do not know if this is in
fact the global minimum, and the interested reader might wish to take up this problem
with an eye toward obtaining a better result.
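The same continuous-relaxation trick carries over to other optimizers. Below is a rough Python sketch using SciPy's differential evolution, not the chapter's actual code: the objective is a hypothetical stand-in for spfun (I assume, purely for illustration, that we balance the sums of square roots of the two halves), split_range mimics splitRange by ranking the relaxed [0, 1] variables, and polish=False plays the role of PostProcess → False.

```python
import numpy as np
from scipy.optimize import differential_evolution

n = 20  # a small instance for illustration

def split_range(x):
    # Analogue of splitRange: rank the continuous values and assign the
    # 1-based indices of the lower half to one subset, the rest to the other.
    order = np.argsort(x)
    half = len(x) // 2
    return np.sort(order[:half] + 1), np.sort(order[half:] + 1)

def spfun(x):
    # Hypothetical stand-in objective (an assumption, not the chapter's spfun):
    # squared difference between the sums of square roots of the two halves.
    s1, s2 = split_range(x)
    return (np.sqrt(s1).sum() - np.sqrt(s2).sum()) ** 2

# The objective depends only on the ranking of x, so it is piecewise constant;
# turning off the local polishing step (the PostProcess → False analogue)
# avoids a pointless gradient-based cleanup.
result = differential_evolution(
    spfun,
    bounds=[(0.0, 1.0)] * n,
    maxiter=200,        # rough analogue of MaxIterations
    popsize=15,         # rough analogue of SearchPoints (per-dimension factor)
    recombination=0.8,  # rough analogue of CrossProbability
    polish=False,
    seed=0,
)
s1, s2 = split_range(result.x)
print(result.fun, s1, s2)
```

As in the Mathematica version, nothing tells the optimizer the problem is discrete; the ranking step inside the objective does the combinatorial interpretation.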
A reasonable question to ask is how one would know, or even suspect, where to set
the CrossProbability parameter. A method I find useful is to do "tuning runs".
What this means is that we do several runs with a relatively small set of search points
and a fairly low bound on the number of generations (the MaxIterations option
setting in NMinimize ). Once we have a feel for which values seem to give better
results, we use them in the actual run, with option settings at their full values.
Suffice it to say
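Such tuning runs can be sketched as follows, again in hedged Python/scipy form rather than the chapter's NMinimize code, with a hypothetical stand-in objective: loop over candidate crossover settings with deliberately small budgets, average over a few seeds, and note which setting looks most promising.

```python
import numpy as np
from scipy.optimize import differential_evolution

n = 20  # keep the tuning runs cheap

def objective(x):
    # Hypothetical stand-in objective: balance the sums of square roots
    # of the two halves induced by ranking the relaxed variables.
    order = np.argsort(x)
    s1, s2 = order[: n // 2] + 1, order[n // 2 :] + 1
    return (np.sqrt(s1).sum() - np.sqrt(s2).sum()) ** 2

# Tuning runs: small population and few generations, several crossover
# probabilities, a few seeds each; average the best values found.
scores = {}
for cr in (0.2, 0.5, 0.8, 0.95):
    vals = [differential_evolution(objective, [(0.0, 1.0)] * n,
                                   maxiter=50, popsize=10,
                                   recombination=cr, polish=False,
                                   seed=s).fun
            for s in range(3)]
    scores[cr] = float(np.mean(vals))

best = min(scores, key=scores.get)
print(scores, "-> promising crossover setting:", best)
```

The winning setting would then be carried over to a full-budget run, in the spirit of the tuning procedure described above.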