3. Form an objective function that sums the n^2 products of elements of the first matrix and elements of the row-and-column permuted second matrix. The permutation is determined by the ordering of values of the variables vector. (Remark: some readers might recognize this as a matrix inner product, computed via the matrix trace of the usual matrix product.)
For improved speed (at the cost of memory) we memoize values of the objective function: once a value is computed we record it, so that any later recomputation becomes a fast lookup. Readers familiar with data structure methods may recognize this as an application of hashing.
4. Call NMinimize on the objective function, using the above ranges, constraints, and input option settings.
5. Return the minimal value found, along with the permutation that gives rise to that value.
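The steps above can be sketched in Python (the chapter's actual code, below, is in Mathematica; the names `make_objective` and `ordering` and the use of `functools.lru_cache` for the memoization are illustrative choices, not from the original):

```python
from functools import lru_cache

def ordering(vec):
    # Indices of vec in ascending order of value: a 0-based analogue
    # of Mathematica's Ordering, which turns a real vector into a permutation.
    return tuple(sorted(range(len(vec)), key=lambda i: vec[i]))

def make_objective(mat1, mat2):
    n = len(mat1)

    @lru_cache(maxsize=None)  # memoize: each permutation is evaluated once
    def obj_for_perm(perm):
        # Sum of the n^2 products of elements of mat1 with elements of the
        # row-and-column permuted mat2 (a matrix inner product).
        return sum(mat1[i][j] * mat2[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))

    def objective(vec):
        # The continuous variables vector encodes a permutation via its ordering.
        return obj_for_perm(ordering(vec))

    return objective
```

A call such as `make_objective(mat1, mat2)((0.2, 0.7, 0.4))` then evaluates the QAP objective at the permutation encoded by the vector.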
QAP[mat1_, mat2_, cp_, it_, sp_, sc_] := Module[
  {len = Length[mat1], obfunc, obfunc2, vars, x, nmin, vals, rnges},
  vars = Array[x, len];
  rnges = Map[{#, 0, 1} &, vars];
  obfunc[vec : {__Real}] := obfunc2[Ordering[vec]];
  obfunc2[perm_] := obfunc2[perm] =
    Total[Flatten[mat1*permuteMatrix[mat2, perm]]];
  {nmin, vals} = NMinimize[obfunc[vars], rnges,
    MaxIterations → it,
    Method → {DifferentialEvolution, SearchPoints → sp,
      CrossProbability → cp, ScalingFactor → sc,
      PostProcess → False}];
  Clear[obfunc2];
  {nmin, Ordering[vars /. vals]}
]
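The helper permuteMatrix is used above but not defined in this excerpt; presumably it applies the same permutation to both the rows and the columns of its matrix argument (as `mat2[[perm, perm]]` would). A minimal Python equivalent of that assumed behavior:

```python
def permute_matrix(mat, perm):
    # Apply perm (0-based) to both rows and columns of mat,
    # the assumed behavior of the chapter's permuteMatrix helper.
    return [[mat[i][j] for j in perm] for i in perm]
```

For example, `permute_matrix([[1, 2], [3, 4]], [1, 0])` swaps both rows and columns, yielding `[[4, 3], [2, 1]]`.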
Again we face the issue that this problem requires nonstandard values for the options to the DifferentialEvolution method in order to achieve a reasonable result. While this is regrettable, it is clearly better than having no recourse at all. The idea behind keeping CrossProbability relatively small is that we do not want many crossovers in mating a pair of vectors. This in turn is because of the way we define a permutation: it is not just values but relative values across the entire vector that give us the permutation. Thus disrupting more than a few coordinates, even when mating a pair of good vectors, is likely to give a bad vector. This was also the case with the set partitioning example we encountered earlier.
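The fragility of this encoding can be seen concretely: replacing even a couple of coordinates of a vector can change the induced permutation in many positions. A small Python illustration (the parent vectors are made-up examples, not data from the text):

```python
def ordering(vec):
    # Indices of vec in ascending order of value (0-based analogue
    # of Mathematica's Ordering).
    return tuple(sorted(range(len(vec)), key=lambda i: vec[i]))

# Two hypothetical parent vectors from the evolving population.
parent_a = [0.1, 0.4, 0.2, 0.9, 0.6]
parent_b = [0.8, 0.3, 0.7, 0.2, 0.5]

# A crossover of just two coordinates: child takes its first two from parent_b.
child = parent_b[:2] + parent_a[2:]

print(ordering(parent_a))  # permutation encoded by parent_a: (0, 2, 1, 4, 3)
print(ordering(child))     # (2, 1, 4, 0, 3): four of five positions changed
```

Because the permutation depends on relative values across the whole vector, the two swapped coordinates perturbed the decoded permutation almost everywhere, which is why a small CrossProbability helps here.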
We saw that the baseline permutation (do nothing) and random permutations tend to
be far from optimal, and even a large random sampling will get us only about halfway
from baseline to optimal. A relatively brief run with “good” values for the algorithm
parameters, on the other hand, yields something notably better. (In the next subsection
we explicitly show how one might use short
tuning runs
to find such parameter settings.)