int apsModel(double d[])
{
    const double ROUNDING_THRESHOLD = 0.5;
    double dblTemp = 0.0;
    dblTemp = d[1];
    dblTemp += (d[0]*d[5]);
    dblTemp += (0.938233*((d[3]+d[6])*d[8]));
    return (dblTemp >= ROUNDING_THRESHOLD ? 1 : 0);
}
(5.24b)
It is worth pointing out how compact this highly accurate model is: it requires just 11 nodes. Also interesting is that only six of the nine attributes are used in this model. Note also that just one of the five numerical constants available in this experiment was integrated in the fully expressed program.
One of the best solutions created with the GEP-RNC algorithm was discovered in generation 758 of run 93. Its genes and their respective arrays of random numerical constants are shown below (the three sub-ETs are linked by addition):
Gene 1: *./.+.d1./.+.d5.d4.d0.d5.d4.d0.d6.d5.?.3.0.2.7.8.9.7.5
C1: {-1.08551, 0.060577, -0.614655, -0.041046, 0.717499, 1.647156, 0.255981, -0.421814, 0.44043, -0.919983}
Gene 2: *.d6.+.*.d5.d8.+.d0.d2.d6.d2.d5.d5.d5.d7.1.4.7.3.6.8.6.8
C2: {-1.07431, 1.53714, -0.861328, -0.014801, -1.192108, -1.983307, 1.749695, 0.403107, 0.377991, -0.106109}
Gene 3: *.d3.*.d3.d1.d3.*.d7.d4.d7.d5.d8.d0.d5.d4.4.6.1.8.2.5.0.9
C3: {-1.08551, 0.060577, 1.801148, 0.737946, -1.972596, 1.647156, 0.255981, 0.696961, -0.231292, 1.210998}
(5.25a)
This model correctly classifies 341 of the 350 fitness cases in the training set and 170 of the 174 fitness cases in the testing set. This corresponds to a training set classification error of 2.571% (accuracy of 97.429%) and a testing set classification error of 2.299% (accuracy of 97.701%). Thus, this model generalizes as well as model (5.23) designed with the GEA-B algorithm. More formally, it corresponds to the following C++ function: