One of the best solutions designed with the GEA-B algorithm was found in
generation 670 of run 58 (the sub-ETs are linked by addition):
-.d3./.d4.+.*.d8.d2.d4.d1.d2.d6.d1.d8.d0
+.+.d7.d5.+.-.d1.d4.d7.d5.d3.d3.d3.d1.d5
+.d0.*.d7.d6.d2.d2.d6.d1.d8.d2.d2.d1.d2.d4
(5.23a)
This model correctly classifies 341 out of 350 fitness cases in the training set
and 170 out of 174 fitness cases in the testing set, which corresponds to a
training set classification error of 2.571% and a classification accuracy of
97.429%, and to a testing set classification error of 2.299% and a classification
accuracy of 97.701%. More formally, the model (5.23) can be translated into
the following C++ function:
int apsModel(double d[])
{
    // Threshold for rounding the raw model output to a 0/1 class label
    const double ROUNDING_THRESHOLD = 0.5;
    double dblTemp = 0.0;
    // Sub-ET 1
    dblTemp = (d[3]-(d[4]/((d[2]*d[4])+d[8])));
    // Sub-ET 2 (linked by addition)
    dblTemp += ((d[5]+((d[4]-d[7])+d[1]))+d[7]);
    // Sub-ET 3 (linked by addition)
    dblTemp += (d[0]+(d[7]*d[6]));
    return (dblTemp >= ROUNDING_THRESHOLD ? 1 : 0);
}
(5.23b)
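For illustration, the function above could be called for a single sample as in the short sketch below; the attribute values used here are hypothetical placeholders, not cases taken from the data set.

#include <cstdio>

int apsModel(double d[]);  // model (5.23b) above

int main()
{
    // Hypothetical attribute vector d0..d8 (placeholder values only)
    double sample[9] = {0.4, 1.2, 0.7, 0.9, 0.3, 1.1, 0.5, 0.8, 0.6};
    std::printf("Predicted class: %d\n", apsModel(sample));
    return 0;
}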
One of the best solutions created with the GEP-NC algorithm was found
in generation 754 of run 47. Its genome is shown below (the sub-ETs are
linked by addition):
d1.*.+.d4.-.-.+.d1.c2.c1.d2.c1.d4.d0.d0
*.d0.d5.d4.*.*.d6.d8.d3.c0.d5.c1.d8.c4.d5
*.c0.*.+.d8.d3.d6.d1.c1.d6.d5.c1.d4.c4.c0
(5.24a)
This model correctly classifies 340 out of 350 fitness cases in the training set
and 171 out of 174 fitness cases in the testing set. This corresponds to a
training set classification error of 2.857% and a classification accuracy of
97.143%, and to a testing set classification error of 1.724% and a classification
accuracy of 98.276%. Thus, this model generalizes slightly better than the
model (5.23) designed by the GEA-B algorithm. More formally, the model
(5.24a) can be expressed by the following C++ function:
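In this genome, the root of the first gene is the terminal d1, so sub-ET 1 reduces to d1; sub-ET 2 reduces to d0*d5; and sub-ET 3 to c0*((d3+d6)*d8). The sketch below shows this structure only: the function name apsModelNC and the value assigned to c0 are assumptions, since the actual value of c0 is drawn from gene 3's array of constants, which is not listed in (5.24a).

int apsModelNC(double d[])
{
    // ASSUMED placeholder: the real value of c0 comes from gene 3's
    // array of constants and is not shown in the genome (5.24a)
    const double C0 = 1.0;
    const double ROUNDING_THRESHOLD = 0.5;
    double dblTemp = 0.0;
    // Sub-ET 1: the root of gene 1 is the terminal d1
    dblTemp = d[1];
    // Sub-ET 2 (linked by addition)
    dblTemp += (d[0]*d[5]);
    // Sub-ET 3 (linked by addition)
    dblTemp += (C0*((d[3]+d[6])*d[8]));
    return (dblTemp >= ROUNDING_THRESHOLD ? 1 : 0);
}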