Table 9.10. Performance and settings used in the postoperative patient problem.

    Number of runs                     100
    Number of generations              10,000
    Population size                    30
    Number of training instances      60
    Number of testing instances       30
    Number of attributes               8
    Attribute set                      A-H
    Terminal set / Classes             a b c
    Maximum arity                      5
    Head length                        10
    Gene length                        51
    Mutation rate                      0.044
    Inversion rate                     0.1
    IS transposition rate              0.1
    RIS transposition rate             0.1
    One-point recombination rate       0.3
    Two-point recombination rate       0.3
    Fitness function                   Eq. (3.8)
    Average best-of-run fitness        47.14
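Note that the gene length in Table 9.10 is not an independent choice: in gene expression programming it follows from the head length h and the maximum arity n, since the tail must be able to supply terminals for the worst case, giving a tail of t = h(n - 1) + 1 symbols and a gene length of g = h + t. The short Python sketch below (illustrative only, not code from this book; the dictionary keys are hypothetical names) gathers the settings of Table 9.10 and checks this arithmetic: t = 10 * 4 + 1 = 41, so g = 10 + 41 = 51.

    # A minimal sketch collecting the Table 9.10 settings and checking
    # the gene-length arithmetic: the tail must hold enough terminals
    # for the worst case, so t = h*(n - 1) + 1 and g = h + t.

    settings = {
        "runs": 100,
        "generations": 10_000,
        "population_size": 30,
        "training_instances": 60,
        "testing_instances": 30,
        "attributes": list("ABCDEFGH"),      # attribute set A-H
        "classes": ["a", "b", "c"],          # terminal set / classes
        "max_arity": 5,
        "head_length": 10,
        "gene_length": 51,
        "mutation_rate": 0.044,
        "inversion_rate": 0.1,
        "is_transposition_rate": 0.1,
        "ris_transposition_rate": 0.1,
        "one_point_recombination_rate": 0.3,
        "two_point_recombination_rate": 0.3,
    }

    h = settings["head_length"]
    n = settings["max_arity"]
    t = h * (n - 1) + 1                      # tail length: 10 * 4 + 1 = 41
    assert h + t == settings["gene_length"]  # gene length: 10 + 41 = 51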
therefore, very good solutions to the postoperative problem. Indeed, these
results are considerably better than the 48% accuracy achieved by the rules
induced with a learning system based on rough sets (Budihardjo et al. 1991).
Let's now see how the decision trees of gene expression programming can
be pruned to create rules that are both accurate and compact.
9.4 Pruning Trees with Parsimony Pressure
We know already that all GEP systems learn better if slightly redundant configurations are used (see the discussion of The Role of Neutrality in Evolution in chapter 12). And the decision trees of gene expression programming are no exception: they, too, evolve better when they are allowed to experiment with all kinds of configurations, which is only possible if neutral regions are available. So, a good strategy for maximizing evolution is to use slightly redundant organizations while simultaneously applying a little parsimony pressure on the size of the evolving trees.
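As a rough illustration of how such a gentle pressure can be applied (a minimal generic sketch, not the exact scheme used in this book; the function name, the 1/5000 weight, and the size measure are assumptions), one can scale the raw fitness by a tiny size-dependent bonus that favors smaller trees:

    # A generic sketch of parsimony pressure (illustrative only): raw
    # fitness is scaled by a small bonus for compactness, so that size
    # only breaks ties between solutions of comparable accuracy.

    def fitness_with_parsimony(raw_fitness, size, s_min, s_max, weight=1/5000):
        """Return raw_fitness scaled by a slight preference for small size.

        raw_fitness : accuracy-based fitness of the individual
        size        : number of nodes in the expressed tree
        s_min, s_max: smallest and largest tree sizes in the population
        weight      : strength of the pressure (tiny, so accuracy dominates)
        """
        if s_max == s_min:                    # all programs the same size
            return raw_fitness
        bonus = weight * (s_max - size) / (s_max - s_min)
        return raw_fitness * (1.0 + bonus)

    # Example: two equally accurate trees; the smaller one edges ahead.
    print(fitness_with_parsimony(47.0, size=12, s_min=12, s_max=51))
    print(fitness_with_parsimony(47.0, size=51, s_min=12, s_max=51))

Because the weight is so small, the parsimony term never overrides accuracy; it merely steers evolution toward the most compact of the equally fit trees, which is the pruning effect this section is after.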