Table 10.7 Average rankings of the algorithms by the Friedman procedure

    Algorithm    Ranking
    AntMiner     3.125
    CORE         3.396
    Hider        2.188
    SGERD        3.125
    Target       3.167
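To make the ranking column concrete, the following is a minimal Java sketch of the Friedman ranking step (an illustration under assumed data, not KEEL's actual code): for every data set the algorithms are ranked by their test error (rank 1 for the lowest error, ties sharing the average rank), and the ranks are then averaged per algorithm over the data sets. The error matrix in main is hypothetical.

// Minimal sketch (not KEEL's code): Friedman average ranks.
// errors[i][j] = error of algorithm j on data set i (values are hypothetical).
public class FriedmanRanks {

    public static double[] averageRanks(double[][] errors) {
        int n = errors.length, k = errors[0].length;
        double[] avg = new double[k];
        for (double[] row : errors) {
            double[] ranks = rankRow(row);           // rank 1 = smallest error
            for (int j = 0; j < k; j++) avg[j] += ranks[j];
        }
        for (int j = 0; j < k; j++) avg[j] /= n;     // average over data sets
        return avg;
    }

    // Ranks one row; tied values receive the mean of the ranks they span.
    private static double[] rankRow(double[] row) {
        int k = row.length;
        double[] ranks = new double[k];
        for (int j = 0; j < k; j++) {
            int less = 0, equal = 0;
            for (int m = 0; m < k; m++) {
                if (row[m] < row[j]) less++;
                else if (row[m] == row[j]) equal++;
            }
            ranks[j] = less + (equal + 1) / 2.0;     // average rank for ties
        }
        return ranks;
    }

    public static void main(String[] args) {
        // Hypothetical errors for 3 data sets and 5 algorithms.
        double[][] errors = {
            {0.21, 0.25, 0.18, 0.25, 0.23},
            {0.10, 0.14, 0.09, 0.12, 0.11},
            {0.31, 0.29, 0.27, 0.33, 0.30}
        };
        System.out.println(java.util.Arrays.toString(averageRanks(errors)));
    }
}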
Table 10.8 Results of the Friedman and Iman-Davenport tests

    Friedman value    p-value    Iman-Davenport value    p-value
    8.408             0.0777     2.208                   0.0742
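For reference, the statistics in Table 10.8 follow the standard definitions from the non-parametric comparison literature (with $k$ algorithms, $N$ data sets, and $R_j$ the average rank of algorithm $j$ from Table 10.7); the exact variant used by the chapter is assumed to be this usual one:

$$\chi_F^2 = \frac{12N}{k(k+1)}\left[\sum_{j=1}^{k} R_j^2 - \frac{k(k+1)^2}{4}\right], \qquad F_F = \frac{(N-1)\,\chi_F^2}{N(k-1) - \chi_F^2},$$

where $\chi_F^2$ is compared against a chi-square distribution with $k-1$ degrees of freedom and $F_F$ against an F distribution with $k-1$ and $(k-1)(N-1)$ degrees of freedom.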
Table 10.9 Adjusted p-values. Hider is the control algorithm

    i    Algorithm    Unadjusted p    p_Holm      p_Hoch
    1    CORE         0.00811         0.032452    0.03245
    2    Target       0.03193         0.09580     0.03998
    3    AntMiner     0.03998         0.09580     0.03998
    4    SGERD        0.03998         0.09580     0.03998
Finally, Table 10.9 shows the adjusted p-values obtained by taking the best method (Hider) as the control algorithm and applying the three post-hoc procedures explained above. The following analysis can be made:
- The procedure of Holm verifies that Hider is the best method with α = 0.10, but it only outperforms CORE considering α = 0.05.
- The procedure of Hochberg checks the supremacy of Hider with α = 0.05. In this case study, we can see that the Hochberg method is the one with the highest power.
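As a rough illustration (a minimal sketch, not the chapter's statistical module), the adjusted p-values in Table 10.9 can be reproduced from the unadjusted ones: with the k = 4 comparisons against the control sorted in increasing order, Holm applies the step-down rule p̃(i) = max over j ≤ i of min(1, (k − j + 1) p(j)), while Hochberg applies the step-up rule p̃(i) = min over j ≥ i of min(1, (k − j + 1) p(j)).

import java.util.Arrays;

// Minimal sketch (not the chapter's exact code): Holm and Hochberg adjusted
// p-values for k comparisons against a control, from the unadjusted p-values.
public class AdjustedPValues {

    // Step-down Holm adjustment on sorted p-values p[0] <= ... <= p[k-1].
    static double[] holm(double[] sorted) {
        int k = sorted.length;
        double[] adj = new double[k];
        double running = 0.0;
        for (int i = 0; i < k; i++) {
            running = Math.max(running, Math.min(1.0, (k - i) * sorted[i]));
            adj[i] = running;
        }
        return adj;
    }

    // Step-up Hochberg adjustment on sorted p-values p[0] <= ... <= p[k-1].
    static double[] hochberg(double[] sorted) {
        int k = sorted.length;
        double[] adj = new double[k];
        double running = 1.0;
        for (int i = k - 1; i >= 0; i--) {
            running = Math.min(running, Math.min(1.0, (k - i) * sorted[i]));
            adj[i] = running;
        }
        return adj;
    }

    public static void main(String[] args) {
        // Unadjusted p-values from Table 10.9 (CORE, Target, AntMiner, SGERD).
        double[] p = {0.00811, 0.03193, 0.03998, 0.03998};
        Arrays.sort(p);
        System.out.println("Holm:     " + Arrays.toString(holm(p)));
        System.out.println("Hochberg: " + Arrays.toString(hochberg(p)));
    }
}

Running this sketch on the unadjusted p-values of Table 10.9 yields approximately (0.032, 0.096, 0.096, 0.096) for Holm and (0.032, 0.040, 0.040, 0.040) for Hochberg, in line with the p_Holm and p_Hoch columns of the table.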
10.6 Summarizing Comments
In this chapter we have introduced a series of non-commercial Java software tools and focused on a particular one, named KEEL, which provides a platform for the analysis of ML methods applied to DM problems. This tool relieves researchers of much technical work and allows them to focus on the analysis of their new learning models in comparison with the existing ones. Moreover, the tool enables researchers with little knowledge of evolutionary computation methods to apply evolutionary learning algorithms to their work.
We have shown the main features of this software tool, distinguishing three main parts: a module for data management, a module for designing experiments with evolutionary learning algorithms, and a module with educational goals. We have also presented some case studies to illustrate its functionalities and the experiment set-up process.
 
 