meta-learning algorithms, just one learning algorithm is selected to learn base-level classifiers. The other approach includes voting, stacking [7], and cascading [8], which combine base-level classifiers from different learning algorithms.
METAL [9] and IDA [10] are also selective meta-learning approaches, selecting
a proper learning algorithm for a given data set with a heuristic score, which is
called meta-knowledge.
The constructive meta-level processing scheme [11] takes a meta-learning ap-
proach, where the objective process is controlled with meta-knowledge, as shown
in the upper part of Fig. 6.2. In this scheme, we construct meta-knowledge rep-
resented by method repositories. The meta-knowledge consists of information
about functional parts, restrictions on the combination of these functional parts,
and ways to re-construct object algorithms with these functional parts.
6.4 Performance Comparisons of Learning Algorithms for Rule Model Construction
To predict the human evaluation labels for a new rule from its objective indices more accurately, our rule evaluation support method requires a rule evaluation model with high predictive accuracy.
In this section, we first present the results of empirical evaluations of a dataset
obtained from the result of meningitis data mining [12] and that of the eight rule
sets from eight UCI benchmark datasets [13]. Based on the experimental results,
we discuss the following: the accuracy of rule evaluation models, the learning
curves of the learning algorithms, and the contents of the learned rule evaluation
models.
For evaluating the accuracy of the rule evaluation models, we compared the predictive accuracies obtained on the entire dataset and with Leave-One-Out validation. The accuracy on a validation dataset D is calculated from the number of correctly predicted instances Correct(D) as Acc(D) = (Correct(D) / |D|) × 100, where |D| is the size of the dataset. The recall of class i on a validation dataset is calculated from the correctly predicted instances of that class, Correct(D_i), as Recall(D_i) = (Correct(D_i) / |D_i|) × 100, where |D_i| is the number of instances of class i. Further, the precision of class i is calculated from the number of instances predicted as class i as Precision(D_i) = (Correct(D_i) / Predicted(D_i)) × 100.
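These percentage-based metrics can be sketched in a few lines of Python; the class labels in the comments are hypothetical placeholders, not the labels used in the study.

```python
# Sketch of the evaluation metrics above: Acc, Recall, and Precision
# are all expressed as percentages over a validation dataset.

def accuracy(true, pred):
    """Acc(D) = (Correct(D) / |D|) * 100."""
    correct = sum(t == p for t, p in zip(true, pred))
    return correct / len(true) * 100

def recall(true, pred, cls):
    """Recall(D_i) = (Correct(D_i) / |D_i|) * 100."""
    correct = sum(t == p == cls for t, p in zip(true, pred))
    actual = sum(t == cls for t in true)
    return correct / actual * 100 if actual else 0.0

def precision(true, pred, cls):
    """Precision(D_i) = (Correct(D_i) / Predicted(D_i)) * 100."""
    correct = sum(t == p == cls for t, p in zip(true, pred))
    predicted = sum(p == cls for p in pred)
    return correct / predicted * 100 if predicted else 0.0

# With hypothetical labels "I" (interesting) and "NI" (not interesting):
# accuracy(["I","I","NI","NI"], ["I","NI","NI","NI"]) -> 75.0
```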
With regard to the learning curves, we obtained curves for the accuracies
of learning algorithms on the entire training dataset to evaluate whether each
learning algorithm could perform well in the early stage of the rule evaluation process. The accuracies on randomly sub-sampled training datasets were averaged over 10 trials at each percentage of the subset.
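The sub-sampling procedure above can be sketched as follows; the learner and the percentage grid here are illustrative assumptions, not those used in the study.

```python
import random

def learning_curve(train, labels, fit, percentages, trials=10, seed=0):
    """Average accuracy (%) of `fit` over random sub-samples of the data.

    For each percentage, draw `trials` random subsets of that size,
    train a model on each, and score it on the full training set."""
    rng = random.Random(seed)
    n = len(train)
    curve = []
    for pct in percentages:
        k = max(1, n * pct // 100)
        accs = []
        for _ in range(trials):
            idx = rng.sample(range(n), k)
            model = fit([train[i] for i in idx], [labels[i] for i in idx])
            correct = sum(model(x) == y for x, y in zip(train, labels))
            accs.append(correct / n * 100)
        curve.append((pct, sum(accs) / trials))
    return curve

# Illustrative learner: always predicts the majority class of its sample.
def fit_majority(xs, ys):
    majority = max(set(ys), key=ys.count)
    return lambda x: majority
```

Plotting the returned (percentage, average accuracy) pairs yields the learning curve for one algorithm; repeating with each learner allows an early-stage comparison.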
By observing the elements of the rule evaluation models on the meningitis
data mining results, we considered the characteristics of the objective indices,
which are used in these rule evaluation models.
In order to construct a dataset for learning a rule evaluation model, the values of the 39 objective indices shown in Table 6.1 were calculated for each rule. Thus, each dataset for each rule set has the same