3.3 Learning systems
Brazdil et al. (2003) describe a meta-learning method to support the selection of candidate learning algorithms. They adopt the Instance-Based Learning (IBL) approach because it keeps the system extensible: once a new experimental result becomes available, it can be integrated into the existing results without the need for complex re-learning. A k-Nearest Neighbor (k-NN) algorithm is used to identify the datasets that are most similar to the one at hand. The distance between datasets is assessed with a relatively small set of data characteristics, selected to represent properties that affect algorithm performance, and the result is presented to the user in the form of a ranking. The prediction is constructed by aggregating performance information for the candidate algorithms on the selected datasets, using a ranking method based on the relative performance between pairs of algorithms. This work shows how meta-learning can be exploited to pre-select and recommend one or more classification algorithms to the user. The authors claim that choosing adequate methods in a multistrategy learning system can significantly improve its overall performance, and they show that meta-learning with k-NN improves the quality of ranking methods in general.
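The two-step scheme above can be sketched in a few lines: select the k stored datasets closest to the new one in meta-feature space, then rank the candidate algorithms by counting pairwise performance wins on those neighbors. All dataset names, meta-feature values, and accuracies below are illustrative placeholders, and the pairwise-win count is only a simplified stand-in for the relative-performance ranking used by Brazdil et al.

```python
import math

# Hypothetical meta-feature vectors for previously evaluated datasets
# (e.g. number of instances, number of attributes, class entropy).
meta_db = {
    "iris":   [150, 4, 1.58],
    "wine":   [178, 13, 1.57],
    "credit": [690, 15, 0.99],
}

# Illustrative accuracies of candidate algorithms on those datasets.
perf = {
    "iris":   {"knn": 0.95, "tree": 0.93, "nb": 0.94},
    "wine":   {"knn": 0.96, "tree": 0.91, "nb": 0.97},
    "credit": {"knn": 0.81, "tree": 0.85, "nb": 0.78},
}

def nearest(new_features, k=2):
    """Return the k stored datasets closest to the new one (Euclidean)."""
    return sorted(meta_db, key=lambda d: math.dist(meta_db[d], new_features))[:k]

def rank_algorithms(new_features, k=2):
    """Rank algorithms by pairwise wins over the k most similar datasets."""
    neighbours = nearest(new_features, k)
    algos = list(perf[neighbours[0]])
    wins = {a: 0 for a in algos}
    for d in neighbours:
        for a in algos:
            for b in algos:
                if a != b and perf[d][a] > perf[d][b]:
                    wins[a] += 1  # a outperformed b on neighbour d
    return sorted(algos, key=lambda a: wins[a], reverse=True)
```

A new dataset described by, say, `[170, 12, 1.5]` would be matched to its nearest stored datasets, and the recommendation is the aggregated ranking rather than a single algorithm, reflecting the ranking-based output described above.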
3.4 Knowledge discovery and data mining
Hilario & Kalousis (2000) address the model selection problem in knowledge discovery systems, defined as the problem of selecting the most appropriate learning model or algorithm for a given application task. They propose a framework for characterizing classification learning algorithms, as well as their underlying models, using learning algorithm profiles. These profiles consist of meta-level feature-value vectors that describe learning algorithms from the point of view of their representation and functionality, efficiency, resilience, and practicality. Values for these features are assigned on the basis of author specifications, expert consensus, or previous empirical studies. The authors review past evaluations of the better-known learning algorithms and suggest an experimental strategy for building algorithm profiles on more quantitative grounds. The scope of the paper is limited to learning algorithms for classification tasks, but the framework can be applied to learning models for other tasks such as regression or association.
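A profile in this sense is just a meta-level feature-value vector per algorithm, which can then be matched against the requirements of a task. The sketch below assumes a handful of invented feature names and values (the actual profile features in the framework are more extensive); it only illustrates the filtering idea.

```python
# Hypothetical algorithm profiles: meta-level feature-value pairs covering
# representation, efficiency, resilience, and practicality. Feature names
# and values here are illustrative, not taken from the original framework.
profiles = {
    "C4.5":        {"bias": "axis-parallel", "handles_missing": True,
                    "incremental": False, "interpretability": "high"},
    "naive_bayes": {"bias": "probabilistic", "handles_missing": True,
                    "incremental": True,  "interpretability": "medium"},
    "mlp":         {"bias": "non-linear",   "handles_missing": False,
                    "incremental": True,  "interpretability": "low"},
}

def matching_algorithms(requirements):
    """Return the algorithms whose profile satisfies every requirement."""
    return [name for name, prof in profiles.items()
            if all(prof.get(feat) == val for feat, val in requirements.items())]
```

For example, requiring `{"handles_missing": True}` would narrow the candidates to the algorithms whose profiles declare that capability, before any empirical comparison is run.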
Kalousis & Theoharis (1999) present an intelligent assistant called NOEMON, which reduces the effort of the classifier selection task by inducing helpful suggestions from background information. For each registered classifier, NOEMON measures its performance on a collection of datasets that constitute a morphological space. To suggest the most appropriate classifier, NOEMON decides on the basis of the morphological similarity between the new dataset and the existing collection. Rules are induced from those performance measurements and accommodated in a knowledge base, and the suggestions on the most appropriate classifier for a dataset are based on those rules. The purpose of NOEMON is to supply the expert with suggestions based on its knowledge of the performance of models and algorithms on related problems; this knowledge is accumulateded in a knowledge base and updated as new problems are processed.
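The accumulate-and-suggest loop can be sketched as follows. Instead of NOEMON's induced rules, this minimal version stores each processed problem as a pair of morphological measures and best-performing classifier, suggests by morphological proximity, and grows the knowledge base as new problems arrive; all measure values and classifier names are illustrative.

```python
import math

# Knowledge base of processed problems: morphological measures of a
# dataset paired with the classifier that performed best on it
# (illustrative values, standing in for NOEMON's induced rules).
knowledge_base = [
    ([0.2, 0.8], "decision_tree"),
    ([0.9, 0.1], "naive_bayes"),
]

def suggest(morphology):
    """Suggest the classifier of the morphologically closest known problem."""
    _, best = min(knowledge_base,
                  key=lambda record: math.dist(record[0], morphology))
    return best

def update(morphology, best_classifier):
    """Accumulate knowledge as each new problem is processed."""
    knowledge_base.append((morphology, best_classifier))
```

Each call to `update` plays the role of the knowledge-base accumulation described above: subsequent suggestions automatically take the newly processed problem into account.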