The main difference between grading and arbiters is that arbiters use
information about the disagreements among the classifiers to select a
training set, whereas grading uses each classifier's disagreement with the
target function to produce a new training set.
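As a concrete illustration, the following is a minimal sketch of the grading
idea, assuming scikit-learn-style estimators; the function names
train_grading and predict_grading are ours, introduced only for this
example. Each base classifier gets a "grader" trained to predict whether
that classifier will be correct on a given instance, and at prediction time
only classifiers graded as correct take part in the vote.

import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression

def train_grading(base_learners, X, y, cv=5):
    graders = []
    for clf in base_learners:
        # Out-of-fold predictions, so the grader is not trained on
        # instances the base classifier has already memorized.
        preds = cross_val_predict(clf, X, y, cv=cv)
        correct = (preds == y).astype(int)   # 1 = base classifier correct
        # Assumes the base classifier errs on at least some instances,
        # so that both grader classes are present.
        graders.append(LogisticRegression().fit(X, correct))
        clf.fit(X, y)                        # refit base learner on all data
    return base_learners, graders

def predict_grading(base_learners, graders, X):
    preds = np.array([clf.predict(X) for clf in base_learners])       # (k, n)
    trusted = np.array([g.predict(X) for g in graders]).astype(bool)  # (k, n)
    out = np.empty(preds.shape[1], dtype=preds.dtype)
    for i in range(preds.shape[1]):
        pool = preds[trusted[:, i], i]
        if pool.size == 0:                   # no classifier is trusted here:
            pool = preds[:, i]               # fall back to a plain vote
        vals, counts = np.unique(pool, return_counts=True)
        out[i] = vals[np.argmax(counts)]
    return out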
9.4 Classifier Dependency
This property indicates whether the various classifiers are dependent or
independent. In a dependent framework the outcome of one classifier
affects the creation of the next classifier; in an independent framework
each classifier is built separately and their results are combined in some
fashion. Some researchers refer to this property as "the relationship
between modules" and distinguish between three different types: successive,
cooperative and supervisory [Sharkey (1996)]. Roughly speaking, "successive"
corresponds to "dependent", while "cooperative" corresponds to
"independent". The last type, supervisory, applies to those cases in which
one model controls the other model.
9.4.1 Dependent Methods
In dependent approaches for learning ensembles, there is an interaction
between the learning runs. Thus it is possible to take advantage of
knowledge generated in previous iterations to guide the learning in the
next iterations. We distinguish between two main approaches for dependent
learning, as described in the following sections [Provost and Kolluri (1997)].
9.4.1.1 Model-guided Instance Selection
In this dependent approach, the classifiers constructed in previous
iterations are used to manipulate the training set for the following
iteration (see Figure 9.7). This process can be embedded within the basic
learning algorithm. These methods usually ignore the data instances on
which the initial classifier is correct and learn only from the
misclassified instances.
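A minimal sketch of this scheme, assuming numpy arrays as input and
scikit-learn decision trees as the base learner (the helper name
train_on_errors is ours): each round trains a new classifier only on the
instances that the previous round misclassified.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_on_errors(X, y, rounds=3):
    classifiers = []
    Xr, yr = X, y
    for _ in range(rounds):
        clf = DecisionTreeClassifier(max_depth=2).fit(Xr, yr)
        classifiers.append(clf)
        wrong = clf.predict(Xr) != yr    # the misclassified instances
        if not wrong.any():              # nothing left to correct
            break
        Xr, yr = Xr[wrong], yr[wrong]    # the next classifier sees only these
    return classifiers

The members produced this way can then be combined, for example by voting;
boosting, described next, refines the same idea by reweighting rather than
discarding instances.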
The most well-known model-guided instance selection method is boosting.
Boosting (also known as arcing, for adaptive resampling and combining) is
a general method for improving the performance of a weak learner such as
classification rules or decision trees. The method works by repeatedly
running the weak learner on various distributions of the training data.
The classifiers produced by the weak learner are then combined into a
single composite strong classifier.
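As an illustration, here is a minimal sketch of AdaBoost, the canonical
boosting algorithm, assuming binary labels in {-1, +1}, numpy arrays as
input, and scikit-learn decision stumps as the weak learner.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)                # start from a uniform distribution
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)   # weak learner on reweighted data
        pred = stump.predict(X)
        err = w[pred != y].sum()           # weighted training error
        if err >= 0.5:                     # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err + 1e-10) / (err + 1e-10))
        w *= np.exp(-alpha * y * pred)     # up-weight misclassified instances
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    # The weighted vote of the weak classifiers is the strong classifier.
    scores = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(scores)

Each round concentrates the distribution on the instances the current
ensemble finds hard, which is exactly the model-guided instance selection
described above.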