Fig. 2.4. Classifiers as localised maps from the input space into the output space.
The illustration shows three classifiers c1, c2, and c3 that match different areas of
the input space. Their location in the input space is determined by each classifier's
condition, which, in this example, is given by intervals on the coordinates of the input
space. Each classifier provides an input-dependent prediction of the output. In this
illustration, the classifiers form their prediction through a linear combination of the
input space coordinates, thus forming planes in the input/output space.
for classification tasks. Therefore, the concept of classifiers providing a localised
model that maps inputs to outputs generalises over all LCS tasks, which will be
exploited when developing the LCS model.
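The idea of a classifier as a localised input-to-output map can be made concrete in code. The sketch below is a minimal, hypothetical illustration matching the setting of Fig. 2.4: the condition is an interval on each input coordinate, and the prediction is a linear combination of the input coordinates. The class name, attribute names, and numbers are assumptions for illustration, not the notation of this work.

```python
import numpy as np

class Classifier:
    """A localised model: an interval condition plus a linear prediction.

    Hypothetical sketch; names and values are illustrative only.
    """
    def __init__(self, lower, upper, weights):
        self.lower = np.asarray(lower)       # per-dimension lower bounds
        self.upper = np.asarray(upper)       # per-dimension upper bounds
        self.weights = np.asarray(weights)   # [bias, w_1, ..., w_d]

    def matches(self, x):
        # The condition: does the input lie inside the interval?
        return bool(np.all((self.lower <= x) & (x <= self.upper)))

    def predict(self, x):
        # Linear combination of the input coordinates (a plane, plus bias)
        return float(self.weights[0] + self.weights[1:] @ x)

# A classifier matching the square [0, 0.5] x [0, 0.5]:
c1 = Classifier([0.0, 0.0], [0.5, 0.5], [0.1, 1.0, -0.5])
x = np.array([0.2, 0.3])
print(c1.matches(x))   # True: x lies inside c1's interval
print(c1.predict(x))   # 0.1 + 1.0*0.2 - 0.5*0.3 = 0.15
```

Inputs outside the interval are simply not matched by this classifier; other classifiers cover those regions.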
In the light of the above, calling the localised models “classifiers” is a mis-
nomer, as they are not necessarily classification models. In fact, their use for
classification has only emerged recently; before that, they were mostly rep-
resented by regression models. However, to make this work easily accessi-
ble, the LCS jargon of calling these models “classifiers” will be maintained.
The reader is nonetheless urged to keep in mind that this term is not related
to classification in the sense discussed in this book.
2.3.4 Recovering the Global Prediction
Several classifiers can match the same input, but each might provide a different
prediction for its output. To get a single output prediction for each input,
the classifiers' output predictions need to be combined; in XCS and all its
variants this is done by a weighted average of these predictions, with weights
proportional to the fitness of the associated classifiers [237, 238].
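The fitness-weighted average described above can be sketched as follows. This is a minimal illustration under the assumption that we already have, for a given input, the predictions and fitness values of all matching classifiers; the function name is hypothetical.

```python
import numpy as np

def mixed_prediction(predictions, fitnesses):
    """Combine matching classifiers' predictions by a weighted average,
    with weights proportional to classifier fitness (minimal sketch)."""
    p = np.asarray(predictions, dtype=float)
    f = np.asarray(fitnesses, dtype=float)
    return float(np.sum(f * p) / np.sum(f))

# Two classifiers match the same input with different predictions;
# the fitter classifier contributes more to the global prediction:
print(mixed_prediction([1.0, 3.0], [0.75, 0.25]))  # 1.5
```

Note that only the relative fitness values matter, since the weights are normalised by their sum.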
The component responsible for combining the classifier predictions in XCS and
other LCS has mostly been ignored, until it was shown that combining the classifier
predictions in proportion to the inverse variance of the classifier models gives a
lower prediction error than when using the inverse fitness [83]. At the same time,
Brown, Kovacs and Marshall have demonstrated that the same component can
be improved in UCS by borrowing concepts from ensemble learning [29].
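The inverse-variance weighting reported in [83] can be sketched analogously: each classifier's prediction is weighted by the reciprocal of its model's prediction variance, so that less noisy local models dominate the mixture. Again a hypothetical minimal sketch, assuming the per-classifier variances are available.

```python
import numpy as np

def inverse_variance_mix(predictions, variances):
    """Weight each matching classifier's prediction by the inverse
    variance of its model (minimal sketch; names are assumptions)."""
    p = np.asarray(predictions, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse variances
    return float(np.sum(w * p) / np.sum(w))

# The low-variance (more reliable) classifier dominates the mixture:
print(inverse_variance_mix([1.0, 3.0], [0.1, 1.0]))  # close to 1.0
```

Intuitively, a classifier whose local model fits its matched region well has low prediction variance and therefore receives a large weight, which is what drives the lower prediction error observed in [83].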
 