genetic algorithm. A classifier is considered accurate if its mean absolute
prediction error over all matched observations is below the minimum error¹
threshold ε₀. The genetic algorithm provides accurate classifiers that match
larger areas of the input space with more reproductive opportunity. However,
overly general classifiers, that is, classifiers that match overly large
areas of the input space, feature a mean absolute error larger than ε₀ and
are therefore no longer accurate. Thus, the genetic algorithm “pushes”
towards more general classifiers, but only until their error reaches ε₀
[53]. In combination with the competition between classifiers that match the
same input, XCS can be said to aim at finding the smallest non-overlapping
set of accurate classifiers. From this perspective one could define an
optimal set of classifiers that depends on ε₀. However, such a definition is
not very appealing, as i) it is based on an algorithm, rather than having an
algorithm that is based on the definition; ii) it is based solely on
intuition; iii) the best set of classifiers is fully determined by the
setting of ε₀, which might depend on the task at hand; and iv) ε₀ is the
same for the whole input space, so XCS cannot cope with tasks where the
noise level varies across different areas of the input space.
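The threshold-based accuracy criterion just described can be sketched in a
few lines of Python. The sharp drop-off beyond ε₀ follows the commonly used
XCS accuracy function; the parameters alpha and nu are not mentioned in the
text above and their values here are illustrative assumptions.

```python
# Sketch of an XCS-style accuracy check. The power-law drop-off and
# the parameters alpha and nu follow the commonly used XCS accuracy
# function; their values here are illustrative assumptions.

def accuracy(error: float, eps0: float,
             alpha: float = 0.1, nu: float = 5.0) -> float:
    """Accuracy of a classifier with mean absolute prediction error
    `error`, given the error threshold eps0.

    A classifier is fully accurate while its error stays below eps0;
    beyond eps0 its accuracy drops off sharply, so overly general
    classifiers lose reproductive opportunity."""
    if error < eps0:
        return 1.0
    return alpha * (error / eps0) ** (-nu)

# A general but still accurate classifier keeps full accuracy,
# while an overly general one is penalised heavily:
assert accuracy(0.005, eps0=0.01) == 1.0
assert accuracy(0.02, eps0=0.01) < 0.1
```

This makes the generalisation pressure concrete: enlarging a classifier's
matched area costs nothing as long as its error stays below ε₀, but is
punished sharply once the error crosses the threshold.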
YCS [33] was developed by Bull as a simplified version of XCS such that its
classifier dynamics can be modelled by difference equations. While it still
measures the mean absolute prediction error of each classifier, it defines
the fitness as
being inversely proportional to this error, rather than using any accuracy con-
cept based on some error threshold. Additionally, its genetic algorithm differs
from the one used in XCS in that it selects classifiers from the whole set rather
than only from the set that matches the current input. A fitness that is
inversely proportional to the error makes the genetic algorithm assign more
reproductive opportunity to low-error classifiers that match many inputs.
How low this
error has to be depends on the error of other competing classifiers in the set,
and on the maximum number of classifiers allowed, as that number determines
the number of classifiers that the genetic algorithm aims at assigning to each
input. Due to these dependencies it is difficult to define which set of classifiers
YCS aims at finding, particularly as it depends on the dynamics of the genetic
algorithm and the interplay of several system parameters. Its pressure towards
more general classifiers comes from those classifiers matching more inputs and
thus updating their error estimates more quickly, which gives them an initial
higher fitness than more specific classifiers. However, this pressure is implicit
and weaker than in XCS, which is easily seen in Fig. 1(a) of [33], where general
and specific, but equally accurate, classifiers peacefully and stably co-exist in
the population. It can only be stated that YCS supports classifiers that
match larger areas of the input space, but only until their errors become
too large relative to those of other classifiers in the set.
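The inverse-error fitness of YCS can be sketched as follows; the function
name and the small smoothing constant are illustrative assumptions, since
[33] gives a full difference-equation model rather than this simplified
form.

```python
# Sketch of a YCS-style fitness, assuming fitness is simply the
# reciprocal of the classifier's mean absolute error. The smoothing
# constant avoids division by zero and is an assumption here, not
# part of the description in [33].

def ycs_fitness(error: float, smoothing: float = 1e-6) -> float:
    """Fitness grows as the error shrinks. Unlike XCS there is no
    accuracy threshold: how fit a classifier is depends only on its
    error relative to other classifiers in the population."""
    return 1.0 / (error + smoothing)

# Only relative error matters, so general and specific classifiers
# with equal error receive equal fitness and can co-exist:
assert ycs_fitness(0.01) > ycs_fitness(0.02)
```

The absence of any threshold is exactly why the generalisation pressure is
weaker than in XCS: once two classifiers reach the same error, nothing in
the fitness distinguishes the more general one from the more specific one.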
CCS [153, 154], in contrast, has a very clear definition of what types of clas-
sifiers win the competition in a classification task: it aims at maximally general
¹ The term minimum error for ε₀ is a misnomer, as it specifies the maximum
error that a classifier can have to still be accurate. Thus, ε₀ should be
called the maximum admissible error or similar.