with the number of classifiers K. This complexity might be reduced by replacing
the generalised softmax function by well-tuned heuristics, but further research
is required to design such heuristics.
Two methods to find the model structure M that maximises p(M|D) have been introduced
to emphasise that in theory any global optimisation procedure can be used to
find the best set of classifiers. On the one hand, a GA was described that operates
in the manner of a Pittsburgh-style LCS; on the other hand, an MCMC was employed
that samples p(M|D) and thus acts like a stochastic hill-climber. Both methods
are rather crude, but sufficient to demonstrate the abilities of the optimality
criterion.
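To make the second method concrete, the following is a minimal sketch of such an MCMC-style search in Python. The function log_p_M_given_D, the proposal operator propose, and the structure representation are illustrative assumptions, not part of the methods described above:

    import numpy as np

    def mcmc_structure_search(data, log_p_M_given_D, propose, M_init,
                              n_steps=1000, rng=None):
        # Stochastic search over model structures M, guided by p(M|D).
        # log_p_M_given_D(M, data) is assumed to return an approximation
        # to ln p(M|D) up to an additive constant; propose(M, rng) is
        # assumed to return a random neighbouring structure (for example,
        # with a classifier added, removed, or its matching function
        # mutated) and to be symmetric.
        rng = rng or np.random.default_rng()
        M = best_M = M_init
        log_p = best_log_p = log_p_M_given_D(M, data)
        for _ in range(n_steps):
            M_new = propose(M, rng)
            log_p_new = log_p_M_given_D(M_new, data)
            # Metropolis acceptance: improvements are always accepted,
            # and worse structures are occasionally accepted so that the
            # chain can escape local optima.
            if np.log(rng.uniform()) < log_p_new - log_p:
                M, log_p = M_new, log_p_new
                if log_p > best_log_p:
                    best_M, best_log_p = M, log_p
        return best_M, best_log_p

Returning the best structure visited, rather than the last one sampled, is what makes the sampler behave as the stochastic hill-climber described above.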
Using the introduced optimisation algorithms, it was demonstrated on a set
of regression tasks that the definition of the best set of classifiers i) is able to
differentiate between patterns in the data and noise, ii) prefers simpler model
structures over more complex ones, and iii) can handle data where the level
of noise differs for different areas of the input space. These features have not
previously been available in any LCS without manually tuning system parameters
that influence not only the model structure search procedure but also the
definition of what constitutes a good set of classifiers. Being able to handle
different levels of noise is a feature that has possibly not been available in any
LCS before, regardless of how the system parameters are tuned.
Finally, the model structure search has been discussed in more detail, to
point out how it might be improved and modified to meet different requirements.
Currently, neither of the two model structure search procedures makes use of any
information that is available from the probabilistic LCS model other than an
approximation to p(M|D). Thus, the power of these methods can be improved
by using this additional information and by exploiting recent developments that
improve on the genetic operators.
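As a sketch of the first suggestion, mutation could be biased by per-classifier information from the probabilistic model; the error estimates and the mutate_one operator below are hypothetical and stand in for whatever quantities such an informed operator would draw on:

    import numpy as np

    def informed_mutation(M, classifier_errors, mutate_one, rng=None):
        # Instead of choosing the classifier to mutate uniformly at
        # random, choose it with probability proportional to an assumed
        # per-classifier error estimate, so that the search concentrates
        # on the parts of the model structure that fit the data worst.
        rng = rng or np.random.default_rng()
        errors = np.asarray(classifier_errors, dtype=float)
        probs = errors / errors.sum()
        k = rng.choice(len(M), p=probs)
        M_new = list(M)
        M_new[k] = mutate_one(M_new[k], rng)  # e.g. perturb its matching function
        return M_new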
Another downside of the presented methods is that they currently only support
batch learning. Incremental learning can be implemented on both the model
parameter and the model structure level, each of which was discussed
separately. While on the parameter level only minor modifications are required,
providing an incremental implementation on the model structure level, which
effectively results in a Michigan-style LCS, is a major challenge. Its solution
will finally bridge the gap between Pittsburgh-style and Michigan-style LCS,
which are, as presented here, just different implementations with the same aim
of finding the set of classifiers that explains the data best. Up until now, there
has been no formal definition of this aim, and providing such a definition
is the first step towards a solution to that challenge.
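To indicate what such minor modifications at the parameter level might look like, the sketch below updates the weight vector of a single linear classifier incrementally by matching-weighted recursive least squares; the class and its interface are illustrative assumptions rather than a prescribed implementation:

    import numpy as np

    class IncrementalClassifier:
        # A single linear classifier whose parameters are updated one
        # observation at a time, without revisiting earlier data.
        def __init__(self, dim):
            self.w = np.zeros(dim)        # weight vector estimate
            self.P = np.eye(dim) * 1e3    # inverse input covariance estimate

        def update(self, x, y, m):
            # x: input vector, y: scalar output, m: matching degree in [0, 1].
            if m == 0.0:
                return                    # unmatched inputs leave w unchanged
            Px = self.P @ x
            gain = m * Px / (1.0 + m * (x @ Px))
            self.w += gain * (y - self.w @ x)
            self.P -= np.outer(gain, Px)

The corresponding update on the model structure level has no such simple form, which is why an incremental, Michigan-style implementation remains the harder part of the challenge.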