Table 6.23 shows the clustering solutions obtained by LEGClust and by
Chameleon in the experiments with the Iris, Olive, Wdbc and Wine datasets,
together with the parameters used for each experiment. The final number of
clusters equals the number of classes. LEGClust obtains better results than
Chameleon for all datasets except Olive.
6.5 Task Decomposition and Modular Neural Networks
This section addresses a complex type of classifier: the modular neural
network (MNN), with task decomposition performed by a clustering algorithm.
We specifically present the algorithmic description and results of MNNs
built with MEE MLPs (with Rényi's quadratic entropy as risk functional)
and with the LEGClust algorithm presented in Sect. 6.4 performing the task
decomposition [200].
Task decomposition is one of the strategies used to simplify the learning
process of any learning system. It consists of partitioning the input space
into several regions, thereby decomposing the initial problem into different
subproblems. This rests on the assumption that the regions have different
characteristics and should therefore be learned by specialized classifiers.
By subsequently integrating the learning results, one hopes to achieve a
better solution to the initial problem. Generally, task decomposition can be
achieved in three different ways: explicit decomposition (the task is
decomposed by the designer before training), class decomposition (the
decomposition is based on the classes of the problem) and automatic
decomposition. Automatic decomposition can be performed either during the
learning stage or before training the modules, using a clustering algorithm.
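As a concrete illustration of decomposition before training, the following minimal sketch partitions the input space with a plain k-means clustering. This is only a stand-in for LEGClust, which is not reproduced here; the data, the choice of k-means and all names are illustrative assumptions, not the book's method.

```python
# Hedged sketch: automatic task decomposition before training, using a
# minimal pure-Python k-means as a stand-in for LEGClust.
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Partition `points` (list of (x, y) tuples) into k regions."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's region.
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # Update step: move each centroid to the mean of its region.
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return labels, centroids

# Two well-separated blobs: the decomposition should recover them as
# regions, each of which would then be learned by its own specialized
# classifier (a module of the MNN).
blob_a = [(0.0 + 0.1 * i, 0.0) for i in range(5)]
blob_b = [(10.0 + 0.1 * i, 10.0) for i in range(5)]
labels, centroids = kmeans(blob_a + blob_b, k=2)
subproblems = {j: [p for p, l in zip(blob_a + blob_b, labels) if l == j]
               for j in range(2)}
```

Each entry of `subproblems` then constitutes the training set of one module, mirroring the "before training" variant of automatic decomposition described above.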
6.5.1 Modular Neural Networks
A modular neural network (MNN) is an ensemble of learning machines.
The idea behind this kind of learning structure is the divide-and-conquer
paradigm: the problem should be divided into smaller sub-problems that are
solved by experts (modules) and their partial solutions should be integrated
to produce a final solution (Fig. 6.37).
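The divide-and-conquer structure can be sketched as follows. The experts here are deliberately trivial majority-class stubs, not the MEE MLPs of the text, and the gating by nearest region centroid is an illustrative assumption about the integration step, chosen only to make the routing concrete.

```python
# Hedged sketch of the divide-and-conquer idea: per-region "expert"
# modules plus a simple gate that routes each input to one expert.
import math
from collections import Counter

class MajorityExpert:
    """Trivial module: always predicts the majority class of its region."""
    def fit(self, class_labels):
        self.prediction = Counter(class_labels).most_common(1)[0][0]
        return self

    def predict(self, x):
        return self.prediction

class ModularClassifier:
    """Integrate experts: gate each input to the nearest region centroid."""
    def __init__(self, centroids, experts):
        self.centroids = centroids
        self.experts = experts

    def predict(self, x):
        j = min(range(len(self.centroids)),
                key=lambda j: math.dist(x, self.centroids[j]))
        return self.experts[j].predict(x)

# Two regions with known centroids; each expert learns its sub-problem
# from the class labels of the training points falling in that region.
centroids = [(0.0, 0.0), (10.0, 10.0)]
experts = [MajorityExpert().fit(["a", "a", "b"]),   # region 0 labels
           MajorityExpert().fit(["b", "b", "b"])]   # region 1 labels
mnn = ModularClassifier(centroids, experts)
```

In a real MNN the experts would be trained networks and the integration could be a gating network or a weighted combination rather than hard nearest-centroid routing.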
Ensembles of learning machines have often been shown to give better results
than single learners. The evidence is mainly empirical [21, 54], but there
are some theoretical results [129, 8, 5] that support this claim when
certain conditions are satisfied.