Chapter 6
Applications
The present chapter describes applications of error entropy (and entropy-inspired) risks in a variety of classification tasks performed by more sophisticated machines than those considered in the preceding chapters. These include multi-layer perceptrons (MLPs), recurrent neural networks (RNNs), complex-valued neural networks (CVNNs), modular neural networks (MNNs), and decision trees. We also present a clustering algorithm based on an MEE-like concept, LEGClust, which is used in building MNNs. Besides implementation issues, an extensive set of experimental results and comparisons with non-EE approaches are presented. Since the respective learning algorithms use the empirical versions of the risks, the acronyms (MSE, CE, SEE, and so forth) labeling tables and graphs of results refer, from now on, to those empirical versions.
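
To make the notion of an empirical risk concrete, the sketch below (an illustration of ours, not code from the cited works) contrasts the empirical MSE risk with an empirical Shannon error entropy (SEE) obtained by a resubstitution estimate over a Gaussian kernel density estimate of the errors; the one-dimensional errors, the function names, and the Silverman-style bandwidth rule are assumptions made for the illustration.

import numpy as np

def empirical_mse(errors):
    """Empirical MSE risk: the mean of the squared errors e_i = t_i - y_i."""
    e = np.asarray(errors, dtype=float)
    return np.mean(e ** 2)

def empirical_see(errors, h=None):
    """Empirical Shannon error entropy (SEE): the resubstitution estimate
    -(1/n) sum_i log f_hat(e_i), where f_hat is a Gaussian kernel density
    estimate of the error density with bandwidth h."""
    e = np.asarray(errors, dtype=float)
    n = e.size
    if h is None:
        h = 1.06 * e.std() * n ** (-0.2)  # Silverman-style rule; an assumption
    diffs = e[:, None] - e[None, :]       # all pairwise differences e_i - e_j
    f_hat = np.exp(-0.5 * (diffs / h) ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
    return -np.log(f_hat).mean()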
6.1 MLPs with Error Entropy Risks
There are many ways to apply the entropy concept to neural networks, particularly to MLPs. These are exemplified by the works that use entropy to define and determine the complexity of a neural network [24, 57, 250], to generate a neural network [105, 226], and to perform optimization and pruning of a given neural network architecture [170, 167].
The first applications of error entropy as an NN risk functional to be minimized (the MEE approach) were developed by Príncipe and co-workers, as pointed out in Sect. 2.3.2. The extension of MEE to classification problems handled by feed-forward MLPs was first developed by Santos et al. [198, 199, 202, 205] using Rényi's quadratic entropy. The application of Shannon's entropy for the same purpose was subsequently studied by Silva et al. [217, 212].
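
As a concrete illustration of the risk minimized in those works, the following sketch (ours, not the authors' implementation) computes the empirical Rényi quadratic entropy of the errors, the resubstitution estimate of H_R2 = -log V(E), where the information potential V(E) = (1/n^2) sum_{i,j} G_{sigma*sqrt(2)}(e_i - e_j) follows from a Gaussian Parzen window of width sigma; the one-dimensional errors and the default sigma value are assumptions.

import numpy as np

def renyi_quadratic_error_entropy(errors, sigma=0.5):
    """Empirical Renyi quadratic entropy of the errors: H_R2 = -log V(E),
    with information potential V(E) = (1/n^2) sum_{i,j} G(e_i - e_j),
    where G is a zero-mean Gaussian of standard deviation sigma*sqrt(2)
    (the convolution of two Parzen kernels of width sigma)."""
    e = np.asarray(errors, dtype=float)
    n = e.size
    var = 2.0 * sigma ** 2                  # variance of G_{sigma*sqrt(2)}
    diffs = e[:, None] - e[None, :]         # all pairwise differences
    v = np.exp(-diffs ** 2 / (2.0 * var)).sum() / (n ** 2 * np.sqrt(2.0 * np.pi * var))
    return -np.log(v)

Since the logarithm is monotonic, minimizing H_R2 amounts to maximizing the information potential V(E), which is why gradient-based MEE training is often formulated as an ascent of V(E).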
The present section presents several results on the application of MEE to train MLPs and on the assessment of their performance on real-world datasets.