5.2.2 Connectionist Classifier
Artificial neural networks (ANN) are widely used in classification tasks due to their
ability to generalise: they draw conclusions from the available training data and
modify their topology by adjusting the weights associated with interconnections in
such a way that performance increases [11].
The Multilayer Perceptron (MLP), a popular example of a feed-forward, unidirectional
network topology, most often uses in its learning phase some version of the
backpropagation training rule, which aims to minimise the error at the network
output, for all outputs and all learning facts. The initial weights assigned to
connections strongly influence the training procedure and can lead to distinctly
different predictive accuracies. To minimise this effect, a multi-start procedure is
used: learning is repeated after randomisation of the weights, and the average
classification ratio is calculated. The performance of a connectionist classifier
also depends on the number of hidden layers and the neurons comprising them;
these parameters are usually established in tests.
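As a rough illustration, the sketch below implements such a multi-start procedure with scikit-learn's MLPClassifier; the dataset, network size and number of restarts are illustrative assumptions, not values taken from the text.

```python
# A minimal sketch of the multi-start procedure: repeat learning with
# re-randomised initial weights and report the average classification ratio.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = []
for seed in range(10):  # each restart re-randomises the initial weights
    mlp = MLPClassifier(hidden_layer_sizes=(10,),  # illustrative topology
                        max_iter=2000,
                        random_state=seed)  # seed controls weight initialisation
    mlp.fit(X_train, y_train)
    scores.append(mlp.score(X_test, y_test))

# average classification ratio over all restarts
print(f"mean accuracy: {np.mean(scores):.3f} (std {np.std(scores):.3f})")
```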
One of the disadvantages of ANNs lies in knowledge representation: even though the
networks learn from the input data sets, the relationships they detect remain hidden
in the internal structure of the solution and do not help in understanding the
information. When there are no significant inconsistencies in the training sets,
neural networks usually perform better for higher rather than lower numbers of
inputs. With just a few inputs a network has trouble converging: learning can
require many more runs and still yield low classification accuracy. Choosing the
best among generally poor solutions, without a clear understanding of the underlying
patterns, can be next to impossible and may still lead to inferior results.
Because of the general idea behind the concept of artificial neural networks, it is
more natural to establish the irrelevance of some inputs by observing how their
connection weights are adjusted to negligible values during learning; such
connections can then be deleted by pruning [21, 23]. Thus backward elimination of
features seems a better approach than forward selection.
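A minimal sketch of such weight-guided backward elimination follows; using the summed magnitude of first-layer weights as a relevance measure, and removing the weakest input at each step, are simplifying assumptions made for illustration, not the pruning algorithms of [21, 23].

```python
# Backward elimination of inputs guided by learned connection weights:
# train, measure per-input weight mass, drop the weakest input, repeat.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
features = list(range(X.shape[1]))

while len(features) > 2:  # illustrative stopping criterion
    mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                        random_state=0).fit(X[:, features], y)
    # total magnitude of weights leaving each input neuron (first layer)
    relevance = np.abs(mlp.coefs_[0]).sum(axis=1)
    weakest = features[int(np.argmin(relevance))]
    print(f"removing feature {weakest} (weight mass {relevance.min():.3f})")
    features.remove(weakest)

print("retained features:", features)
```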
5.2.3 Rule-Based Classification
Rule classifiers enable a very clear and straightforward expression of the available
knowledge through decision rules of the IF … THEN … type. The premise (or condition)
part specifies conditions on attributes that, when met, indicate a particular
decision class (or a group of classes) to which the considered object should belong.
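For instance, a hypothetical rule might read: IF temperature ≥ 38 AND cough = yes THEN class = flu. A toy rendering in code, with invented attributes and classes, could look as follows.

```python
# A toy illustration of IF ... THEN decision rules: each rule maps
# conditions on attributes to a decision class (hypothetical example).
def classify(obj):
    # rule 1: IF temperature >= 38 AND cough = yes THEN class = flu
    if obj["temperature"] >= 38 and obj["cough"] == "yes":
        return "flu"
    # rule 2: IF temperature < 37 THEN class = healthy
    if obj["temperature"] < 37:
        return "healthy"
    return "unknown"  # no rule matched

print(classify({"temperature": 38.5, "cough": "yes"}))  # -> flu
```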
In multicriteria decision making [12, 13], better results are obtained by employing
approaches that allow not only nominal but also ordinal classification, made
possible by detecting and exploiting partial orderings of the values of all
variables [16]. One such methodology is the Dominance-based Rough Set Approach
(DRSA), a modification of classical rough set processing [29, 30] that replaces the
indiscernibility relation with dominance [35].
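To make the distinction concrete, the sketch below shows the dominance relation in its simplest form, assuming all criteria are gain-type (higher values are better); this is an illustrative simplification of the relation used in DRSA.

```python
# Dominance relation: x dominates y when it is at least as good as y
# on every criterion (assuming all criteria are gain-type).
def dominates(x, y):
    """True if x is at least as good as y on all criteria."""
    return all(xi >= yi for xi, yi in zip(x, y))

a, b = (3, 5, 2), (3, 4, 2)
print(dominates(a, b))  # True: a is at least as good as b everywhere
print(dominates(b, a))  # False: b is worse on the second criterion
```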