Biomedical Engineering Reference
In-Depth Information
of these rectangles is covered by the second class area. Therefore, this point will be
recognized as a point of the second class.
3.1.2 Random Subspace Classifier
When the dimension n of the input space (Fig. 3.2 ) increases, it is necessary to widen the gap between the neuron thresholds h_ij and l_ij, so for large n many thresholds h_ij reach the upper limit of the variable X_i, and many thresholds l_ij reach its lower limit. In this case, the corresponding neuron a_ij always has output 1 and gives no information about the input data; only a small part of the neurons a_ij can change their outputs. To save calculation time, we modified the RTC classifier so that each block j (j = 1, ..., S) includes not all the input variables (X_1, ..., X_n) but only a small, randomly selected part of them. This small set of chosen components of the input vector we term a random subspace of the input space. For each block j we select a different random subspace. Thus, we represent our input space by a multitude of random subspaces.
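The subspace selection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the dimension n, the number of blocks S, and the subspace size k are assumed values chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100  # dimension of the input space (assumed value)
S = 8    # number of blocks (assumed value)
k = 5    # size of each random subspace (assumed value)

# For each block j, select a small random subset of the input
# variables (X_1, ..., X_n); each subset is one random subspace.
subspaces = [rng.choice(n, size=k, replace=False) for _ in range(S)]
```

Because the subspaces are drawn independently, different blocks generally see different components of the input vector, which is what lets the ensemble of blocks cover the whole input space.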
The Random Subspace Classifier was developed for the general classification
problem in parameter spaces of limited dimensions. The structure of this classifier
is presented in Fig. 3.6 .
This classifier contains four neural layers: the input layer X = (x_1, x_2, ..., x_K); the intermediate layer GROUP = (group_1, group_2, ..., group_N); the associative layer A = (a_1, a_2, ..., a_N); and the output layer Y = (y_1, y_2, ..., y_m). Each neuron x_i of the input layer corresponds to one component of the input vector to be classified. Each group_i of the intermediate layer contains some quantity P of neuron pairs p_ij. Each pair p_ij contains one ON neuron and one OFF neuron (Fig. 3.6 ) and is connected with a randomly selected neuron x_r of the input layer X. The ON neuron is active if x_r > T_ONij, where T_ONij is the threshold of the ON neuron; the OFF neuron is active if x_r < T_OFFij, where T_OFFij is the threshold of the OFF neuron. All neuron thresholds of the layer GROUP are selected randomly under the condition T_ONij < T_OFFij in each pair. All the neurons of group_i are connected with one neuron a_i of the
associative layer A. A neuron a_i is active if and only if all the neurons of group_i are active. The output of an active neuron equals 1; the output of an inactive neuron equals 0. Each neuron of the A layer is connected with all neurons of the output layer Y. The training process changes the weights of these connections.
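The forward pass through the GROUP and A layers described above can be sketched as follows. This is a hedged illustration under assumed sizes (K, N, P) and random wiring; it only shows how the binary associative vector is formed, not the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(1)

K = 20  # input layer size (assumed value)
N = 50  # number of groups, hence associative neurons (assumed value)
P = 3   # neuron pairs per group (assumed value)

# Each pair p_ij is connected with a randomly selected input neuron x_r.
pair_inputs = rng.integers(0, K, size=(N, P))

# Thresholds are chosen randomly with T_ONij < T_OFFij in each pair,
# so both neurons of a pair are active when T_ONij < x_r < T_OFFij.
t = np.sort(rng.random((N, P, 2)), axis=-1)
t_on, t_off = t[..., 0], t[..., 1]

def associative_layer(x):
    """Binary A-layer output: a_i = 1 iff every pair in group_i is active."""
    xr = x[pair_inputs]                       # value seen by each pair p_ij
    pair_active = (xr > t_on) & (xr < t_off)  # ON and OFF neurons both active
    return pair_active.all(axis=1).astype(int)

a = associative_layer(rng.random(K))  # binary vector of length N
```

The all-pairs AND within a group makes each a_i respond only when the input falls inside every threshold window of its group, which is what produces the sparse binary code that the trainable A-to-Y connections then classify.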
The training and winner-selection rules are the same as in the classifier LIRA. The main difference between the classifier LIRA and the RSC classifier is the absence of the GROUP layer in LIRA: instead of each pair of neurons in the layer GROUP, the classifier LIRA uses one connection, of either ON type or OFF type, which can be active or inactive. This modification permits an increase in the classification speed. Another difference is related to the applications of these classifiers. We applied the RSC classifier to texture recognition and other problems where the activities of the input neurons were calculated with a special feature-extraction algorithm, whereas the classifier LIRA is applied directly to raw images.