Figure 1. Flowchart of MCGA
The PNN is mainly used for classification problems, and it is trained in a supervised manner. Typically, a PNN consists of an input layer, an RBF layer, and a competitive (C) layer.

During the training stage, a training set of N_S data samples is used. The number of neurons in the first layer is identical to N_S, and the weight matrix of this layer is set to the transpose of the input matrix (Wasserman, 1993):

W_1 = P^T (14)

The number of neurons in the competitive layer, which is identical to the number of classes, is denoted by N_C, and the weight matrix of the second layer is set to the desired output matrix:

W_2 = T (15)

where W_1, W_2, P and T are the first-layer weight, second-layer weight, input, and desired-output matrices, respectively. Based on these assignments, the PNN is created with zero error on the training samples (Wasserman, 1993).

After training, a testing set of N_T new data samples is used to test the generalization of the PNN. When a new input vector is presented, the RBF layer computes the distances between it and the training samples. The RBF activation function then produces a vector whose elements indicate how close the input vector is to each training sample. Thus, RBF-layer neurons whose weight vectors are far from the input vector produce outputs near zero, while neurons whose weight vectors are close to it produce outputs near one; typically, several neurons are active to varying degrees. The competitive layer sums these contributions for each class to produce a vector of probabilities. Finally, the second layer outputs a 1 for the class with the largest probability and 0 for the other classes.
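The PNN forward pass described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the author's implementation: the function name `pnn_predict`, the Gaussian spread `sigma`, and the sample data are all assumptions for the sketch. It mirrors the text: the first-layer weights are the training samples themselves (Eq. 14), a Gaussian RBF activation turns distances into closeness values, and a one-hot target matrix (Eq. 15) sums the activations per class before the competitive step picks the largest.

```python
import numpy as np

def pnn_predict(train_X, train_y, x, sigma=0.5):
    """Sketch of a PNN forward pass (hypothetical helper, not from the source).

    train_X: (N_S, d) training samples; per Eq. (14), these act as the
             first-layer weight matrix W_1 = P^T.
    train_y: (N_S,) integer class labels in 0..N_C-1.
    x:       (d,) new input vector; sigma: assumed RBF spread.
    """
    n_classes = int(train_y.max()) + 1
    # RBF layer: distance between the input and every training sample
    dists = np.linalg.norm(train_X - x, axis=1)
    # Gaussian activation: near 1 for close samples, near 0 for far ones
    act = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))
    # Per Eq. (15), the second-layer weights are the one-hot target matrix T;
    # multiplying by T^T sums the activations class by class
    T = np.eye(n_classes)[train_y]          # (N_S, N_C)
    class_scores = T.T @ act                # summed contribution per class
    probs = class_scores / class_scores.sum()
    # Competitive step: 1 for the most probable class, 0 elsewhere
    return probs, int(np.argmax(probs))
```

For example, with two well-separated clusters labeled 0 and 1, an input near the second cluster is assigned class 1, and the probability vector sums to one by construction.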