results: only three samples were not correctly classified. Note that we
used the same sample set for training and for testing the net; the low
number of samples did not allow validation techniques such as jackknifing
or splitting the sample set into separate training and testing samples.
For the same reason, we used a simple perceptron rather than a more
complex multilayer perceptron; its simple structure results in a linear
separation of the given sample set.
The perceptron had a Heaviside activation function and an additional
bias for threshold shifting. We trained the network for 1000 epochs,
although convergence was achieved after fewer than 50 epochs, and
obtained a reconstruction error of only three samples.
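The training procedure itself is not spelled out here; the following is a minimal sketch of such a perceptron, assuming the classic perceptron learning rule, 0/1 class labels, and hypothetical array names X (samples by parameters) and y (diagnoses).

```python
import numpy as np

def heaviside(z):
    """Heaviside step activation: 1 where z >= 0, else 0."""
    return np.where(z >= 0, 1.0, 0.0)

def train_perceptron(X, y, epochs=1000, lr=0.1):
    """Classic perceptron rule with an additional bias weight w0."""
    w = np.zeros(X.shape[1])
    w0 = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            delta = lr * (y_i - heaviside(w @ x_i + w0))
            if delta != 0.0:
                w += delta * x_i   # move the hyperplane toward x_i
                w0 += delta        # the bias shifts the threshold
                mistakes += 1
        if mistakes == 0:          # converged before the epoch limit
            break
    return w, w0
```

With such a net, the reconstruction error quoted above would correspond to counting np.sum(heaviside(X @ w + w0) != y).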
The weight vector of the learned perceptron converged to

w = (0.047, 0.66, 3.1, 0.010, 0.010, 0.010, 0.029, 0.010, 1.0, 0.32, <10^-4, 0.059, 4.1)

with bias w_0 = 2.1, where we had already multiplied w by the dewhitening PCA matrix.
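The book's exact whitening code is not shown; as a hedged sketch with synthetic stand-ins (the names X, V, and w_white are assumptions), PCA whitening transforms the data by x_w = V x, and the identity w_white · (V x) = (Vᵀ w_white) · x pulls weights learned in the whitened space back to the original parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 13))      # hypothetical data: samples x 13 parameters

# PCA whitening: decorrelate the centred data and scale it to unit variance.
Xc = X - X.mean(axis=0)
eigval, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
V = np.diag(eigval ** -0.5) @ E.T  # whitening matrix, x_w = V @ x
Xw = Xc @ V.T                      # whitened samples (one per row)

w_white = rng.normal(size=13)      # stand-in for weights trained on Xw

# Map the weights back to the original coordinates:
w = V.T @ w_white
assert np.allclose(Xw @ w_white, Xc @ w)  # identical decision function
```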
If we normalize the signals to unit variance, we get normalized weights

w = (2.7, 0.69, 4.4, 4.7, 0.17, 5.6, 0.40, 0.19, 3.1, 6.0, 1.7, 1.8, 1.6)

and w_0 = 6.0. These entries in w can be used to detect parameters that have a significant influence on the separation of the perceptron; these are mainly parameters 1 (RANTESRO), 3 (RANBALLY), 4 (IP101RO), 6 (IP102RO), 9 (CD4/CD8), and 10 (CX3CD8).
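Continuing the sketch above, normalizing the signals to unit variance amounts to rescaling each weight by the standard deviation of its parameter; the rescaled magnitudes are directly comparable, which is how the most influential parameters can be identified:

```python
# w @ x == (w * sigma) @ (x / sigma): unit-variance signals rescale weights.
sigma = Xc.std(axis=0)
w_norm = w * sigma

# The largest |w_norm| entries mark the most influential parameters
# (1-based indices, matching the numbering used in the text).
ranking = np.argsort(np.abs(w_norm))[::-1] + 1
print(ranking[:6])
```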
By setting the other parameters to zero, we constructed a new perceptron

w = (0.047, 0, 3.2, 0.010, 0, 0.010, 0, 0, 1.04, 0.32, 0, 0, 0)
and w_0 = 2.0, again given for the non-normalized source data. If we
apply the data to this new, reduced perceptron, we get a reconstruction
error of five samples, which means that even this low number of
parameters seems to distinguish the diagnosis quite well.
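Under the same assumptions as the sketches above (hypothetical X, y and a trained w, w0 as in the first sketch), the reduced perceptron is obtained by zeroing all but the six selected weights and recounting the misclassified samples:

```python
# Keep only parameters 1, 3, 4, 6, 9, 10 (1-based, as in the text).
keep = np.array([1, 3, 4, 6, 9, 10]) - 1
w_reduced = np.zeros_like(w)
w_reduced[keep] = w[keep]

# Reconstruction error of the reduced perceptron on the same samples;
# y and heaviside are the hypothetical labels and activation from above.
errors = int(np.sum(heaviside(X @ w_reduced + w0) != y))
```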
Further information can be obtained from the net if we look at the
sample classification without applying the signum function.
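Dropping the threshold then just means inspecting the raw activations; in the sketch's terms, the sign of each value gives the class, and its magnitude indicates how far the sample lies from the separating hyperplane (up to the norm of w):

```python
# Pre-threshold outputs: the sign gives the class, the magnitude a
# rough confidence for each sample.
raw = X @ w_reduced + w0
```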