of representing the functions that are characterized by good performance on the
training set (as underlined in (Anguita et al. 2011, 2013; Koltchinskii 2006)): these
functions will most likely be chosen by the learning process, so there seems
to be no reason to search through more complex spaces. Moreover, note that few bits
are required to represent these functions, so contemplating an infinite-
dimensional space appears to be unmotivated by practical needs (Anguita et al. 2013).
In the SRM framework we have to search for the simplest hypothesis space (chosen before
looking at the training set (Vapnik 1995)) that guarantees the best trade-off between
accuracy on the training set and complexity of the space. The introduction of a bit-
based hypothesis space is also encouraged by the basic ML idea of searching for the
simplest class of functions capable of solving the problem under examination.
In Tables 5.2 and 5.3, the confusion matrices on the test data of the MC-LK-SVM
and of the MC-HF-SVM with k = 8 bits are depicted. Measures of overall
accuracy, sensitivity and specificity are also given and exhibit very similar values for
the two approaches. Small variations are noticed in the recognition accuracy of the
dynamic activities, such as walking downstairs and walking upstairs, which
also display some misclassifications mainly to
Table 5.2 Confusion matrix of the classification results on the test data using the traditional
floating-point MC-LK-SVM

Activity       WK    WU    WD    SI    ST    LD   Sensitivity (%)   Specificity (%)
WK            109     0     5     0     0     0        95.61             97.63
WU              1    95    40     0     0     0        69.85             97.86
WD             15     9   119     0     0     0        83.22             93.03
SI              0     5     0   132     5     0        92.96             99.38
ST              0     0     0     4   108     0        96.43             99.26
LD              0     0     0     0     0   142       100.00            100.00
Accuracy (%)                                           89.35

Note The bold diagonal highlights the most important part of the confusion matrix.
Table 5.3 Confusion matrix of the classification results on the test data using the fixed-point
MC-HF-SVM with k = 8 bits

Activity       WK    WU    WD    SI    ST    LD   Sensitivity (%)   Specificity (%)
WK            109     2     3     0     0     0        95.61             97.63
WU              1    98    37     0     0     0        72.06             96.63
WD             15    14   114     0     0     0        79.72             93.81
SI              0     5     0   131     6     0        92.25             99.54
ST              0     1     0     3   108     0        96.43             99.11
LD              0     0     0     0     0   142       100.00            100.00
Accuracy (%)                                           88.97

Note The bold diagonal highlights the most important part of the confusion matrix.
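The per-class figures reported in Tables 5.2 and 5.3 can be recomputed directly from the matrix counts; the following minimal sketch (using the MC-LK-SVM matrix of Table 5.2, with rows as true activities and columns as predicted ones) derives the sensitivity, specificity and overall accuracy in the standard one-vs-rest way:

```python
import numpy as np

# Confusion matrix of the floating-point MC-LK-SVM (Table 5.2):
# rows = true activities, columns = predicted activities.
labels = ["WK", "WU", "WD", "SI", "ST", "LD"]
C = np.array([
    [109,   0,   5,   0,   0,   0],
    [  1,  95,  40,   0,   0,   0],
    [ 15,   9, 119,   0,   0,   0],
    [  0,   5,   0, 132,   5,   0],
    [  0,   0,   0,   4, 108,   0],
    [  0,   0,   0,   0,   0, 142],
])

total = C.sum()
tp = np.diag(C)                # correctly classified instances per class
fn = C.sum(axis=1) - tp        # false negatives: rest of each row
fp = C.sum(axis=0) - tp        # false positives: rest of each column
tn = total - tp - fn - fp      # all remaining instances

sensitivity = tp / (tp + fn)   # one-vs-rest recall per class
specificity = tn / (tn + fp)
accuracy = tp.sum() / total

for lab, se, sp in zip(labels, sensitivity, specificity):
    print(f"{lab}: sensitivity {100 * se:.2f}%  specificity {100 * sp:.2f}%")
print(f"Overall accuracy: {100 * accuracy:.2f}%")
```

Replacing the counts with those of Table 5.3 reproduces the MC-HF-SVM figures in the same way.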