Table 6.3 Confusion matrices for the linear method MC-L1-SVM with D2 (top) and D2T (bottom)

MC-L1-SVM — D2
Activity    WK    WU    WD    SI    ST    LD    Sensitivity (%)   Specificity (%)
WK         493     2     1     0     0     0    99.40             98.98
WU          23   445     1     2     0     0    94.48             99.56
WD           2     5   412     1     0     0    98.10             99.92
SI           0     4     0   428    59     0    87.17             99.35
ST           0     0     0    13   519     0    97.56             97.56
LD           0     0     0     0     0   537    100.00            100.00
Accuracy: 96.17%

MC-L1-SVM — D2T
Activity    WK    WU    WD    SI    ST    LD    Sensitivity (%)   Specificity (%)
WK         492     3     1     0     0     0    99.19             99.06
WU          18   452     0     1     0     0    95.97             99.80
WD           5     1   413     1     0     0    98.33             99.92
SI           0     1     1   436    53     0    88.80             99.31
ST           0     0     0    15   517     0    97.18             97.81
LD           0     0     0     0     0   537    100.00            100.00
Accuracy: 96.61%
Note The diagonal (bold in the original) contains the correctly classified samples and is the most important part of the confusion matrix; rows are the true activities, columns the predicted ones.
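As a sanity check, the per-class sensitivity, specificity, and overall accuracy reported in Table 6.3 can be recomputed directly from the raw counts of the confusion matrix. The short Python sketch below does so for the D2 block (rows are true activities, columns are predicted ones); it is an illustration of the standard definitions, not code from the book.

```python
# Recompute the Table 6.3 (top) metrics from the raw D2 confusion matrix.
labels = ["WK", "WU", "WD", "SI", "ST", "LD"]
cm = [  # rows = true activity, columns = predicted activity
    [493,   2,   1,   0,   0,   0],
    [ 23, 445,   1,   2,   0,   0],
    [  2,   5, 412,   1,   0,   0],
    [  0,   4,   0, 428,  59,   0],
    [  0,   0,   0,  13, 519,   0],
    [  0,   0,   0,   0,   0, 537],
]

total = sum(sum(row) for row in cm)

def sensitivity(i):
    # True positives over all samples whose true class is i.
    return 100.0 * cm[i][i] / sum(cm[i])

def specificity(i):
    # True negatives over all samples whose true class is not i.
    fp = sum(cm[r][i] for r in range(len(cm)) if r != i)
    tn = total - sum(cm[i]) - fp
    return 100.0 * tn / (tn + fp)

accuracy = 100.0 * sum(cm[i][i] for i in range(len(cm))) / total

for i, lab in enumerate(labels):
    print(f"{lab}: sens = {sensitivity(i):.2f}%  spec = {specificity(i):.2f}%")
print(f"accuracy = {accuracy:.2f}%")
```

Running this reproduces the sensitivity and specificity columns of the table and the 96.17% overall accuracy.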
Table 6.4 Comparison between MC-L2-SVM and MC-L1-SVM regarding accuracy and dimensionality reduction

Feature group    d     MC-L2-SVM Accuracy (%)   MC-L1-SVM Accuracy (%)   d − |S|   ρ (%)   ρ (%)
AGT             272    96.06                    96.61                    168       61.76   19.24
AGTF            561    96.54                    96.17                    239       42.6    12.03
and number of selected features (remembering that L2 procedures do not perform
any dimensionality reduction).
In particular, we considered only the groups of features that proved necessary
for HAR purposes according to the results derived in the previous section. These are
the AGTF and AGT subsets, which correspond to D2 and D2T respectively, and they
obtained classification accuracies of 96.17 and 96.61%. It is worth noting
that the L1 models perform comparably to the L2 models on our datasets, while also
markedly reducing the dimensionality of the problem. In the literature, L2 models
commonly outperform L1 models; these findings therefore suggest that the
classification performance of L1 is largely due to its intrinsic filtering of noisy
features, which negatively affect L2 classifiers.
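The intrinsic feature filtering of L1 regularization can be illustrated with a toy example. The sketch below is not the book's MC-L1-SVM: it solves a plain lasso regression by ISTA (proximal gradient descent) on hypothetical synthetic data, where only three of ten features carry signal. The L1 penalty drives the weights of the uninformative features exactly to zero, so the selected feature set S is simply the support of the learned weight vector.

```python
import numpy as np

# Toy illustration (not the book's MC-L1-SVM): L1 regularization zeroes out
# the weights of uninformative features. Data, lambda, and the solver are
# all hypothetical choices made for this sketch.
rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.5, 1.0] + [0.0] * 7)  # only 3 informative features
y = X @ w_true + 0.1 * rng.normal(size=n)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA for the lasso: min_w 0.5 * ||Xw - y||^2 + lam * ||w||_1
lam = 30.0
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
w = np.zeros(d)
for _ in range(2000):
    w = soft_threshold(w - step * X.T @ (X @ w - y), step * lam)

selected = np.flatnonzero(np.abs(w) > 1e-8)
print("selected feature indices:", selected.tolist())
```

With this setup the noise features receive exactly zero weight, mirroring how MC-L1-SVM discards irrelevant dimensions as a by-product of training.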
Dimensionality reduction is also an important aspect of the comparison between the
L1- and L2-norm algorithms. As we can see in Table 6.4, MC-L1-SVM achieves
an effective reduction in the number of features, discarding up to 42% of the
total.
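Assuming the first ρ column of Table 6.4 reports the discarded fraction (d − |S|)/d as a percentage (an interpretation inferred here from the table's values, not stated explicitly in this excerpt), the reported rates follow from simple arithmetic:

```python
# Sketch: reduction rates implied by Table 6.4, under the assumption that
# rho = (d - |S|) / d, i.e. the fraction of features discarded by MC-L1-SVM.
def reduction_rate(d, removed):
    """Percentage of the original d features discarded by the L1 model."""
    return 100.0 * removed / d

agt = reduction_rate(272, 168)    # AGT group:  d = 272, d - |S| = 168
agtf = reduction_rate(561, 239)   # AGTF group: d = 561, d - |S| = 239
print(f"AGT: {agt:.2f}%, AGTF: {agtf:.2f}%")
```

This reproduces the 61.76% (AGT) and 42.6% (AGTF) figures in the table.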
 