Algorithm 2: EX-SMO algorithm for solving Problem (6.7)
Data: D_n, λ, C and the numerical precision ε
Result: w, b
t = 0, α_t = 0_n, β_t = 0_d;
repeat
    (α_{t+1}, b_{t+1}) = argmin_α P_1(α, β_t), obtained by running 2 iterations of the SMO algorithm (Keerthi et al. 2001);
    β_{t+1} = max( -((1-λ)/λ) 1_d, min( ((1-λ)/λ) 1_d, -X^T Y α_{t+1} ) );
    t = t + 1;
until ||α_t - α_{t-1}||_2^2 + ||β_t - β_{t-1}||_2^2 < ε;
return w = X^T Y α_t + β_t, b = b_t
With the EX-SMO algorithm, by simply tuning the value of λ, it is also possible to exploit the proposed procedure for solving L1 (λ → 0), L2 (λ = 1) and L1-L2 (0 < λ < 1) SVM training problems.
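To make the alternating structure of the algorithm concrete, the following is a minimal NumPy sketch that follows the reconstructed update rules above. It is an illustration rather than the implementation used in this chapter: the sub-problem P_1(α, β_t) is assumed to be the hinge-loss SVM dual with its linear term shifted by the current β_t, the bias term is dropped so that only box constraints remain, and a few projected-gradient steps stand in for the SMO iterations.

import numpy as np


def ex_smo_sketch(X, y, lam, C, eps=1e-6, max_outer=200, inner_steps=2):
    """X: (n, d) data matrix, y: (n,) labels in {-1, +1}, 0 < lam <= 1."""
    n, d = X.shape
    Y = np.diag(y.astype(float))                 # diagonal label matrix
    Q = Y @ X @ X.T @ Y                          # Hessian of the assumed dual sub-problem
    eta = 1.0 / (np.linalg.norm(Q, 2) + 1e-12)   # safe step size for the gradient steps
    box = (1.0 - lam) / lam                      # bound on |beta_j|

    alpha, beta = np.zeros(n), np.zeros(d)
    for _ in range(max_outer):
        alpha_old, beta_old = alpha.copy(), beta.copy()
        # alpha step: a few projected-gradient iterations on P_1(alpha, beta),
        # standing in for the two SMO iterations of the pseudocode.
        for _ in range(inner_steps):
            grad = Q @ alpha + Y @ X @ beta - 1.0
            alpha = np.clip(alpha - eta * grad, 0.0, C)
        # beta step: clip -X^T Y alpha onto the box [-(1-lam)/lam, (1-lam)/lam]
        beta = np.clip(-(X.T @ (y * alpha)), -box, box)
        # stopping rule of the pseudocode
        if np.sum((alpha - alpha_old) ** 2) + np.sum((beta - beta_old) ** 2) < eps:
            break
    return X.T @ (y * alpha) + beta              # w = X^T Y alpha + beta


# Tiny usage example on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=40))
w = ex_smo_sketch(X, y, lam=0.5, C=1.0)
print(np.round(w, 3))   # components that fall inside the clipping box come out exactly zero

Setting lam = 1 shrinks the clipping box to zero, so β stays at 0_d and the sketch reduces to a plain L2 SVM; smaller values of lam enlarge the box and drive more coordinates of w exactly to zero, mirroring the L1, L2 and L1-L2 behavior just described.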
6.4 Results
There is still room for improving smartphone-based HAR. In this section, we explore
the performance of the various linear SVM models proposed in this chapter and
apply them to our HAR dataset. We focus on the evaluation of different aspects:
(i) the introduction of a larger set of gyroscope-based signals along with the ones
that come from the smartphone's accelerometer for the recognition of activities;
(ii) the selection of the most useful features and of simpler, though effective, models to make HAR more suitable for devices with limited battery life and computational restrictions; and (iii) the exploration of alternative linear SVM approaches that allow controlling sparsity and reducing dimensionality.
These three issues are targeted in this section in the following way: regarding
point (i), we fully exploit the HAR dataset, which contains gyroscope measures plus
a set of previously suggested features from the accelerometer (Bao and Intille 2004 ;
Khan et al. 2010 ; Karantonis et al. 2006 ) and verify the improvements that can be
achieved in the classification performance of the algorithm (Sect. 6.4.2 ). Concerning
issue (ii), we resort to effective SVM classifiers and implement two feature selection
mechanisms to allow faster and computationally non-intensive recognition: on the
one hand, we evaluate features separated by sensor type and domain (either time or frequency) in Sect. 6.4.2; on the other hand, MC-L1-SVM models are implemented, which perform an automatic selection of the significant features emerging from the training set while keeping the appealing classification performance of conventional SVMs (Sect. 6.4.3). Finally, point (iii) is dealt with through the analysis of the MC-L1-L2-SVM approach, in order to find the ideal trade-off that combines the effectiveness of the L2-SVM with the feature selection characteristics of L1-SVMs; this is presented in Sect. 6.4.4.
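As a generic illustration of the automatic feature selection behind point (ii), the snippet below fits an L1-penalized linear SVM with scikit-learn and counts how many feature weights remain non-zero. This is not the MC-L1-SVM solver evaluated in Sect. 6.4.3; the dataset shapes, the six synthetic labels and the regularization value C = 0.1 are placeholders chosen only to keep the example self-contained.

import numpy as np
from sklearn.svm import LinearSVC

# Synthetic stand-in for a HAR feature matrix: shapes and labels are placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))              # 500 windows, 200 features
y = rng.integers(0, 6, size=500)             # six activity labels (synthetic)

# The L1 penalty requires the primal formulation (dual=False) with squared hinge loss.
clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=0.1, max_iter=5000)
clf.fit(X, y)

# Features whose weight is non-zero for at least one class survive the selection.
selected = np.where(np.any(clf.coef_ != 0, axis=0))[0]
print(f"{selected.size} of {X.shape[1]} features retained")

Because the L1 penalty drives entire columns of the coefficient matrix to zero, the surviving indices can be used to discard the corresponding features at prediction time, which is the kind of reduction that matters on devices with limited battery life and computational resources.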