Fig. 5.2 MC-HF-SVM using LUTs for probability estimates
5.3 HF-SVM and Statistical Learning Theory
In recent decades, several works have been devoted to adapting ML approaches to specific hardware platforms (Epitropakis et al. 2010; Genov and Cauwenberghs 2003; Irick et al. 2008; Lee et al. 2003) and, in particular, to analyzing the effects of parameter quantization on the training and FFPs (Anguita et al. 2007; Lesser et al. 2011; Neven et al. 2009). The motivations for these activities are usually linked to application-specific requirements, but also to the basic principle of SLT (Vapnik 1995): search for the simplest model that correctly classifies the available data.
The introduction of bit-based hypothesis spaces, i.e. classes of functions whose models are described through a limited number of bits, brings widespread benefits to the learning process of classifiers. This is because reducing the number of bits directly reduces the complexity of the hypothesis space (Anguita et al. 2013), which is a key issue in Machine Learning, as underlined in (Shawe-Taylor et al. 1998; Bartlett et al. 2005). If we can reduce the complexity of the hypothesis space without affecting the ability of the algorithm to learn a function with low empirical error, then, in practice, we can learn more effectively (Herbrich and Williamson 2003; Shawe-Taylor et al. 1998).
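This claim can be made concrete with a standard counting argument for finite hypothesis classes; the following is a textbook bound, added here only as an illustration and not taken from the works cited above. A model with $d$ parameters, each stored on $k$ bits, can realize at most $2^{dk}$ distinct functions, so with probability at least $1 - \delta$ over $n$ i.i.d. samples,

$$|\mathcal{H}| \le 2^{dk} \quad\Longrightarrow\quad R(f) \;\le\; \widehat{R}_n(f) + \sqrt{\frac{dk \ln 2 + \ln(1/\delta)}{2n}} \qquad \forall f \in \mathcal{H},$$

where $R$ is the risk and $\widehat{R}_n$ the empirical error. Halving $k$ halves the dominant $dk \ln 2$ term, which is one way of reading the claim that fewer bits yield a simpler hypothesis space.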
In this section we investigate how the adoption of fixed-point arithmetic affects the generalization ability of a classifier of the form of Eq. (5.6). In order to do this,
we describe each parameter $\beta_i$ as an integer value of $k$ bits:

$$\beta_i = \sum_{j=0}^{k-1} b_i^j \, 2^j, \qquad\qquad (5.9)$$

where each $b_i^j \in \{0, 1\}$ is one bit of the representation.
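As a minimal illustration of Eq. (5.9), the following Python sketch decomposes an unsigned $k$-bit parameter into its bits and reconstructs it. The function names are ours, chosen for clarity, and do not belong to any HF-SVM implementation:

    def integer_to_bits(beta_i: int, k: int) -> list[int]:
        """Split an unsigned k-bit parameter into its bits b_i^j,
        least-significant bit first (requires 0 <= beta_i < 2**k)."""
        assert 0 <= beta_i < 2 ** k, "parameter must fit in k bits"
        return [(beta_i >> j) & 1 for j in range(k)]

    def bits_to_integer(bits: list[int]) -> int:
        """Reconstruct beta_i = sum_{j=0}^{k-1} b_i^j * 2**j, as in Eq. (5.9)."""
        return sum(b << j for j, b in enumerate(bits))

    # Example with k = 4: beta_i = 11 has bits [1, 1, 0, 1],
    # since 11 = 1*2**0 + 1*2**1 + 0*2**2 + 1*2**3.
    k, beta_i = 4, 11
    bits = integer_to_bits(beta_i, k)
    assert bits == [1, 1, 0, 1]
    assert bits_to_integer(bits) == beta_i

Restricting each parameter to such a $k$-bit integer is what allows the FFP to be carried out with integer-only arithmetic on the target hardware.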