Many authors use machine learning techniques to analyse EEG for BCI applications, and the number of studies investigating automated emotion recognition is growing (Lin et al. 2010, 2011; Wang et al. 2013; Sohaib et al. 2013).
The following two case studies will illustrate the use of supervised machine learning methods to study emotional responses to music using EEG.
5.5.2.1 Case Study 3: SVMs for Identifying Neural Correlates of Emotion
Sohaib et al. (2013) used the support vector machine as a method for automatic emotion recognition from the EEG. The increasing use of computer technology, and the consequent rise in prominence of human-computer interaction, motivates incorporating automated emotion recognition into such technologies. The authors used the IAPS as emotional stimuli. Images in the IAPS database were tagged with their emotional content along dimensions of valence, arousal, and dominance, although the authors only considered valence and arousal in their study. EEG was recorded while the subjects viewed the images and assessed their emotion using a self-assessment manikin.
The EEG was recorded at 2,048 Hz from 6 channels (Fp1, Fp2, C3, C4, F3, and F4), referenced to Cz, and preprocessed using ICA in order to remove artefacts. Four features for each channel (minimum, maximum, mean, and standard deviation) were obtained, forming 24-dimensional feature vectors which were then used to train the classifiers.
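This feature extraction step is simple enough to sketch. The following Python snippet is a minimal illustration, not the authors' code; the simulated trial, sampling details, and function name are assumptions made for the example:

```python
import numpy as np

def extract_features(trial):
    """Return the 24-dimensional feature vector for one trial.

    trial: array of shape (n_channels, n_samples) holding
    preprocessed (e.g. ICA-cleaned) EEG from 6 channels.
    Four statistics per channel: min, max, mean, std.
    """
    stats = np.stack([trial.min(axis=1),
                      trial.max(axis=1),
                      trial.mean(axis=1),
                      trial.std(axis=1)], axis=1)  # shape (6, 4)
    return stats.ravel()                           # shape (24,)

# One simulated 10-second trial at 2,048 Hz from 6 channels.
rng = np.random.default_rng(0)
trial = rng.standard_normal((6, 10 * 2048))
print(extract_features(trial).shape)  # (24,)
```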
The support vector machine belongs to the family of discriminative classifiers. It is also a member of the so-called kernel methods, because of its use of kernel functions which allow it to form a map of the input data space into a feature space where the classification is attempted. The mapping is only done implicitly, and hence the computational costs are associated with the dimensionality of the input space, even though the classification is performed in the potentially highly dimensional feature space. Additionally, because of this feature mapping, although the decision boundary in the input data space is nonlinear, it is the preimage (under the feature mapping) of a decision boundary which is a linear hyperplane in the feature space.
space. The strength of the SVM comes from using this kernel trick
as the problem
is formulated as linear classi
cation is linear or
not and what kind of nonlinearities are involved depends on the choice of the
kernel. Moreover, the problem of
cation yet whether the actual classi
finding a hyperplane in the feature space is
formulated as a constrained optimisation where a hyperplane is sought that maxi-
mises the separation margin between the classes. This also gives rise to the selection
of the data which are important for ultimate construction of the optimal separating
hyperplane, the support vectors, which lend the name to the entire classi
er.
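A minimal sketch of such a kernel SVM on 24-dimensional feature vectors is shown below, using scikit-learn. The RBF kernel, the hyperparameter values, and the placeholder data are illustrative assumptions, not the configuration reported by Sohaib et al. (2013):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: (n_trials, 24) feature matrix; y: binary valence labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 24))   # placeholder features
y = rng.integers(0, 2, size=120)     # placeholder labels

# The kernel choice (here RBF) determines the implicit feature map,
# and hence what nonlinear boundaries are expressible in the input
# space; C controls the softness of the separation margin.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())
```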
Sohaib et al. (2013) report that their SVM obtained classification rates of over 56 % on the binary classification problem (negative/positive arousal/valence), higher than four other classifiers used. The classification rate was raised to an average of over 66 % when the 15 subjects were split into 3 equal groups and each group classified separately.
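The per-group analysis amounts to fitting a separate classifier for each subset of subjects. A hedged sketch of that loop follows; the grouping scheme, trial counts, and variable names are hypothetical, since the paper's summary does not specify them:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((150, 24))        # placeholder features
y = rng.integers(0, 2, size=150)          # placeholder labels
subjects = np.repeat(np.arange(15), 10)   # 10 trials per subject

# Three groups of five subjects, each classified separately.
for g, members in enumerate(np.array_split(np.arange(15), 3)):
    mask = np.isin(subjects, members)
    scores = cross_val_score(SVC(kernel="rbf"), X[mask], y[mask], cv=5)
    print(f"group {g}: mean accuracy {scores.mean():.2f}")
```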