5.5.2.2 Case Study 4: Classifying Discrete Emotional States
Murugappan et al. (2010) used two classifiers, K nearest neighbour (KNN) and
linear discriminant analysis (LDA), for classification of discrete emotional states
from audio-visual stimuli. They used video clips from the international standard
emotional clips set (Yongjin and Ling 2005 ). They collected EEG from 62 channels
and used a surface Laplacian (SL)
filter to remove the artefacts. They then extracted
both standard (signal power, standard deviation, and variance) and novel features
from each channel based on a discrete wavelet decomposition into 3 frequency
bands: alpha, beta, and gamma. The wavelet features were functions of the relative
contribution of energy within a frequency band to the total energy across the three
bands.
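The relative-energy features described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: it assumes the discrete wavelet coefficients for each of the three bands have already been computed for one channel, and simply expresses each band's energy as a fraction of the total energy across the bands.

```python
import numpy as np

def relative_band_energy(band_coeffs):
    """Given wavelet coefficients grouped by band, return each band's
    share of the total energy summed across all bands."""
    energies = {band: float(np.sum(np.square(c)))
                for band, c in band_coeffs.items()}
    total = sum(energies.values())
    return {band: e / total for band, e in energies.items()}

# Hypothetical coefficient arrays for the three bands of one channel.
coeffs = {
    "alpha": np.array([0.5, -1.2, 0.8]),
    "beta":  np.array([0.3, 0.4, -0.1]),
    "gamma": np.array([0.1, -0.2]),
}
rel = relative_band_energy(coeffs)
# The three relative energies sum to 1 by construction.
```

In this sketch the feature vector for a channel would be the three values of `rel`, one per band.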
The KNN classifier takes a new data point and assigns it to the most frequent
class among its k nearest labelled training examples. The LDA works by
identifying a decision boundary hyperplane which maximises the inter-class dis-
tance, while simultaneously minimising within-class variance. The authors report
the highest classification accuracy for discrete wavelet power-derived features
(83 % for KNN and 75 % for LDA) on the entire set of 62 channels, with the
classification accuracy dropping to 72 % for KNN and 58 % for LDA on a subset of
8 channels. The traditional features provided consistently worse results.
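The KNN rule just described is simple enough to sketch directly. The following is a minimal illustration under stated assumptions (Euclidean distance, majority vote), not the classifier used in the study; the toy feature vectors and class labels are invented for the example.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Assign x_new to the most frequent class among its k nearest
    (Euclidean-distance) labelled training examples."""
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors standing in for per-channel wavelet features.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3],   # class "happy"
              [0.1, 0.9], [0.2, 0.8], [0.3, 0.7]])  # class "sad"
y = np.array(["happy"] * 3 + ["sad"] * 3)
label = knn_predict(X, y, np.array([0.85, 0.15]), k=3)
```

Here the new point lies close to the first cluster, so all three of its nearest neighbours vote "happy". In practice k is a tuning parameter: small k follows local structure closely, while larger k smooths the decision boundary.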
Machine learning methods have been used in very diverse ways ranging from
generic approaches to increase machine intelligence (Warwick and Nasuto 2006 ),
to analysis of pictorial (Ruiz and Nasuto 2005 ) or numeric data, such as EEG time
series for BCI applications (Aloise et al. 2012 ; Rezaei et al. 2006 ; Daly et al. 2011 ).
Lotte et al. ( 2007 ) provide an extensive discussion of supervised approaches used in
brain-computer interfaces.
5.6 Summary
Neurological data may be described in many different ways by a range of
feature types. As a result, relationships between neurological data and
relevant measures of behaviour, stimuli, or responses are often not immediately
apparent. Such relationships may in fact be complex, comprising multiple,
potentially weakly interacting components.
Machine learning provides a statistically sound framework for uncovering these
relationships. It has, therefore, been proposed by a number of authors as a suitable
mechanism for identifying neural correlates of music perception and emotional
responses to music.
We suggest that in order to construct a brain-computer music interface (BCMI)
based upon the interaction of the brain and a musical generator, an understanding of
these relationships is required. Machine learning provides a suitable framework
through which such an understanding may be acquired.