the brain (Schmidt and Trainor 2001; Trochidis and Bigand 2013). There is strong evidence that other spectral changes and brain regions are involved in emotional responses. These include frontal midline theta power (Sammler et al. 2007), beta-power asymmetry (Schutter et al. 2008), and gamma spectral changes at right parietal areas (Balconi and Lucchiari 2008; Li and Lu 2009).
A variety of studies on EEG-based emotion recognition and classification have been reported. These studies used different features and classification algorithms. Ishino and Hagiwara (2003) proposed a system based on neural networks. They applied FFT, WT, and PCA to extract features from EEG signals; neural networks were then used to classify four emotions (joy, relaxation, sorrow, and anger), achieving an accuracy of 67 %. Murugappan et al. (2008) used a lifting-based wavelet transform for feature extraction from measured EEG signals. Fuzzy C-Means clustering was then employed for classification of four emotions (disgust, happiness, surprise, and fear).
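As a rough illustration of such spectral feature pipelines, the following Python sketch computes FFT-based band powers with Welch's method and compresses them with PCA. It is not the pipeline of any of the studies above; the band definitions, sampling rate, window length, and number of components are assumptions chosen for the example.

import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA

# Assumed frequency bands (Hz); not the bands used in the cited studies.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(eeg, fs=128.0):
    """eeg: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    n_trials, n_channels, _ = eeg.shape
    feats = np.zeros((n_trials, n_channels * len(BANDS)))
    for t in range(n_trials):
        col = 0
        for ch in range(n_channels):
            freqs, psd = welch(eeg[t, ch], fs=fs, nperseg=int(2 * fs))
            for lo, hi in BANDS.values():
                mask = (freqs >= lo) & (freqs < hi)
                # Log band power is a common, numerically stable feature choice.
                feats[t, col] = np.log(np.trapz(psd[mask], freqs[mask]) + 1e-12)
                col += 1
    return feats

# Toy usage with synthetic data: 40 trials, 14 channels, 10 s at 128 Hz.
rng = np.random.default_rng(0)
X = band_power_features(rng.standard_normal((40, 14, 1280)))
X_reduced = PCA(n_components=10).fit_transform(X)  # 56 features -> 10 components
print(X_reduced.shape)  # (40, 10)

A classifier or clustering algorithm (e.g., a neural network or Fuzzy C-Means) would then operate on the reduced feature vectors.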
Ko et al. (2009) reported an EEG-based emotion recognition system. They divided the measured EEG signals into five frequency ranges on the basis of power spectral density and employed a Bayesian network to predict the user's emotional states. Lin et al. (2010) proposed an EEG-based emotion recognition system. Using EEG responses measured from 26 subjects during music listening, they extracted features related to power spectral density, to the power asymmetry of 12 electrode pairs across different frequency bands, and to the corresponding rational asymmetry. They employed SVM classifiers and reported a recognition accuracy of 82 %. The reported results showed that spectral power asymmetry features across different frequency bands were the most sensitive parameters for characterizing emotional states.
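The asymmetry features referred to above can be sketched as follows: for each symmetric electrode pair, the difference and the ratio of band powers give differential and rational asymmetry values, which are then fed to an SVM. The electrode pairs, the single alpha band, and the classifier settings below are assumptions for illustration, not the configuration of the cited study.

import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical subset of symmetric (left, right) electrode pairs.
PAIRS = [("F3", "F4"), ("F7", "F8"), ("T7", "T8"), ("P3", "P4")]

def alpha_power(x, fs):
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= 8) & (freqs < 13)            # alpha band, 8-13 Hz (assumed)
    return np.trapz(psd[mask], freqs[mask])

def asymmetry_features(eeg, channels, fs=128.0):
    """eeg: (n_trials, n_channels, n_samples); channels: list of channel names."""
    idx = {name: i for i, name in enumerate(channels)}
    feats = []
    for trial in eeg:
        row = []
        for left, right in PAIRS:
            p_l = alpha_power(trial[idx[left]], fs)
            p_r = alpha_power(trial[idx[right]], fs)
            row.append(p_l - p_r)                 # differential asymmetry
            row.append(p_l / (p_r + 1e-12))       # rational asymmetry
        feats.append(row)
    return np.array(feats)

# Toy usage with random data and binary valence labels.
rng = np.random.default_rng(1)
channels = ["F3", "F4", "F7", "F8", "T7", "T8", "P3", "P4"]
X = asymmetry_features(rng.standard_normal((30, len(channels), 1280)), channels)
y = rng.integers(0, 2, size=30)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.score(X, y))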
Petrantonakis and Hadjileontiadis (2010) employed higher order crossings for feature extraction from EEG signals. Using the extracted features, four different classifiers (QDA, k-nearest neighbors, Mahalanobis distance, and SVM) were tested for the classification of six emotions (happiness, surprise, anger, fear, disgust, and sadness). Depending on the classifier, recognition accuracies from 63 to 83 % were reported. Sourina and Liu (2011) proposed a real-time emotion recognition and visualization system based on fractal dimension (FD). They applied a fractal-based algorithm and a valence-arousal emotion model, calculated FD values from the EEG signals, and used an SVM classifier to predict arousal and valence for six basic emotions.
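Higher order crossings can be illustrated with a short sketch: the zero-mean signal is repeatedly differenced, and the number of zero crossings after each differencing step forms the feature vector. This follows the general HOC idea rather than the exact implementation of Petrantonakis and Hadjileontiadis (2010); the HOC order and the test signal below are arbitrary choices.

import numpy as np

def hoc_features(x, order=10):
    """Return the first `order` higher order crossing counts of a 1-D signal."""
    z = np.asarray(x, dtype=float)
    z = z - z.mean()                      # work with the zero-mean signal
    feats = []
    for _ in range(order):
        signs = np.sign(z)
        signs[signs == 0] = 1             # treat exact zeros as positive
        crossings = np.count_nonzero(np.diff(signs))  # sign changes = zero crossings
        feats.append(crossings)
        z = np.diff(z)                    # apply the difference (high-pass) operator
    return np.array(feats)

# Toy usage on a noisy sinusoid: higher orders emphasize faster oscillations.
rng = np.random.default_rng(2)
t = np.arange(0, 4, 1 / 128.0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(hoc_features(signal, order=6))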
Despite the substantial progress achieved in EEG-based emotion recognition, many issues remain to be resolved. Only a relatively limited number of emotional states can be recognized using EEG; the best performance reported so far involves only six different emotions (Petrantonakis and Hadjileontiadis 2010). Another important issue is the number of electrodes needed to extract an optimal set of features. Current studies use a large number of electrodes, which complicates both the experiments and the processing of the data. Research on the most informative features is needed to reduce the number of electrodes. Overcoming these constraints would allow real-time EEG-based emotion recognition and the realization of BCMI applications.
 