x = As.    (5.4)
After application of ICA decomposition to the 64-channel EEG recorded from
their participants, Cong and colleagues produced spatial maps of the projections of
each independent component (IC) onto the scalp.
The musical piece played to the participants was then decomposed into a set of
temporal and spectral features. Features were selected that attempted to describe the
tonal and rhythmic features of the music. The ICs identified from the EEG were
then clustered and used as the basis for a neural feature set, which was observed to
significantly correlate with the acoustical features extracted from the musical piece.
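The decomposition in Eq. (5.4) can be sketched in code. The snippet below is a minimal illustration, not the pipeline used by Cong and colleagues: it applies scikit-learn's FastICA to synthetic multichannel data standing in for a 64-channel EEG recording. The unmixing recovers the component time courses (s), and the columns of the estimated mixing matrix (A) give each IC's projection onto the channels, i.e. the spatial maps that would be plotted on the scalp.

```python
# Sketch: ICA decomposition x = As on synthetic "EEG" data.
# The channel count (64) and component count (10) are illustrative.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples, n_channels, n_components = 2000, 64, 10

# Simulate independent latent sources and mix them into channel space: x = A s
t = np.linspace(0, 8, n_samples)
sources = np.stack(
    [np.sin(2 * np.pi * (i + 1) * t) for i in range(n_components)], axis=1
)
mixing = rng.normal(size=(n_channels, n_components))
x = sources @ mixing.T  # observed data, shape (n_samples, n_channels)

ica = FastICA(n_components=n_components, random_state=0)
s_est = ica.fit_transform(x)  # estimated component activations s
A_est = ica.mixing_           # estimated mixing matrix A, shape (64, 10)

# Each column of A_est is one IC's projection onto the 64 channels:
# the spatial map of that component.
print(A_est.shape)
```

In a real analysis the observed matrix x would be the recorded EEG, and each column of the estimated mixing matrix would be rendered as a topographic scalp map.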
5.5.2 Supervised Machine Learning Methods
In contrast to the unsupervised methods, supervised techniques need information
about data class membership in order to estimate the function
f : R^N → {−1, +1},
effectively amounting to learning the class membership or inter-class decision
boundary. This information comes in the form of a labelled training dataset where
each datum, representing in the feature space the object of interest, is accompanied by
the label denoting the class to which this object belongs. The class information can be
used either explicitly or implicitly in construction of the class membership function.
The methods may use labels explicitly if the latter are represented by numeric
values, typically −1 and +1 for a binary classification problem. In this case, the
entire training set, data and their labels, is used in training the classifier. Typically,
this is performed by feeding the data into the classifier with randomly initialised
parameters and comparing the classifier output, representing the proposed class
membership, with the data class labels. The discrepancies between the obtained and
the true labels are accumulated and form the basis of an error cost function. Thus,
the classifier training is cast as an optimisation problem: traversing the classifier
parameter space in order to find the optimal parameter configuration, such that the
error cost function is minimised.
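The explicit scheme described above can be illustrated with a toy example. The sketch below, which assumes a simple linear model and a squared-error cost (one of many possible choices), shows ±1 labels entering the cost directly while gradient descent traverses the parameter space to minimise it.

```python
# Toy sketch: explicit use of ±1 labels in training a linear classifier
# by gradient descent on a squared-error cost. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian classes in 2-D, labelled -1 and +1
X = np.vstack([rng.normal(-1.5, 1.0, (100, 2)),
               rng.normal(+1.5, 1.0, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

w = rng.normal(size=2)  # randomly initialised parameters
b = 0.0
lr = 0.01

for _ in range(200):
    out = X @ w + b              # classifier output (proposed membership)
    err = out - y                # discrepancy with the true labels
    cost = np.mean(err ** 2)     # accumulated error cost function
    # Gradient step: move through parameter space to reduce the cost
    w -= lr * 2 * (X.T @ err) / len(y)
    b -= lr * 2 * err.mean()

accuracy = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The sign of the trained output recovers the proposed class label; any differentiable cost (hinge, logistic) could replace the squared error without changing the overall scheme.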
In classification methods based on implicit use of class membership in training, the
class labels need not be numeric and are not used explicitly to adjust the classifier
parameters during the traversal of the parameter space. Rather, the class information
is used by grouping data belonging to an individual class and extracting relevant
information groupwise in the process of constructing the classifier decision boundary.
There are different ways in which classifiers can be categorised. Discriminative
classifiers need to use information from both classes in their training, as they learn
by differentiating properties of data examples belonging to different classes. In
contrast, generative classifiers assume some functional form of the class models,
which are then estimated from the training set and subsequently used in order to
perform the classification. The additional benefit of such an approach is that the
models can also be used to construct new data with properties consistent with the
class description. The discriminative methods do not afford such a use but construct
the decision boundary making somewhat less explicit assumptions.
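A minimal sketch of the generative approach, assuming Gaussian class models (a common but by no means the only choice): the mean and covariance of each class are estimated groupwise, a new point is assigned to the class whose model gives it the higher likelihood, and the same fitted models can generate new data consistent with a class description.

```python
# Sketch: a generative classifier with Gaussian class-conditional models,
# estimated groupwise from synthetic training data.
import numpy as np

rng = np.random.default_rng(2)

# Training data for two classes; labels are used implicitly, by grouping
X0 = rng.normal(-2.0, 1.0, (150, 2))
X1 = rng.normal(+2.0, 1.0, (150, 2))

def fit_gaussian(X):
    """Groupwise estimate of a class-conditional Gaussian model."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_likelihood(x, mean, cov):
    d = x - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + len(x) * np.log(2 * np.pi))

models = [fit_gaussian(X0), fit_gaussian(X1)]

# Classification: pick the class whose model explains the point best
x_new = np.array([1.5, 2.5])
pred = int(np.argmax([log_likelihood(x_new, m, c) for m, c in models]))
print("predicted class:", pred)

# Generative use: sample new data consistent with class 0's description
samples = rng.multivariate_normal(models[0][0], models[0][1], size=5)
print(samples.shape)
```

A discriminative method would instead fit the decision boundary directly, without committing to a density model for either class, and so could not produce the sampled data in the last step.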