Human emotions occur in many variations and are often not
directly accessible even to human experts when annotating affective
corpora. Hence, a severe issue in affective computing is that the
labeling procedure is inevitably expensive and time-consuming.
It would be desirable to incorporate unlabeled data in the overall
classification process. This can be done either to improve a statistical
learning process or to support a human expert in an interactive labeling
process (Meudt et al., 2012). In order to integrate unlabeled data in
a supervised machine learning procedure, two different partially
supervised learning approaches have been applied, namely semi-
supervised learning and active learning. Semi-supervised learning
refers to a group of methods that attempt to take advantage of
unlabeled data for supervised learning (semi-supervised classification)
or to incorporate prior information such as class labels, pair-wise
constraints or cluster membership (semi-supervised clustering). Active
learning or selective sampling (Settles, 2009) refers to methods in
which the learning algorithm has control over the data selection, e.g.
it can select the most informative examples from a pool of unlabeled
examples and then ask a human expert for the correct labels. The aim is to
reduce annotation costs. In our application—the recognition of human
emotions in human-computer interaction—we focus more on active
learning (Schwenker and Trentin, 2012; Abdel Hady and Schwenker,
2010). An iterative labeling process is displayed in Figure 3, where a
machine classifier proposes labels for different areas in a recording
for an expert to acknowledge. Based on this, new propositions can be
made by the system.
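As an illustration, the following minimal sketch shows pool-based active
learning with a least-confidence (uncertainty sampling) query strategy.
The synthetic data, the logistic regression classifier and the query
budget are illustrative assumptions, not the setup of the work cited
above; in a real annotation loop the queried label would come from a
human expert rather than from a stored ground truth.

# Minimal sketch of pool-based active learning with a least-confidence
# (uncertainty sampling) query strategy. Data set, classifier and query
# budget are illustrative assumptions, not the cited authors' setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)

# Seed the labeled set with one example per class; the rest forms the pool.
labeled = [int(np.where(y == c)[0][0]) for c in np.unique(y)]
pool = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                                   # annotation budget
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    # Least confidence: query the pool sample whose top posterior is lowest.
    query = pool[int(np.argmin(proba.max(axis=1)))]
    # Here the "oracle" is the stored label; in practice a human expert
    # would be asked to annotate the queried sample.
    labeled.append(query)
    pool.remove(query)

clf.fit(X[labeled], y[labeled])
print("samples annotated:", len(labeled))
print("accuracy on full set:", clf.score(X, y))

Other query criteria, such as margin- or entropy-based sampling, can be
substituted for the least-confidence rule without changing the overall loop.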
In affective computing, it is usually not necessary to make a
decision for every data sample extracted from a short-time analysis.
Moreover, data samples arrive at a high rate compared to the expected
duration of the observed emotional categories. Hence, it is intuitive
to use sample rejection techniques, i.e. to decide (yes or no) whether
a certain confidence level has been reached for a given sample.
Various attempts have been made to introduce confidence-based
rejection criteria. Commonly, threshold-based heuristics are applied
to probabilistic classifier outputs using a distinct uncertainty
calculus, for instance the doubt and conflict values computed through
Dempster's rule of combination in the well-known Dempster-Shafer
theory of evidence (Thiel et al., 2005). Fusion architectures that
make use of reject options have to deal not only with missing signals
and different sample rates but also with missing decisions due to
rejection.
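To make the rejection idea concrete, the sketch below combines two
hypothetical mass functions (e.g. from an audio and a video classifier)
with Dempster's rule of combination and rejects the sample when the
conflict is high, the winning mass is low, or the most supported
hypothesis is not a single class. The mass values and thresholds are
illustrative assumptions, not values from the cited work.

# Minimal sketch of a confidence-based reject option built on Dempster's
# rule of combination. Mass functions and thresholds are illustrative.
from itertools import product

def combine(m1, m2):
    """Dempster's rule: returns the combined mass function and the conflict K."""
    conflict = 0.0
    combined = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass on the empty set
    if conflict < 1.0:                               # normalise by 1 - K
        combined = {s: w / (1.0 - conflict) for s, w in combined.items()}
    return combined, conflict

# Frame of discernment: three emotion classes (illustrative).
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
theta = A | B | C

m_audio = {A: 0.6, B: 0.1, theta: 0.3}               # mass on theta = ignorance
m_video = {B: 0.5, C: 0.2, theta: 0.3}

m, K = combine(m_audio, m_video)
best_set, best_mass = max(m.items(), key=lambda kv: kv[1])

# Reject the sample if the evidence is too conflicting or too unspecific.
if K > 0.5 or best_mass < 0.6 or len(best_set) > 1:
    print("rejected (conflict K=%.2f)" % K)
else:
    print("decision:", next(iter(best_set)), "mass=%.2f" % best_mass)

In a fusion architecture, such rejected decisions have to be treated by
the subsequent combination stage just like missing modalities.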
For these reasons, and as mentioned above, a classification
architecture that is designed for a real-world application has to
be able to cope with such missing and uncertain information.