function, and uses real or discrete values for the classification status value (CSV) of
every weak learner, with the Hamming loss as the error bound. AdaBoost.MR, which
relies on ranking-loss minimization, was designed to rank the correct classes at the top:
it ranks all classes by their CSV and selects the ones with the highest rank. Zhu
et al. [80] presented a stage-wise additive modeling using a multi-class exponen-
tial loss function, an algorithm that directly extends the AdaBoost algorithm to the
multi-class case without reducing it to multiple two-class problems. Their algo-
rithm combines weak classifiers and only requires the performance of each weak
classifier be better than random guessing (i.e., accuracy above 1/K rather than
above 1/2). Another approach is based on Support Vector Machines (SVM) [41, 65]
with various kernels. This approach considers loss functions that treat more than
two classes and minimizes them directly with various algorithms [10, 12, 32, 79].
These methods make it easier to analyze properties such as consistency with the
Bayes error rate, but they are not always feasible for large numbers of classes and
samples [76]. Zhang et al.
[78] present a combination of AdaBoost.MH and a co-EM semi-supervised multi-
class labeling scheme for classifying human actions from video, using spatio-temporal
features in two "views": optical flow and histograms of oriented gradients. They
consider human actions such as kiss, sit down, answer phone, jogging, hand clapping,
and more; in this setting, manual labeling is both time consuming and prone to errors.
The data are described by a finite hierarchical Gaussian mixture model (GMM).
Weighted multiple discriminant analysis (WMDA) is used to let co-EM work
efficiently regardless of the number of features and to generate sufficient training
data. A set of weak hypotheses is generated and combined linearly.
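The stage-wise additive modeling of Zhu et al. [80] (often called SAMME) adjusts the classic AdaBoost weight update so that a weak learner only needs accuracy above random guessing among K classes, not above 1/2. A minimal sketch, assuming scikit-learn decision stumps as the weak learners (the stopping threshold and round count are illustrative choices, not part of the original description):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def samme_fit(X, y, n_classes, n_rounds=20):
    """Sketch of SAMME: multi-class AdaBoost with exponential loss.
    Weak learners need only beat K-class random guessing (error < 1 - 1/K)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                     # uniform sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        miss = pred != y
        err = np.sum(w * miss) / np.sum(w)
        if err >= 1 - 1.0 / n_classes:          # no better than random: stop
            break
        # SAMME learner weight: log((1-err)/err) + log(K-1)
        alpha = np.log((1 - err) / max(err, 1e-10)) + np.log(n_classes - 1)
        w *= np.exp(alpha * miss)               # up-weight misclassified samples
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def samme_predict(learners, alphas, X, n_classes):
    """Combine weak learners by weighted voting over class labels."""
    votes = np.zeros((len(X), n_classes))
    for stump, alpha in zip(learners, alphas):
        votes[np.arange(len(X)), stump.predict(X)] += alpha
    return votes.argmax(axis=1)
```

Note how, unlike two-class AdaBoost, the `log(n_classes - 1)` term keeps `alpha` positive whenever the weak learner beats K-class random guessing.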
6.3 Associative Classification
Another approach to multi-class classification is Associative Classification, which
combines terms and techniques from the field of data mining with classification
methods. Data mining has traditionally dealt with finding association rules in data:
classification aims to predict a single target class, whereas association rule mining
may target any attribute in the data. In recent years, integrative methods called
associative classification have been presented, such as CBA [36], CMAR [34],
CPAR [75], and more.
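A class association rule maps an itemset of attribute values to a class label; CBA-style classifiers mine such rules, order them by confidence and support, and classify a new sample with the first matching rule. A toy sketch of this idea (not the actual CBA algorithm, which uses Apriori-based rule mining and more elaborate pruning; thresholds here are illustrative):

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, labels, min_support=2, min_conf=0.7):
    """Mine itemset -> class rules meeting support/confidence thresholds.
    Simplified: only itemsets of size 1 and 2 are enumerated."""
    rules = []
    items = sorted({i for t in transactions for i in t})
    for size in (1, 2):
        for itemset in combinations(items, size):
            covered = [l for t, l in zip(transactions, labels)
                       if set(itemset) <= set(t)]
            if len(covered) < min_support:
                continue
            cls, cnt = Counter(covered).most_common(1)[0]
            conf = cnt / len(covered)
            if conf >= min_conf:
                rules.append((conf, len(covered), itemset, cls))
    # CBA-style ordering: higher confidence first, break ties by support
    rules.sort(key=lambda r: (-r[0], -r[1]))
    return rules

def classify(rules, transaction, default):
    """Return the class of the first (highest-ranked) rule that matches."""
    for conf, sup, itemset, cls in rules:
        if set(itemset) <= set(transaction):
            return cls
    return default
```

The ordered rule list doubles as an interpretable model: each prediction can be traced back to the single rule that fired.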
7 Multi-label Classification
Multi-label learning refers to problems where a sample can be assigned to multiple
classes simultaneously. It extends the multi-class classification problem by dropping
the assumption that classes are mutually exclusive. This differs from multi-class
learning, where every sample can be assigned to only one class even though the
number of classes is more than two. Multi-label learning tasks are common in
real-world problems. For instance, in text categorization, each document in a corpus may