The task of a classifier is to assign v to the matching class. Each class represents a
single moving object. The label of such a class (e.g. the object's ID) is the classifi-
cation result, allowing for object re-identification.
A variety of classification algorithms may be found in the literature. For the purpose of the experiments described in this chapter, four classification algorithms have been selected. Some other algorithms, such as Support Vector Machines or the Bayes classifier, have been rejected after preliminary experiments due to their unsatisfactory performance. Where applicable, the classifiers are set to solve a regression problem in order to obtain both a discrete class label and a real number that may be interpreted as the similarity of a feature vector to the recognized class.
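As an illustration, a minimal Python sketch of this regression-based scheme, assuming per-class regression scores are already available (the helper name and the data are hypothetical, not taken from the original experiments):

```python
import numpy as np

def label_and_similarity(scores, class_ids):
    """Turn per-class regression scores into a discrete label
    plus a real-valued similarity (hypothetical helper)."""
    best = int(np.argmax(scores))            # best-matching class index
    return class_ids[best], float(scores[best])

# Illustrative example: three candidate object IDs and their scores
ids = ["obj_17", "obj_42", "obj_99"]
scores = np.array([0.12, 0.87, 0.33])
label, similarity = label_and_similarity(scores, ids)
print(label, similarity)                     # obj_42 0.87
```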
The parameters of the employed classifiers have been selected based on initial experiments and are meant to be representative of the general problem class (i.e. object re-identification). The use of multiple classifiers is motivated by the need to make the descriptor evaluation invariant to the choice of classifier. The four classifiers used are briefly described below.
k-Nearest Neighbours (kNN). This is the simplest of the four selected classifiers. For each feature vector of the considered objects, the k closest feature vectors are found in the training set, which consists of feature vectors of objects observed in other cameras (k = 3 has been used in the experiments as a typical value that performed well in initial tests). These nearest neighbours are found by calculating distances between vectors using the Euclidean metric. Object re-identification is performed by voting, i.e. an object is assigned to the class to which most of the found nearest neighbours belong.
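A minimal sketch of this voting scheme, with Euclidean distances and k = 3; the feature vectors and object IDs are purely illustrative:

```python
import numpy as np
from collections import Counter

def knn_reidentify(query, train_vectors, train_labels, k=3):
    """Assign the query vector the object ID that wins the vote
    among its k nearest neighbours (Euclidean metric)."""
    dists = np.linalg.norm(train_vectors - query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest vectors
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Illustrative training set: vectors of objects seen in other cameras
train = np.array([[0.10, 0.20], [0.90, 0.80], [0.15, 0.25], [0.85, 0.90]])
labels = ["car_A", "car_B", "car_A", "car_B"]
print(knn_reidentify(np.array([0.12, 0.22]), train, labels))  # car_A
```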
Artificial Neural Networks (ANN). For the purpose of object identification, a feed-forward multi-layer perceptron (MLP) ANN [39] with one hidden layer is used. This significantly reduces the ANN training duration compared with networks containing more hidden layers. The ANN is trained with image features (in the case of whole-image descriptors) or with features calculated for each interest point (in the case of local image features), together with their responses (classes, i.e. object IDs). Thus, the number of inputs corresponds to the length of a feature vector. Based on initial experiments, the number of neurons in the hidden layer is set to half the number of ANN inputs. This, together with the single hidden layer, reduces the risk of the ANN overtraining and losing its generalisation properties. Bipolar sigmoid transfer functions are used in all neurons, as they are best suited to the ranges of input and output values. There are two outputs
from the network. The expected ANN output is equal to (1, −1) for a vector belonging to the valid object and (−1, 1) if it belongs to another object. During classification, the two ANN outputs (instead of one) make it possible to obtain information on both the vector similarity and the reliability of the ANN response.
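A minimal sketch of such a network, here using scikit-learn's MLPRegressor with a tanh (bipolar sigmoid) activation, one hidden layer of half the input size, and two regression outputs; the data, the toy labelling rule, and the reliability reading are assumptions for illustration only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

n_features = 8                         # length of a feature vector
hidden = n_features // 2               # hidden neurons: half of the ANN inputs

# Toy training data; targets (1, -1) for the valid object,
# (-1, 1) for vectors belonging to other objects
rng = np.random.default_rng(0)
X = rng.normal(size=(100, n_features))
y = np.where(X[:, :1] > 0, [1.0, -1.0], [-1.0, 1.0])

ann = MLPRegressor(hidden_layer_sizes=(hidden,), activation="tanh",
                   max_iter=2000, random_state=0)
ann.fit(X, y)

out = ann.predict(X[:1])[0]
similarity = out[0]                    # first output: similarity to the valid object
reliability = abs(out[0] - out[1])     # one possible reliability measure:
                                       # how strongly the two outputs disagree
```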
Gradient Boosted Trees (GBTree). This classifier is built on the idea of Classification and Regression Trees [7]. Binary decision trees are trained using the object features and their responses; a tree is constructed by processing each object feature and finding an optimal split. Re-identification is done by processing the new object features with the trained tree, starting from the topmost node, passing through each split, and reaching the leaf that defines the class (the ID of the re-identified object).
The importance of each object feature for the classification is assessed, so that features may be ranked according to their contribution to the result.
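A minimal sketch of the training and prediction flow, here using scikit-learn's GradientBoostingClassifier as a stand-in for the CART-based booster described above; the data and object IDs are illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))                         # object feature vectors
y_train = (X_train[:, 0] + X_train[:, 3] > 0).astype(int)   # toy object IDs

# Boosted binary decision trees built over the object features
gbt = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
gbt.fit(X_train, y_train)

# Re-identification: pass the new object's features through the trained trees
new_object = rng.normal(size=(1, 6))
predicted_id = gbt.predict(new_object)[0]

# Per-feature importance, usable for assessing descriptor features
print(gbt.feature_importances_)
```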