Table 12.9 Visual descriptor candidates suited for multi-camera object identification

Dataset   Best visual descriptors
PERSON    CLDTrans, CLD, EHD, MomentInvGPSO, OpponentSIFT
VEHICLE   HistFull, MomentInvGPSO, VertTrace, OpponentSIFT, SURF64
Both      CLDTrans, MomentInvGPSO, HistFull, EHD, OpponentSIFT
ing values and, simultaneously, by high or average stability. Analysing the results
in Fig. 12.20, it can be noticed that for the PERSON category, feature extractors that
include spatial dependencies outperform the other evaluated descriptors. This relation
is reasonable, since humans are typically characterized by a higher colour diversity
than vehicles. In comparison, the most effective descriptors for the VEHICLE
dataset are based on local image features as well as on the general image colour
representation. This dependency shows that specific description techniques are more
suitable for specific object categories. From a comprehensive analysis of both datasets,
MomentInvGPSO, OpponentSIFT and CLDTrans turned out to be good choices.
12.7.3 Object Identification
Object identification experiments have been performed according to the approach
presented in detail in Sect. 12.6. A classifier is trained to distinguish visual features
of an object of interest from the features of other objects in the first (source) camera,
and then it is used to find the same object in the second (destination) camera from among
a few candidates. The classification process is repeated for each object and for all
pairs of cameras it appears in; within each pair, two classifications are performed, as
each camera in turn is treated as the source and the destination of the transition. The positive
training samples are formed by images of the object of interest S in a camera C1.
The positive validation samples are formed by images of the same object S in the camera
C2, where C1 ≠ C2. All other objects that appear in cameras C1 or C2 are randomly
drawn into the negative training set and the negative validation set, alternately, until one
of the object pools is empty. Therefore both negative sets are of equal size, or the negative
training set has one more element than the validation set. The drawing procedure
also ensures that the negative training and validation samples do not contain images
of the same objects; therefore, during re-identification, the negative validation samples
belong to objects whose images were not used for classifier training.
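
The alternating drawing of negative samples can be illustrated with a short Python sketch. It is only a minimal illustration under assumptions that are not stated in the text: the per-camera image collections are represented as dictionaries mapping object identifiers to image lists, training negatives are taken from the source camera C1 and validation negatives from the destination camera C2, and all function and variable names are hypothetical.

import random


def draw_negative_sets(objects_c1, objects_c2, object_of_interest, seed=None):
    """Sketch of the alternating negative-set drawing described above.

    objects_c1 / objects_c2 are assumed to map object ids to lists of images
    observed in cameras C1 and C2 (hypothetical data structures). Objects are
    drawn alternately, one for the negative training set (from C1) and one for
    the negative validation set (from C2); a drawn object is removed from both
    pools, so no object contributes images to both sets. Drawing stops as soon
    as one of the pools is empty.
    """
    rng = random.Random(seed)
    pool_c1 = [o for o in objects_c1 if o != object_of_interest]
    pool_c2 = [o for o in objects_c2 if o != object_of_interest]
    neg_train, neg_valid = [], []

    draw_for_training = True  # the negative training set is served first
    while pool_c1 and pool_c2:
        if draw_for_training:
            obj = rng.choice(pool_c1)
            neg_train.extend(objects_c1[obj])
        else:
            obj = rng.choice(pool_c2)
            neg_valid.extend(objects_c2[obj])
        # Remove the drawn object from both pools so the two sets stay disjoint.
        pool_c1 = [o for o in pool_c1 if o != obj]
        pool_c2 = [o for o in pool_c2 if o != obj]
        draw_for_training = not draw_for_training

    # By construction, the negative training set contains either the same
    # number of objects as the validation set, or exactly one more.
    return neg_train, neg_valid

Because the two negative sets share no objects, the validation step tests generalisation to unseen objects rather than recognition of the training negatives, in line with the procedure described above.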
Table 12.10 presents details regarding the number of objects in the training and validation
sets and the number of single identification tasks (for a given classifier and feature
vector), which is equal to the number of object/transition pairs in the dataset. There is
always one positive sample (the object of interest S) in the positive sets, and on average
there are approx. 3 persons or 5-6 vehicles in the negative training and validation sets
during each classification. This means that a classifier is trained with more
 
 