Digital Signal Processing Reference
Fig. 4.2 Sub-images with object undergoing tracking in frames #129, 130, 131, 149, 150, 151, 152, 153, 154, 155
Fig. 4.3 Input image (a), reference image (b), NCC-based probability image between the reference image and the input image (c), color ratios between reference and current image (d), and image foreground (e)
4.3.2 Foreground Prior
In multiple object tracking, the targets frequently become completely or partially occluded. This leads to missing evidence: an occluded target is not observable in the image data. In the PETS 2009 datasets, some occlusions by the road sign (see images in Fig. 4.2) are relatively long-lasting. As a consequence, the tracker presented above was unable to successfully track some targets over the whole time span, i.e., from entering the scene until exiting the tracking area. Moreover, in a few cases, after losing the target, the tracker mistakenly locked onto background areas. To cope with such undesirable effects and to reduce the probability of the tracker locking onto non-target areas, we extended the feature vector b i with a term expressing the object prior. The seventh element of the extended feature vector expresses the object probability, which is determined by a foreground segmentation algorithm.
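The extension of the feature vector can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the layout of the first six elements and the function names are assumptions; the source states only that the foreground prior becomes the seventh element.

```python
import numpy as np

def extend_feature_vector(b_i, fg_prob):
    """Append a foreground-prior term as the seventh element of a
    per-target feature vector b_i (hypothetical 6-element layout)."""
    fg_prob = float(np.clip(fg_prob, 0.0, 1.0))  # keep it a valid probability
    return np.append(np.asarray(b_i, dtype=float), fg_prob)

# Example: a 6-element appearance vector gains a foreground prior of 0.8
b = extend_feature_vector([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], 0.8)
```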
4.3.3 Foreground Segmentation
Our foreground segmentation algorithm is based on a color reference image that is foreground-free and extracted automatically in advance, given a sequence of
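The reference-image comparison can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the source indicates that color ratios between the reference and the current image are used (cf. Fig. 4.3d), but the exact decision rule and the tolerance value below are assumptions, not taken from the text.

```python
import numpy as np

def foreground_mask(frame, reference, thresh=0.25):
    """Sketch of foreground segmentation against a foreground-free
    color reference image: mark pixels whose per-channel color ratio
    deviates noticeably from 1. `thresh` is an illustrative tolerance."""
    frame = frame.astype(float)
    reference = reference.astype(float)
    ratio = frame / (reference + 1e-6)            # per-channel color ratio (cf. Fig. 4.3d)
    deviation = np.abs(ratio - 1.0).max(axis=-1)  # worst-channel deviation per pixel
    return deviation > thresh                     # foreground where the ratio differs

# Toy example: a uniform gray reference with a bright patch in the frame
ref = np.full((8, 8, 3), 100.0)
frm = ref.copy()
frm[2:4, 2:4] = 200.0                             # simulated object
mask = foreground_mask(frm, ref)
```

In practice the threshold would be tuned (or replaced by a probabilistic model) so that illumination changes do not trigger false foreground detections.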