images with moving targets. Afterwards we employ both region and pixel cues
which handle the illumination variations. In addition, we adapt the reference
image online to the illumination and scene changes. The reference image is
extracted on the basis of the median of pixel values in some temporal window. For
the 'S2L1_View_1' sequence, the number of images that were needed to extract
the foreground-free images was equal to 40. Figure 4.3 (b) depicts the reference
image which was extracted using pixel intensities and the above-mentioned number
of images.
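The temporal-median extraction of the reference image can be sketched as follows; this is a minimal illustration (the function name and array shapes are assumptions, not from the source):

```python
import numpy as np

def median_reference_image(frames):
    """Estimate a foreground-free reference image as the per-pixel
    median over a temporal window of frames (each an H x W [x C] array).
    Pixels occupied by moving objects in only a minority of frames
    fall back to the background value."""
    stack = np.stack(frames, axis=0)          # (N, H, W[, C])
    return np.median(stack, axis=0).astype(stack.dtype)
```

For the 'S2L1_View_1' sequence above, the window would hold 40 frames; any pixel covered by a moving target in fewer than half of them recovers its background value.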
The normalized cross-correlation (NCC) was used to extract a brightness- and
contrast-invariant similarity between the reference image and the current image.
It was computed very efficiently using integral images. The NCC was used to
generate the probability images between the reference image and the current
image, see Fig. 4.3 (c).
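A straightforward sketch of the NCC-based probability image is given below. The function names and window size are illustrative assumptions; the book's implementation uses integral images for efficiency, which this direct version omits for clarity:

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-12):
    """Normalized cross-correlation of two equally sized patches.
    Invariant to affine brightness/contrast changes (gain and offset)."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)

def ncc_probability_image(reference, current, win=2):
    """Per-pixel NCC between reference and current grayscale images over
    a (2*win+1)^2 neighborhood; values near 1 indicate pixels whose local
    structure matches the background (including shadowed regions)."""
    h, w = reference.shape
    out = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            out[y, x] = ncc(reference[y - win:y + win + 1, x - win:x + win + 1],
                            current[y - win:y + win + 1, x - win:x + win + 1])
    return out
```

Because the NCC removes the patch mean and normalizes by the patch energy, a shadow that merely darkens the background still scores close to one.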
We construct an image of color ratios between the reference image and the
current image, where the value of each pixel at location x_1 is given by [4]:
\[
\left[ \arctan\frac{R_c(x_1)}{R_r(x_1)},\;
       \arctan\frac{G_c(x_1)}{G_r(x_1)},\;
       \arctan\frac{B_c(x_1)}{B_r(x_1)} \right]^T
\qquad (4.7)
\]
where c and r denote the current and reference image, respectively, whereas R, G, B
stand for the color components of the RGB color space. Such color ratios are
independent of the illumination, change in viewpoint, and object geometry.
Figure 4.3 (d) depicts an example image of color ratios. We can observe that for
pixels belonging to the background the color assumes gray values. This happens
because the color channels in the RGB color space are highly correlated. Moreover,
the color ratios within the background are far smaller than the ratios between the
foreground and the background. However, as we can observe in the color ratio
image, there are noisy pixels. The majority of such noisy pixels can be excluded
from the image using the probability images extracted by the normalized
cross-correlation.
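Eq. (4.7) can be computed channel-wise in a few lines; the function name below is an assumption, and `arctan2` is used in place of a plain `arctan` of the quotient so that zero-valued reference pixels do not divide by zero:

```python
import numpy as np

def color_ratio_image(current, reference):
    """Per-pixel arctan of channel-wise ratios between the current and
    reference RGB images (cf. Eq. 4.7). For background pixels, where the
    current value roughly equals the reference value, every channel maps
    near arctan(1) = pi/4, so the result appears gray when rescaled."""
    c = current.astype(np.float64)
    r = reference.astype(np.float64)
    # arctan2(c, r) equals arctan(c / r) for r > 0 and stays finite at r = 0
    return np.arctan2(c, r)
```

The `arctan` compresses large foreground/background ratios toward pi/2, which keeps the image bounded while preserving the background-versus-foreground contrast described above.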
In our algorithm, we compute the reference image online using the running median.
Afterwards, given such an image, we compute the difference image. The difference
image is then employed in a simple rule-based classifier, which extracts the
foreground objects and shadowed areas. In the classifier, we also utilize the
probability image extracted via normalized cross-correlation, as well as the color
ratios. The classifier decides whether a pixel belongs to the background, shadow,
or foreground. For shadowed pixels the normalized cross-correlation assumes
values near one. The output of the classifier is the enhanced object probability
image. Optionally, in the final stage, we employ the graph-cut optimization
algorithm [7] in order to fill small holes in the foreground objects.
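The running-median update and the shadow/foreground rule can be sketched as below. This is a simplified illustration, not the book's exact classifier: the threshold values, function names, and the approximate sign-based median update are all assumptions:

```python
import numpy as np

def update_running_median(ref, frame, step=1):
    """Approximate running median: nudge each reference pixel toward the
    current frame by a fixed step, so the reference slowly converges to
    the per-pixel temporal median without storing past frames."""
    ref = ref.astype(np.int32)
    ref += step * np.sign(frame.astype(np.int32) - ref)
    return ref

def classify_pixels(diff, ncc_prob, fg_thresh=30, shadow_ncc=0.9):
    """Toy rule-based classifier (thresholds are illustrative):
    0 = background, 1 = shadow, 2 = foreground.
    A pixel that differs from the reference but keeps a high NCC with
    it (structure preserved, only darkened) is labeled as shadow."""
    labels = np.zeros(diff.shape, dtype=np.uint8)
    changed = np.abs(diff) > fg_thresh
    labels[changed & (ncc_prob >= shadow_ncc)] = 1   # shadow: NCC near one
    labels[changed & (ncc_prob < shadow_ncc)] = 2    # foreground object
    return labels
```

The sign-based update is a common constant-memory approximation of the running median; the book does not specify this exact scheme, and a true sliding-window median could be substituted.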
4.3.4 Re-diversification of the Swarm
At the beginning of each frame, in some neighborhood of the swarm's best location
g_t, the algorithm selects possible object candidates. Such object candidates are
delineated using the foreground blobs. A simple heuristic, which is based on blob