to highlight that, although these frames (Fig. 1 (top)) look quite similar, there is a considerable relative displacement between them.
A different scenario is shown in the two consecutive frames presented in Fig. 5. In that scene, the car is slowing down to stop at a red light while three pedestrians are crossing the street. Although the vehicle is reducing its speed, there is a relative displacement between these consecutive frames (see Fig. 6 (right)). The synthesized view of frame (n), using the computed 3D rigid displacement, is presented in Fig. 6 (left). Finally, the corresponding moving regions map is depicted in Fig. 7. Bounding boxes enclosing moving objects can provide reliable information for selecting candidate windows to be used by a classification process (e.g., a pedestrian classifier). In this case, the number of windows would be greatly reduced compared to other approaches in the literature, such as the 10^8 windows of an exhaustive scan [20] or the 2,000 windows of a uniform road sampling [9].
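To make the idea of candidate windows concrete, the following Python sketch shows one possible way to turn a binary moving-regions map into bounding boxes. It is a minimal illustration assuming the map is available as a NumPy array; the function name, the SciPy-based connected-component labelling and the `min_area` filter are choices made here, not details taken from the chapter.

```python
import numpy as np
from scipy import ndimage

def candidate_windows(moving_map, min_area=50):
    """Return bounding boxes (x, y, w, h) of connected moving regions.

    moving_map : 2D boolean array, True where a pixel was flagged as moving.
    min_area   : discard tiny regions that are likely noise (illustrative value).
    """
    labels, _ = ndimage.label(moving_map)          # connected-component labelling
    boxes = []
    for sl in ndimage.find_objects(labels):        # one (row-slice, col-slice) per component
        if sl is None:
            continue
        ys, xs = sl
        h, w = ys.stop - ys.start, xs.stop - xs.start
        if h * w >= min_area:
            boxes.append((xs.start, ys.start, w, h))
    return boxes

# Toy example: a 240x320 map with a single "moving" blob
toy_map = np.zeros((240, 320), dtype=bool)
toy_map[60:140, 100:150] = True
print(candidate_windows(toy_map))                  # [(100, 60, 50, 80)]
```

Each returned box could then be handed, possibly after some padding or rescaling, to a pedestrian classifier instead of scanning the whole image.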
5 Conclusions
This chapter presents a novel and robust approach for moving object detection by registering consecutive clouds of 3D points obtained from an on-board stereo camera. The registration process is applied only over two small sets of 3D points with known correspondences, using key point feature extraction and a RANSAC-like technique based on the closed-form solution provided by the unit quaternion method. Then, a synthesized 3D scene is obtained by mapping the whole set of points from the previous frame to the current one. Finally, a map of moving regions is generated by considering the difference between the current 3D scene and the synthesized one.
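As a rough illustration of the pipeline summarized above, the Python sketch below estimates the rigid displacement between two small sets of corresponding 3D key points using a RANSAC-like loop around the closed-form unit-quaternion solution (Horn's absolute orientation method), maps the previous cloud into the current frame, and thresholds point-wise distances to flag moving points. It is only a sketch under stated assumptions: the helper names, the number of iterations and the thresholds are illustrative, not the values used by the authors.

```python
import numpy as np

def rigid_from_quaternion(P, Q):
    """Closed-form rigid transform (R, t) with Q ~ R @ P + t,
    using the unit-quaternion method (Horn's absolute orientation)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - cp, Q - cq
    S = X.T @ Y                                    # 3x3 cross-covariance matrix
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, V = np.linalg.eigh(N)
    q0, qx, qy, qz = V[:, np.argmax(w)]            # quaternion of the largest eigenvalue
    R = np.array([
        [q0*q0 + qx*qx - qy*qy - qz*qz, 2*(qx*qy - q0*qz),             2*(qx*qz + q0*qy)],
        [2*(qx*qy + q0*qz),             q0*q0 - qx*qx + qy*qy - qz*qz, 2*(qy*qz - q0*qx)],
        [2*(qx*qz - q0*qy),             2*(qy*qz + q0*qx),             q0*q0 - qx*qx - qy*qy + qz*qz],
    ])
    t = cq - R @ cp
    return R, t

def ransac_rigid(P, Q, iters=200, inlier_thresh=0.10):
    """RANSAC-like loop over minimal samples of 3 correspondences;
    the final transform is refitted on the inliers of the best sample."""
    rng = np.random.default_rng(0)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_from_quaternion(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < inlier_thresh              # distance threshold (illustrative)
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_from_quaternion(P[best_inliers], Q[best_inliers])

def moving_region_mask(prev_cloud, curr_cloud, R, t, dist_thresh=0.25):
    """Synthesize the previous cloud in the current frame and flag points
    that are far from any current point (brute-force nearest neighbour)."""
    synthesized = prev_cloud @ R.T + t
    d = np.linalg.norm(synthesized[:, None, :] - curr_cloud[None, :, :], axis=2).min(axis=1)
    return d > dist_thresh
```

In practice the brute-force nearest-neighbour step would be replaced by a spatial index or an image-plane comparison, but the structure of the sketch follows the three stages described above: sparse registration, synthesis of the previous cloud in the current frame, and differencing.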
As future work, more advanced approaches for combining registered frames will be studied. For instance, instead of using only consecutive frames, temporal windows including more frames are likely to help filter out noisy areas. Furthermore, the color information of each pixel could be used during the estimation of the moving regions map.
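One simple way such a temporal window could be exploited is sketched below purely as an assumption: the per-pixel voting scheme and its parameters are not part of the chapter, and the class name is hypothetical. The idea is to keep the last k moving-region maps and report a pixel as moving only if it was flagged in several of them.

```python
import numpy as np
from collections import deque

class TemporalMovingMap:
    """Accumulate the last k per-frame moving-region maps and keep a pixel
    only if it was flagged as moving in at least `min_votes` of them."""

    def __init__(self, k=5, min_votes=3):
        self.min_votes = min_votes
        self.window = deque(maxlen=k)              # sliding temporal window

    def update(self, moving_map):
        """moving_map: 2D boolean array for the current frame pair."""
        self.window.append(moving_map.astype(np.uint8))
        votes = np.sum(np.stack(self.window), axis=0)
        return votes >= self.min_votes             # temporally filtered map
```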
Acknowledgment. This work was supported in part by the Spanish Ministry of Science and
Innovation under Projects TRA2010-21371-C03-01, TIN2010-18856 and Research Program
Consolider Ingenio 2010: MIPRCV (CSD2007-00018).
References
1. Amir, S., Barhoumi, W., Zagrouba, E.: A robust framework for joint background/foreground segmentation of complex video scenes filmed with freely moving camera. Pattern Analysis and Applications 46(2), 175-205
2. Benjemaa, R., Schmitt, F.: A solution for the registration of multiple 3D point sets using unit quaternions. In: Burkhardt, H., Neumann, B. (eds.) ECCV 1998. LNCS, vol. 1407, pp. 34-50. Springer, Heidelberg (1998)
3. Besl, P., McKay, N.: A method for registration of 3D shapes. IEEE Trans. on Pattern Analysis and Machine Intelligence 14(2), 239-256 (1992)
 