Fig. 14.3 Experimental results
14.5 Experimental Results
Our evaluation experiment is conducted using an intersection model at 1:38 scale. We use six cameras with a resolution of 640 × 480 pixels. The camera configuration approximates the one shown in Fig. 14.2 (left). Remote-controlled toy cars and bikes are used to obtain test image sequences. Figure 14.3 (top left and middle) shows a pair of sample input images from the left and right cameras, respectively.
First, the fundamental matrices are estimated as described in Sect. 14.3.1. The prewarping transformations H0 and H1 are then computed from F. As an example, we take the pair of camera images shown in Fig. 14.3 (top left and middle). By jointly using the SIFT and Harris detectors, about two thousand keypoints are selected in each image, and their distribution is normalized as shown in Fig. 14.3 (top right; green: SIFT, red: Harris). Using the matching criterion of Sect. 14.4.1, followed by a manually operated refining step, two hundred and ten features are finally retained as correspondences. We then apply the projective transformations to the two images (Fig. 14.3, bottom left and middle) as well as to the coordinates of the matching points.
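The following Python sketch illustrates this pipeline with OpenCV. It is a minimal, illustrative reconstruction rather than the chapter's implementation: the image file names, detector parameters, and ratio-test threshold are placeholder choices, and cv2.stereoRectifyUncalibrated is used as a common stand-in for deriving the prewarping homographies H0 and H1 from F.

import cv2
import numpy as np

img0 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()

def detect_keypoints(img):
    # SIFT keypoints plus Harris corners; SIFT descriptors are computed for both sets.
    kps = list(sift.detect(img, None))
    corners = cv2.goodFeaturesToTrack(img, maxCorners=1000, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True)
    kps += [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in corners]
    return sift.compute(img, kps)

kp0, des0 = detect_keypoints(img0)
kp1, des1 = detect_keypoints(img1)

# Descriptor matching with Lowe's ratio test as a simple matching criterion.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des0, des1, k=2)
        if m.distance < 0.7 * n.distance]
pts0 = np.float32([kp0[m.queryIdx].pt for m in good])
pts1 = np.float32([kp1[m.trainIdx].pt for m in good])

# Robust (RANSAC) estimation of the fundamental matrix F.
F, inlier_mask = cv2.findFundamentalMat(pts0, pts1, cv2.FM_RANSAC, 1.0, 0.99)
pts0 = pts0[inlier_mask.ravel() == 1]
pts1 = pts1[inlier_mask.ravel() == 1]

# Prewarping homographies H0, H1 derived from F: they bring the two image
# planes into a rectified (parallel) configuration before interpolation.
h, w = img0.shape
_, H0, H1 = cv2.stereoRectifyUncalibrated(pts0, pts1, F, (w, h))
warped0 = cv2.warpPerspective(img0, H0, (w, h))
warped1 = cv2.warpPerspective(img1, H1, (w, h))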
Without automatic estimation of the vehicle's direction, we then produce a reference image with a manually assigned morphing rate s and camera tilt angle γ. The resulting image is shown in Fig. 14.3 (bottom right). Even though it contains some ghosting artifacts, it shows that the proposed method works well.
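A corresponding sketch of the morphing step is given below; it continues the previous sketch (warped0, warped1, pts0, pts1, H0, and H1 are assumed from there) and is again only an approximation of the chapter's method. The matched points are linearly interpolated with the rate s, and a single homography per view, rather than a dense warp, maps each prewarped image onto the interpolated geometry before cross-dissolving; the tilt angle γ, which would enter through a postwarp, is omitted.

import cv2

# Map the inlier correspondences into the prewarped frames.
pts0_w = cv2.perspectiveTransform(pts0.reshape(-1, 1, 2), H0).reshape(-1, 2)
pts1_w = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H1).reshape(-1, 2)

s = 0.5  # manually assigned morphing rate (illustrative value)

# Core view-morphing step: linear interpolation of matched point positions
# between the two prewarped (rectified) views.
pts_s = (1.0 - s) * pts0_w + s * pts1_w

# Warp each prewarped image toward the interpolated geometry, then cross-dissolve.
h, w = warped0.shape[:2]
H0s, _ = cv2.findHomography(pts0_w, pts_s, cv2.RANSAC, 2.0)
H1s, _ = cv2.findHomography(pts1_w, pts_s, cv2.RANSAC, 2.0)
im0 = cv2.warpPerspective(warped0, H0s, (w, h))
im1 = cv2.warpPerspective(warped1, H1s, (w, h))
reference = cv2.addWeighted(im0, 1.0 - s, im1, s, 0.0)

In a full view-morphing implementation, a dense, correspondence-driven warp would replace the per-view homographies, and a postwarp depending on the desired tilt angle γ would map the interpolated image back to the target viewing plane.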
14.6 Conclusion
In this chapter, we proposed a method to generate a reference view of a traffic intersection for safe-driving assistance. We adapted the view morphing approach and extended it with robust fundamental matrix estimation and automatic feature matching.