14.2 Approach
We assume that multiple cameras have already been installed around the given intersection. Naturally, the more cameras there are, the better the reference view that can be generated. In our work, evaluation was done with six cameras positioned at a uniform height; the detailed arrangement is shown in Fig. 14.2 (left). Our method places no technical restriction on the exact camera positions, and the symmetrical layout is used only for ease of explanation. Each camera and its clockwise neighbor form a pair, denoted C_n0 and C_n1, where n is the index of the pair. As in most real deployments, the cameras are not calibrated in advance.
Our onboard system is assumed to receive the image streams generated by each roadside camera as the vehicle approaches the intersection. The camera pair whose orientation is closest to that of the host vehicle is selected. The two images are then prewarped so that their image planes become parallel without moving the cameras' optical centers. Afterwards, we produce a novel view by linearly interpolating the positions and colors of the two prewarped images. The resulting image is parallel to the two prewarped images, and it is shape-preserving.
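As a rough illustration of the prewarp and interpolation stage, the sketch below uses OpenCV's uncalibrated rectification to make the two image planes parallel and then blends the prewarped images. The function names, the Nx2 float32 point format, and the simple color blend are our own assumptions for illustration, not the exact procedure of the chapter.

import cv2
import numpy as np

def prewarp_and_morph(img0, img1, pts0, pts1, F, s=0.5):
    # pts0/pts1: matched keypoints (Nx2, float32); F: fundamental matrix.
    h, w = img0.shape[:2]
    # Prewarp: homographies that make the two image planes parallel
    # (uncalibrated rectification, since the cameras are not calibrated).
    _, H0, H1 = cv2.stereoRectifyUncalibrated(pts0, pts1, F, (w, h))
    warp0 = cv2.warpPerspective(img0, H0, (w, h))
    warp1 = cv2.warpPerspective(img1, H1, (w, h))
    # Linearly interpolate matched point positions between the prewarped views.
    p0 = cv2.perspectiveTransform(pts0.reshape(-1, 1, 2), H0)
    p1 = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H1)
    pts_s = (1.0 - s) * p0 + s * p1
    # Simple per-pixel color blend as a stand-in for the full morph.
    view_s = cv2.addWeighted(warp0, 1.0 - s, warp1, s, 0.0)
    return view_s, pts_s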
The position of the perspective view is determined by the angle between the vehicle's heading and the directions of the two selected cameras. The interpolated image is then warped again to align with the host vehicle's direction. In this way, we generate a view for the virtual camera C_s shown in Fig. 14.2. After a zooming stage driven by driver interaction, the final output of the system is an approximate view that follows the host vehicle's motion.
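One plausible way to turn these angles into an interpolation parameter, offered purely as an assumption on our part, is to weight the two cameras in proportion to how far each direction lies from the vehicle's heading:

import numpy as np

def interpolation_weight(vehicle_dir, cam0_dir, cam1_dir):
    # All arguments are unit 2-D direction vectors on the ground plane.
    # s = 0 reproduces camera C_n0, s = 1 reproduces camera C_n1.
    a0 = np.arccos(np.clip(np.dot(vehicle_dir, cam0_dir), -1.0, 1.0))
    a1 = np.arccos(np.clip(np.dot(vehicle_dir, cam1_dir), -1.0, 1.0))
    return a0 / (a0 + a1)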
View morphing [3] is the inspiration for our method. It can generate an image from any viewpoint along the line linking the two original camera centers. Note that the original method requires prior knowledge of the cameras' projection matrices and relies heavily on manual operation. We extend it by integrating robust fundamental-matrix estimation and sparse keypoint matching. The following paragraphs describe this method in more detail.
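A minimal sketch of the matching and estimation step, assuming OpenCV with an ORB detector and RANSAC (the chapter does not name a specific detector or estimator):

import cv2
import numpy as np

def match_and_estimate_F(img0, img1):
    # Sparse keypoint matching followed by robust fundamental-matrix estimation.
    orb = cv2.ORB_create(2000)
    k0, d0 = orb.detectAndCompute(img0, None)
    k1, d1 = orb.detectAndCompute(img1, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d0, d1)
    pts0 = np.float32([k0[m.queryIdx].pt for m in matches])
    pts1 = np.float32([k1[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier matches while fitting F.
    F, mask = cv2.findFundamentalMat(pts0, pts1, cv2.FM_RANSAC, 3.0, 0.99)
    inliers = mask.ravel().astype(bool)
    return F, pts0[inliers], pts1[inliers]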
Fig. 14.2 Actual and virtual cameras