camera system and its output images. Each camera is connected to a personal computer equipped with a video capture board, and a clock generator linked to the cameras continuously provides synchronization signals. Table 1 shows the specification of the hybrid system.
Table 1 Specification of the hybrid camera system

Device            Specification    Detail
Stereo Camera     Output Format    NTSC or PAL (16:9 ratio, High Definition)
Depth Camera      Output Format    NTSC or PAL (4:3 ratio, Standard Definition)
                  Depth Range      0.5 to 7.0 m (in practice, 1.0 to 4.0 m)
                  Field of View    40 degrees
Sync. Generator   Output Signal    SD/HD video generation
In the hybrid camera system, we capture four synchronized 2D images in each
frame: left and right images from the stereoscopic camera, and color and depth
images from the depth camera. In order to clearly explain the methodology, we
define image terminologies used in the rest of this chapter as follows.
Left image: a color image captured by the left camera.
Right image: a color image captured by the right camera.
Color image: a color image captured by the depth camera.
Depth image: a depth image captured by the depth camera.
ROI depth image: a depth image spatially extended to HD resolution from the depth image captured by the depth camera.
ROI enhanced depth image: the final depth image, combining an ROI depth image with its background depth image.
Color and depth images naturally have the same resolution (720×480) as the depth camera. On the other hand, the remaining images (left, right, ROI depth, and ROI enhanced depth) have the same resolution (1920×1080) as the HD stereoscopic cameras.
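To illustrate the resolution gap between the two devices, the 720×480 depth map can be brought onto the 1920×1080 HD grid with a plain nearest-neighbor resampling. This is only a sketch of the spatial extension step, not the chapter's actual enhancement method, and the function name is our own:

```python
import numpy as np

def upsample_nearest(depth, out_h=1080, out_w=1920):
    """Nearest-neighbor resampling of an SD depth map onto the HD grid."""
    h, w = depth.shape
    rows = np.arange(out_h) * h // out_h   # HD row -> nearest SD row
    cols = np.arange(out_w) * w // out_w   # HD column -> nearest SD column
    return depth[rows[:, None], cols[None, :]]
```

Each HD pixel simply copies the value of its nearest SD depth pixel; the later ROI enhancement refines this coarse extension.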
Since we employ two different types of cameras to construct the hybrid camera system, it is necessary to compute their relative geometry through camera calibration. To carry out relative camera calibration, we measure the projection matrices P_s, P_l, and P_r of the depth, left, and right cameras, induced by their camera intrinsic matrices K_s, K_l, and K_r, rotation matrices R_s, R_l, and R_r, and translation vectors t_s, t_l, and t_r, respectively. Then, the left and right images are rectified by rectification matrices induced by the changed camera intrinsic matrices K_l′ and K_r′, the changed rotation matrices R_l′ and R_r′, and the changed translation vectors t_l′ and t_r′. Thereafter, we convert the rotation matrix R_s and the translation vector t_s of the depth camera into the identity matrix I and the zero vector 0, respectively, by multiplying by the inverse rotation matrix R_s^-1 and subtracting the translation vector itself. Hence, we
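The normalization described above can be sketched as follows: each camera's projection matrix is built as P = K [R | t], and all extrinsics are re-expressed in the depth camera's coordinate frame so that the depth camera itself ends up with R = I and t = 0. The matrix names follow the text, but the function names are our own (a minimal NumPy sketch, not the chapter's implementation):

```python
import numpy as np

def projection_matrix(K, R, t):
    """P = K [R | t]: maps homogeneous world points to image points."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def to_depth_camera_frame(R, t, R_s, t_s):
    """Re-express a camera's extrinsics in the depth camera's frame.

    After this change of world coordinates, the depth camera itself
    gets R = I and t = 0, as described in the text.
    """
    R_new = R @ np.linalg.inv(R_s)
    t_new = t - R_new @ t_s
    return R_new, t_new
```

For the depth camera, R_new = R_s R_s^-1 = I and t_new = t_s − t_s = 0; applying the same transform to the left and right cameras yields their poses relative to the depth camera.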