configuration whatever the chosen depth distortion settings and viewing device, provided we crop the required capture area CA_i in each digital photo so as to comply with the required shooting geometry. With this photo rail, some distortion may be introduced by the digital camera, but it is consistent across all images and of negligible magnitude, since the equipment is professional grade. We have not attempted to correct these possible distortions, but such a task could be done easily.
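As an illustration of the cropping step, the following minimal sketch (not the authors' implementation) assumes that every capture area CA_i is an axis-aligned rectangle of identical size whose horizontal offset varies linearly with the camera index i; all function names and parameters (ca_width, ca_height, ca_top, shift_px) are hypothetical.

# Minimal cropping sketch (hypothetical parameters, not the authors' code).
# Assumes the capture areas CA_i are identical rectangles whose horizontal
# position shifts linearly with the camera index i.
from PIL import Image

def crop_capture_area(photo_path, i, n_views, ca_width, ca_height, ca_top, shift_px):
    """Crop the capture area CA_i from the i-th photo of an n-view sequence."""
    img = Image.open(photo_path)
    # Horizontal offset grows linearly with i; centred for the middle view.
    left = (img.width - ca_width) // 2 + round((i - (n_views - 1) / 2) * shift_px)
    box = (left, ca_top, left + ca_width, ca_top + ca_height)
    return img.crop(box)

# Example: extract the 8 views required by an 8-view autostereoscopic display.
views = [crop_capture_area(f"photo_{i}.jpg", i, 8,
                           ca_width=1920, ca_height=1080,
                           ca_top=200, shift_px=35)
         for i in range(8)]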
Thanks to the previous shooting design method, we know how to create a camera system containing several lens/image-sensor pairs in order to produce simultaneously the multiple images required by an autostereoscopic display with a desired depth effect. Since there are several such pairs, the distortions they induce can differ. We have therefore introduced a pair-by-pair calibration and correction process based on Zhang's model [23]. We have already produced two prototype camera systems delivering a multi-view video stream in real time (25 Hz). Their layout parameters were chosen so that specific scenes (see below) are rendered without distortion, and were fixed at manufacturing time. The first camera system (Fig. 7(b)) shoots a life-size scene (ratio k_i = 1), the bust of a person, to be viewed on a 57” autostereoscopic parallax display (optimal viewing distance 4.2 m). The second camera system (Fig. 7(c)) shoots small objects (on the order of 10-20 cm) to be displayed on a 24” autostereoscopic lenticular display (optimal viewing distance 2.8 m) with an enlargement factor set to k_i = 1.85. According to numerous viewers, both novice and expert, the 3D perception is very good.
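As a rough sketch of what such a pair-by-pair calibration and correction step can look like, the code below uses OpenCV, whose calibrateCamera routine implements Zhang's planar-checkerboard method; the checkerboard size and file names are illustrative assumptions and do not describe the authors' actual procedure.

# Sketch of per-pair calibration/correction with a planar checkerboard
# (Zhang's method as implemented by OpenCV); board size and file names
# are illustrative assumptions.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, img_size = [], [], None
for path in glob.glob("pair3_checkerboard_*.png"):   # views taken by one lens/sensor pair
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics and distortion coefficients of this particular lens/sensor pair.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 img_size, None, None)

# Correction applied to every frame delivered by this pair.
frame = cv2.imread("pair3_frame.png")
undistorted = cv2.undistort(frame, K, dist)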
For example, Figure 8 illustrates the shooting of a room in the “Musee Automobile Reims Champagne” [24] in Reims, with a perfect depth effect for the 57” autostereoscopic parallax display (optimal viewing distance 4.2 m). We made a 3D shot of a large hall with significant depth¹.
5.3 Combination of Real and Virtual 3D Scenes
The work reported in this chapter is part of a larger project aimed at combining real and virtual 3D scenes, that is, at 3D augmented reality. This could be applied to autostereoscopic displays in a straightforward way by overlaying virtual objects on each image. However, it is much more interesting to use the depth information of the real scene, so that virtual objects can be hidden by real ones. To that end, one depth map must be obtained for each view; a minimal compositing sketch is given below.
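The following is a minimal sketch of depth-aware compositing, not the project's implementation: the virtual object's pixels are kept only where the virtual depth is closer to the camera than the real depth recovered for that view. The function name and array layout are hypothetical.

# Depth-aware compositing sketch (illustrative only): a per-pixel z-test
# decides whether the virtual object or the real scene is visible.
import numpy as np

def composite_view(real_rgb, real_depth, virt_rgb, virt_depth, virt_mask):
    """Insert a rendered virtual object into one real view using its depth map.

    real_rgb   : (H, W, 3) real image of this view
    real_depth : (H, W)    depth map estimated for this view
    virt_rgb   : (H, W, 3) rendering of the virtual object for the same viewpoint
    virt_depth : (H, W)    depth buffer of that rendering
    virt_mask  : (H, W)    True where the virtual object was rendered
    """
    visible = virt_mask & (virt_depth < real_depth)   # per-pixel z-test
    out = real_rgb.copy()
    out[visible] = virt_rgb[visible]
    return out

# The same test is repeated for each of the N views fed to the display,
# each view using its own depth map.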
The particular context of images intended for autostereoscopic displays allows working with a simplified geometry: no rectification is needed, corresponding epipolar lines are horizontal lines of the same rank, and disparity vectors are therefore aligned along the abscissa. The aim
is to obtain a good estimation of depth in any kind of scene, without making any
assumption about its content. In our project, Niquin et al. [25] have been working on this subject and have presented first results on accurate multi-view depth reconstruction with occlusion handling. Their new approach handles occlusions in stereovision algorithms in the multi-view context, using images intended for autostereoscopic displays; it takes advantage of information from all views and ensures the consistency of their disparity maps. For example, Figure 9
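To make the simplified geometry concrete, here is a toy scanline-matching sketch. It is not Niquin et al.'s algorithm [25]; it only illustrates that, with horizontal epipolar lines, the disparity search reduces to a 1D search along the x axis, and it uses a basic left-right consistency check as a simple stand-in for the multi-view consistency their method enforces. Window size, disparity range, and the SAD cost are illustrative assumptions.

# Toy disparity estimation between two neighbouring views (illustrative only):
# with horizontal epipolar lines, the search is purely horizontal.
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_scanline(left, right, max_disp=64, win=9):
    """Per-pixel horizontal disparity from the left view to the right view (SAD cost)."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        shifted = np.empty_like(right)
        shifted[:, d:] = right[:, :w - d]        # candidate match d pixels to the left
        if d:
            shifted[:, :d] = right[:, :1]        # crude border handling
        cost = uniform_filter(np.abs(left - shifted), size=win)   # windowed SAD
        better = cost < best_cost
        best_cost[better] = cost[better]
        disp[better] = d
    return disp

def left_right_consistency(disp_lr, disp_rl, tol=1):
    """Keep only pixels whose left->right and right->left disparities agree."""
    h, w = disp_lr.shape
    xs = np.arange(w)
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        x_r = np.clip(xs - disp_lr[y], 0, w - 1)              # landing column in the right view
        valid[y] = np.abs(disp_lr[y] - disp_rl[y, x_r]) <= tol
    return valid

# Usage between two neighbouring views i and i+1 (grayscale numpy arrays):
# disp_lr = disparity_scanline(view_i, view_ip1)
# disp_rl = np.fliplr(disparity_scanline(np.fliplr(view_ip1), np.fliplr(view_i)))
# valid   = left_right_consistency(disp_lr, disp_rl)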