Fig. 3.3 Representation of the texture (left) and associated depth-map (right) for camera 3 of the Ballet sequence
Fig. 3.4 MVD system based on DIBR for view synthesis in the decoder [11]
MVD uses a fixed number of texture views together with the associated depth information. Depth-image-based rendering (DIBR) techniques are then used at the decoder side to combine the information of the transmitted views and their associated depth-maps, in order to synthesize the intermediate views required by the display. This process is represented in Fig. 3.4. The limited number of encoded views reduces the bit-rate required to transmit the 3DV signal, while the use of DIBR allows the generation of a large number of views at the decoder.
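As a rough illustration of the warping step behind DIBR (a generic sketch, not the specific synthesis algorithm of [11]), the following code forward-warps one texture view to a nearby virtual viewpoint, assuming a rectified, parallel camera setup; the function and parameter names (warp_view, baseline_to_virtual) are illustrative. Disocclusion handling and hole filling, which a complete synthesis pipeline requires, are deliberately omitted.

```python
# Minimal DIBR forward-warping sketch for a rectified, parallel camera setup
# (an assumption for illustration only). Each pixel of the reference texture
# view is shifted horizontally by a disparity derived from its depth value.
import numpy as np

def warp_view(texture, depth, focal_length, baseline_to_virtual):
    """Forward-warp `texture` (H x W x 3) to a virtual camera displaced by
    `baseline_to_virtual` along the x-axis, using per-pixel depth (H x W)."""
    h, w = depth.shape
    synthesized = np.zeros_like(texture)
    z_buffer = np.full((h, w), np.inf)  # keep the closest surface per target pixel

    for y in range(h):
        for x in range(w):
            z = depth[y, x]
            if z <= 0:
                continue  # skip invalid depth samples
            disparity = focal_length * baseline_to_virtual / z
            x_target = int(round(x - disparity))
            if 0 <= x_target < w and z < z_buffer[y, x_target]:
                z_buffer[y, x_target] = z
                synthesized[y, x_target] = texture[y, x]
    return synthesized  # disocclusions remain as holes and need inpainting
```

In practice, two neighbouring reference views are usually warped to the same virtual position and blended, so that most holes in one warped view are covered by the other.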
The depth information, z, is available for 3D synthetic sequences, which are based on a geometric model [11]. For natural scenes, the depth value can be determined from the disparity, d, and the geometry of the acquisition system (namely the baseline distance between the cameras, b, and the focal length, f), by z = f · b / d. The accuracy of the depth value depends on the
disparity estimated from the texture views. A large number of disparity estimation methods have been proposed in the literature (e.g., [13-15]). In spite of the improvements in the accuracy of these methods, errors in the determined depth values are still common, especially for totally or partially occluded areas of the 3D scene. These errors limit the applicability of the estimated depth-maps, due to the impact of depth errors on the view synthesis process [11].
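As a small worked example of the relation z = f · b / d, the sketch below converts an estimated disparity map into a depth map, assuming a rectified stereo pair; the focal length and baseline values are made up for illustration, and non-positive disparities (typical of occluded or mismatched pixels) are masked out rather than converted.

```python
# Illustrative disparity-to-depth conversion using z = f * b / d with
# hypothetical camera parameters (not taken from the text).
import numpy as np

def disparity_to_depth(disparity, focal_length, baseline):
    """Return a depth map and a validity mask from a disparity map (in pixels)."""
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0          # occluded/mismatched pixels have no usable disparity
    depth[valid] = focal_length * baseline / disparity[valid]
    return depth, valid

# Example: f = 1000 px, b = 0.1 m, d = 25 px  ->  z = 1000 * 0.1 / 25 = 4 m.
disparity = np.array([[25.0, 0.0],
                      [50.0, 10.0]])
depth, valid = disparity_to_depth(disparity, focal_length=1000.0, baseline=0.1)
```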
In recent years, depth sensors, such as time-of-flight cameras, have entered the consumer market. These sensors are currently only capable of acquiring low-resolution depth-maps, which are usually enhanced by post-processing methods based on interpolation and denoising filters. Furthermore, these sensors