Digital Signal Processing Reference
In-Depth Information
the two stereoscopic video frames [3]. The constraints of epipolar geometry
are taken into consideration in order to identify robust matches, to estimate
the positions of both cameras, and to compute the parameters for correcting
camera misalignments and keystone distortions.
Similarly, photometric parameters are analysed to detect any mismatches
between the two views.
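The epipolar constraint mentioned above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: assuming a known fundamental matrix `F`, a correct correspondence satisfies x'ᵀFx = 0, so candidate matches with a large algebraic residual can be rejected (the function names and the tolerance are hypothetical).

```python
import numpy as np

def epipolar_residual(F, pts_left, pts_right):
    """Algebraic epipolar residual |x'^T F x| for each candidate match.

    F         : 3x3 fundamental matrix relating the two camera views
    pts_left  : (N, 2) pixel coordinates in the left image
    pts_right : (N, 2) pixel coordinates in the right image
    """
    # Promote to homogeneous coordinates (x, y, 1).
    xl = np.hstack([pts_left, np.ones((len(pts_left), 1))])
    xr = np.hstack([pts_right, np.ones((len(pts_right), 1))])
    # A perfect match on corresponding epipolar lines gives x'^T F x = 0.
    return np.abs(np.einsum('ij,jk,ik->i', xr, F, xl))

def filter_matches(F, pts_left, pts_right, tol=1.0):
    """Keep only the matches whose epipolar residual is below `tol`."""
    return epipolar_residual(F, pts_left, pts_right) < tol
```

For a perfectly rectified rig the epipolar lines are horizontal, so the residual reduces to the vertical offset between the matched points; a non-zero residual then directly signals a vertical misalignment between the cameras.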
Accordingly, all computed geometric and photometric correction parame-
ters are either stored as a metadata file for offline post-production purposes,
or are directly applied in the adjustment of the stereoscopic camera rig in
real time. These adjustments involve steering the lens control, changing the
electronic camera settings, repositioning the cameras in the case of motorized
lenses and rig, and interfacing with the camera signal processing. In addi-
tion to these processes, the developed stereoscopic video analysis tool also
provides an intuitive graphical user interface that shows the cameraman and
the stereographer a histogram of the current disparity levels in the shot 3D
video, and thus the current clipping depth planes of the scene.
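The disparity histogram described above can be sketched in a few lines. This is a simplified illustration, assuming a per-pixel disparity map is already available (the function name and the NaN convention for invalid pixels are assumptions, not part of the tool described in [3]):

```python
import numpy as np

def disparity_histogram(disparity_map, n_bins=64):
    """Histogram of disparity values across a frame, plus the extreme
    (clipping) disparity planes actually occupied by the scene.

    disparity_map : 2D array of per-pixel disparities (in pixels);
                    invalid pixels are marked with NaN.
    """
    d = disparity_map[np.isfinite(disparity_map)]
    counts, edges = np.histogram(d, bins=n_bins)
    # The nearest and farthest occupied planes bound the scene's depth budget.
    return counts, edges, d.min(), d.max()
```

The returned extremes correspond to the clipping depth planes shown to the cameraman: if either one drifts outside the comfortable parallax budget, the convergence or inter-axial distance needs adjusting.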
The cameraman is free to adjust the convergence and the inter-axial distance
between the cameras accordingly. However, the analysis tool can itself
compute the optimum inter-axial distance, once the other rigging parameters
and the cameras' focal lengths are fixed, subject to the constraint that the
3D video is visualized within the comfortable viewing zone.
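As a rough illustration of how such an optimum can be derived (this is an idealised parallel-camera model with hypothetical names, not the tool's actual computation): with disparity d = f·b/Z, the disparity spread between the nearest and farthest scene objects is f·b·(1/Z_near − 1/Z_far), and capping that spread at a comfort budget yields the baseline directly.

```python
def optimum_baseline(focal_px, z_near, z_far, max_disparity_px):
    """Inter-axial distance (same unit as z_near/z_far) that keeps the
    scene's disparity range within a comfort budget, for an idealised
    parallel-camera model with disparity d = f * b / Z.
    """
    # Disparity spread across the scene: f * b * (1/z_near - 1/z_far).
    return max_disparity_px / (focal_px * (1.0 / z_near - 1.0 / z_far))
```

For example, with a 1000-pixel focal length, scene depths from 2 m to 10 m, and a 40-pixel disparity budget, the model suggests a 0.1 m baseline.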
Interested readers can refer to [3] and [7] for more information about the
working principles of the described real-time stereoscopic video analyser
tool.
2.2.2.3 2D-to-3D Conversion for Stereoscopic Video Generation
With the spread of three-dimensional visualization systems, another
core research area has been the conversion of existing (previously recorded)
programmes into stereoscopic 3D. The term 2D-to-3D conversion refers to using
monocular depth cues obtained from a 2D video sequence to generate an
equivalent 3D video sequence.
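One of the simplest monocular depth cues can be sketched as follows. This is a toy illustration under a strong assumption (a visible ground plane, so that lower image rows are closer to the camera); the function name is hypothetical and real converters combine many such cues:

```python
import numpy as np

def depth_from_vertical_position(height, width):
    """Crude monocular depth cue: when a ground plane dominates the scene,
    the lower part of the frame is closer than the upper part, so the row
    index alone gives a coarse relative-depth prior.

    Returns relative depths in (0, 1]; larger values mean farther away.
    """
    rows = np.linspace(1.0, 0.1, height)  # top row farthest, bottom nearest
    return np.tile(rows[:, None], (1, width))
```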
2D-to-3D video conversion has been motivated mostly by two scenarios.
The main one is converting existing 2D video archives into 3D versions.
The other is creating original 3D content that is shot in 2D (e.g. using
a single video camera), since the cost of maintaining and operating a 3D
capturing system can exceed that of a well-established 2D capturing
system. The conversion process primarily
necessitates the segmentation of the 2D image sequence. For each segmented
video object, the relative depth is computed using two-dimensional visual
cues. The process then involves locating the occlusion areas (i.e. the video
regions that are visible in one view but not in the other) and concealing
them using appropriate pixel interpolation techniques and the texture
information of the surrounding segments.
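The warping and occlusion-concealment steps above can be sketched in a depth-image-based-rendering style. This is a deliberately simplified, hedged sketch (hypothetical function name, grayscale frames, horizontal-only interpolation for hole filling); a production converter would composite overlapping pixels in depth order and use richer inpainting:

```python
import numpy as np

def synthesize_right_view(image, depth, focal_px, baseline):
    """Warp a 2D frame into a second (right-eye) view using a per-pixel
    depth map, then conceal the disoccluded holes by interpolating from
    the valid texture surrounding them on each row.
    """
    h, w = image.shape
    right = np.full((h, w), np.nan)          # NaN marks disoccluded holes
    disparity = np.round(focal_px * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]         # shift pixel by its disparity
            if 0 <= xr < w:
                right[y, xr] = image[y, x]
        # Occlusion concealment: fill holes from valid neighbours on the row.
        row = right[y]
        holes = np.isnan(row)
        if holes.any() and not holes.all():
            valid = np.flatnonzero(~holes)
            row[holes] = np.interp(np.flatnonzero(holes), valid, row[valid])
    return right
```

With a constant-depth frame the warp reduces to a pure horizontal shift, and the concealment step extends the edge texture into the region uncovered by the shift.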