comfort zone, which is usually measured by the ratio of the 3D screen's
width to the average distance of the viewer from the screen. The challenge
is to adjust these two parameters such that all video objects within the reconstructed 3D visualization fit within the comfortable depth limits with respect to the screen, yet remain as faithful as possible to the actual scene being shot.
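The comfort zone is often expressed as a disparity budget on the screen, which the viewing geometry then maps to a perceived depth range around the screen plane. The sketch below is purely illustrative: the 1% disparity budget, the viewing distance of three screen widths, the eye separation, and all function names are assumptions for illustration, not values from the text.

```python
# Illustrative sketch: convert an assumed on-screen disparity budget into
# perceived depth in front of and behind the screen plane.

def perceived_depth(disparity_m, viewing_distance_m, eye_separation_m=0.065):
    """Depth of a point relative to the screen for a given on-screen
    disparity (positive disparity -> behind the screen, negative -> in front).
    Derived from similar triangles of the two eye rays through the screen."""
    return (viewing_distance_m * disparity_m) / (eye_separation_m - disparity_m)

screen_width_m = 1.0
viewing_distance_m = 3 * screen_width_m   # assumed typical viewing distance
budget = 0.01 * screen_width_m            # assumed ~1% of screen width

behind = perceived_depth(budget, viewing_distance_m)
front = perceived_depth(-budget, viewing_distance_m)
print(f"comfortable depth range: {front:.2f} m (front) to {behind:.2f} m (behind)")
```

Note how asymmetric the range is: the same disparity magnitude buys noticeably more depth behind the screen than in front of it, which is one reason crossed (negative) disparity is budgeted more conservatively.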
While the inter-axial distance between the cameras mainly determines the overall disparity of the captured stereoscopic video pair, and hence the global depth, the amount of introduced convergence changes the relative depth positioning of each video object with respect to the screen plane. Improper adjustment of the camera convergence usually results in binocular "keystone"-type distortion, which is caused by projecting the video onto the screen plane at an angle. This distortion increases with increasing object-to-camera distance, decreasing convergence distance, and decreasing focal distance [4]. Furthermore, if the convergence between the cameras is not properly adjusted, the viewer may perceive incorrect distances between the surfaces of 3D video objects and the screen plane, and the objects' shapes appear distorted relative to their natural representation [4]. As a result, the projected scene collapses into only a sparse set of depth planes and most 3D video objects appear unnaturally flat.
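The two roles described above can be sketched with a simple pinhole model: the baseline scales all disparities (global depth), while the convergence distance sets the zero-disparity plane and so decides which objects appear in front of or behind the screen. Everything in this sketch (the sensor-shift convergence model, the function name, and the parameter values) is an illustrative assumption, not the text's own formulation.

```python
# Illustrative sketch: disparity of a scene point for a parallel camera
# pair whose zero-disparity (convergence) plane is set by a sensor shift.

def disparity_px(depth_m, baseline_m, focal_px, convergence_m):
    """Disparity in pixels of a point at depth_m.
    Positive -> the point appears behind the screen plane; negative -> in front.
    Raw parallel-camera disparity is f*b/Z; converging at distance C via a
    sensor shift (or crop) subtracts the constant f*b/C."""
    return focal_px * baseline_m * (1.0 / convergence_m - 1.0 / depth_m)

f_px, baseline = 1000.0, 0.065   # assumed focal length (px) and baseline (m)
print(disparity_px(2.0, baseline, f_px, convergence_m=2.0))  # at convergence plane
print(disparity_px(4.0, baseline, f_px, convergence_m=2.0))  # behind the screen
print(disparity_px(1.0, baseline, f_px, convergence_m=2.0))  # in front of the screen
```

Doubling the baseline doubles every disparity in the scene, whereas changing `convergence_m` only shifts the whole depth range relative to the screen plane, which mirrors the division of labour described above.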
Apart from the aforementioned settings, several other well-known types of distortion/artefact (not necessarily specific to stereoscopic video acquisition systems) can arise unless the stereoscopic video capture system is well adjusted. These include: blurriness in either view of the pair (caused by loss of focus in either lens), barrel distortion (magnification decreasing with distance from the optical axis), spatial aliasing (caused by insufficient sampling of the acquired 3D video signal), motion blur (appearing on fast-moving objects that the camera cannot map sharply onto the sensor), and unbalanced photometry between the two cameras (mismatches in colour saturation, brightness, contrast, and gamma values).
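As a rough illustration of the last item, a basic photometric balance between the two views can be sketched as a gain/offset mapping that matches the mean and standard deviation of one view's luminance to the other's. The function name and sample pixel values below are assumptions for illustration only, not a method from the text.

```python
import statistics

def match_gain_offset(reference, target):
    """Linearly remap target pixel values so their mean and standard
    deviation match the reference view's (a simple gain/offset balance)."""
    ref_mu, tgt_mu = statistics.mean(reference), statistics.mean(target)
    ref_sd, tgt_sd = statistics.pstdev(reference), statistics.pstdev(target)
    gain = ref_sd / tgt_sd if tgt_sd else 1.0
    return [ref_mu + gain * (p - tgt_mu) for p in target]

left = [100, 120, 140, 160]    # reference (left) view luminance samples
right = [90, 105, 120, 135]    # darker, lower-contrast right view
balanced = match_gain_offset(left, right)
print([round(p) for p in balanced])
```

Real balancing pipelines typically operate per channel and per region rather than globally, but the same idea of aligning first- and second-order statistics underlies many practical colour-matching tools.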
Having explained the types of commonly observed stereoscopic video
artefacts and their causes, one can appreciate that if the rigging and the
calibration of the stereoscopic camera pair are done accurately (i.e. accurate
matching of electronic and optical camera parameters and the adjustment of
the inter-camera baseline according to the depth structure of the captured
scene), the captured stereoscopic video content can be kept as free of artefacts as possible. However, note that in order to bring the captured content up
to entertainment and broadcast quality, usually a set of post-production
tools need to be applied. More details on the post-production tools will be
provided in Section 2.3.