2.2.1 Camera Technologies
Although post-processing can be performed to correct issues that arise
during video capture, selecting the right cameras can reduce the amount of
post-processing required. This in turn results in better quality 3D content.
There is a limited selection of cameras manufactured specifically for 3D
content capture. Broadcast quality 3D cameras are even more difficult to find.
As a result, much 3D video has been produced using standard 2D cameras,
mounted on special rigs. These cameras must be precisely synchronized and
calibrated to ensure accurate 3D rendering for the viewer without causing
visual fatigue.
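As a rough illustration of the synchronization requirement, the sketch below estimates the temporal offset between two cameras by cross-correlating their per-frame mean luminance. This is only an approximate check, not the method used on professional rigs (which typically rely on genlock or timecode); the file names and the use of OpenCV for decoding are assumptions made for the example.

```python
import numpy as np
import cv2  # assumed available for decoding the video files


def mean_luma_signal(path, max_frames=500):
    """Return the per-frame mean luminance of a video as a 1-D signal."""
    cap = cv2.VideoCapture(path)
    signal = []
    while len(signal) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        signal.append(gray.mean())
    cap.release()
    return np.array(signal)


def estimate_frame_offset(left_path, right_path):
    """Estimate the temporal offset (in frames) between two cameras by
    cross-correlating their brightness signals."""
    a = mean_luma_signal(left_path)
    b = mean_luma_signal(right_path)
    n = min(len(a), len(b))
    a, b = a[:n] - a[:n].mean(), b[:n] - b[:n].mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (n - 1)  # positive: left stream lags the right


# Hypothetical file names, for illustration only
offset = estimate_frame_offset("left_camera.mp4", "right_camera.mp4")
print(f"Estimated offset: {offset} frames")
```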
However, recently a number of consumer electronics manufacturers have
introduced 3D-capable camcorders to their product ranges. Panasonic, for
example, have introduced a new lens for their camcorders, which is com-
patible with some of their existing 2D camcorders. The lens projects the two
stereoscopic views onto a single sensor, which means that the camcorder cap-
tures the video in side-by-side format. This is convenient for many consumer
3D displays, which accept side-by-side video as an input. The problem with
this arrangement is that the resolution is compromised. Sony and JVC have
both introduced competing products, which feature two separate sensors,
enabling High Definition capture of stereoscopic video. To make it possible
for consumers to capture 3D video easily, the camcorders try to estimate and
vary the amount of disparity between the two views, depending on the scene
content and the focal length used. Although they are quite effective products
for consumers, they do not allow the amount of control over capture needed
for professional quality 3D filming.
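To make the side-by-side format mentioned above concrete, the following sketch (using NumPy, an assumption about tooling) splits a side-by-side packed frame into its left and right views, showing why horizontal resolution is halved relative to the camcorder's 2D mode.

```python
import numpy as np


def split_side_by_side(frame):
    """Split a side-by-side packed frame into left and right views.

    With a single sensor, each view occupies half the frame width,
    so horizontal resolution is halved compared with the 2D mode.
    """
    h, w, _ = frame.shape
    left = frame[:, : w // 2]
    right = frame[:, w // 2 :]
    return left, right


# A 1920x1080 side-by-side frame yields two 960x1080 views
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
left, right = split_side_by_side(frame)
print(left.shape, right.shape)  # (1080, 960, 3) (1080, 960, 3)
```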
2.2.1.1 Key Requirements for Standard 2D Cameras
Before considering the requirements for the cameras, it is important to know
what will be done with the captured video. For 3D capture, it may be
necessary to use the 2D video to estimate depth (see Section 2.3.2), to
compress multiple camera views jointly, and to render intermediate
viewpoints. Let us consider what each of these implies:
Estimation of depth - the cameras should provide high resolution and good
image quality (low noise, sharp optics), since depth estimation relies on
matching fine detail between the views.
Joint compression of multiple views - the captured video views should be
similar in terms of brightness, contrast, and colour saturation, so that the
difference between views is minimized. This improves coding efficiency when
the views are compressed with MVC (see Section 3.3.2).
Rendering intermediate viewpoints - as above, the video should be similar
in terms of brightness, contrast, and colour saturation. This will prevent
noticeable changes in the picture when the viewpoint is changed; a simple
colour-matching sketch follows this list.
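One common way to meet this similarity requirement is to match the colour statistics of one view to the other before encoding or rendering. The sketch below is a minimal per-channel histogram-matching example in NumPy; it is illustrative only, and the function names are not taken from any particular production tool or from this book.

```python
import numpy as np


def match_channel(source, reference):
    """Remap one colour channel so its histogram matches the reference's."""
    src_values, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True
    )
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source intensity, take the reference intensity at the
    # same cumulative probability.
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    return mapped[src_idx].reshape(source.shape).astype(source.dtype)


def match_view_colours(right_view, left_view):
    """Match the right view's per-channel histograms to the left view's,
    reducing inter-view differences before joint coding or view synthesis."""
    out = np.empty_like(right_view)
    for c in range(right_view.shape[2]):
        out[..., c] = match_channel(right_view[..., c], left_view[..., c])
    return out
```

In practice such a correction would usually be computed per shot rather than per frame, to avoid introducing temporal flicker into the corrected view.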