Chapter 5
Integrated Frameworks for Three-Dimensional Scene Reconstruction
It has been shown in Chaps. 1–4 that the problem of three-dimensional scene reconstruction can be addressed with a variety of approaches. Triangulation-based approaches (cf. Chap. 1) rely on correspondences of points or higher-order features between several images of a scene, acquired either with a moving camera or with several cameras from different viewpoints. These methods are accurate and do not require a priori knowledge about the scene or the cameras used. On the contrary, as long as the scene points are suitably distributed, they yield not only the scene structure but also the intrinsic and extrinsic camera parameters, i.e. they perform a camera calibration simultaneously with the scene reconstruction. Triangulation-based approaches, however, are restricted to parts of the scene with enough texture to decide which part of one image corresponds to which part of another. Furthermore, occlusions may hide corresponding points or features in some of the images, the appearance of the objects may change from image to image due to perspective distortions, and for objects with non-Lambertian surface properties the observed pixel grey values may vary strongly from image to image, so that establishing correspondences between images becomes inaccurate or impossible.
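
As an illustration of the triangulation principle, the following minimal sketch recovers a single scene point from one correspondence between two calibrated views using the standard linear (direct linear transformation) method. The function and variable names are chosen for this example only, and the projection matrices are assumed to be known already, i.e. the simultaneous camera calibration mentioned above is outside the scope of the sketch.

    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        # P1, P2: 3x4 projection matrices of the two (calibrated) cameras.
        # x1, x2: corresponding image points (u, v) in the two views.
        # Each correspondence x ~ P X yields two homogeneous linear
        # equations in the unknown scene point X.
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # Least-squares solution of A X = 0: the right singular vector
        # belonging to the smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]    # inhomogeneous 3D coordinates

With more than two views, one pair of equations per view is stacked into A; in practice the linear estimate is usually refined by minimising the reprojection error over all views.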
Intensity-based approaches to three-dimensional scene reconstruction (cf. Chap. 3) exploit the observed reflectance by determining the surface normal for each image pixel. They are especially suited for textureless parts of the scene, but if several images of the scene are available it is also possible to separate texture from shading effects. Their drawbacks are that the reflectance properties of the surfaces under consideration must be known, that the reconstructed scene structure may be ambiguous, especially with respect to its large-scale properties, and that small systematic errors of the estimated surface gradients may accumulate into large depth errors on large scales.
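
For the special case of a Lambertian surface illuminated successively by several distant point light sources, the per-pixel surface normal can be obtained by classical photometric stereo, as in the following minimal sketch. The intensity-based methods of Chap. 3 are more general; all names here are illustrative, and at least three known illumination directions are assumed.

    import numpy as np

    def photometric_stereo(images, light_dirs):
        # images:     array (k, h, w) of grey-value images, k >= 3, each
        #             taken under one distant point light source.
        # light_dirs: array (k, 3) of unit illumination direction vectors.
        k, h, w = images.shape
        I = images.reshape(k, -1)
        # Lambertian model: I = rho * (s . n). Solve S g = I with
        # g = rho * n for all pixels simultaneously (least squares).
        g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)
        albedo = np.linalg.norm(g, axis=0)
        normals = (g / np.maximum(albedo, 1e-12)).T.reshape(h, w, 3)
        return normals, albedo.reshape(h, w)

Integrating the gradients implied by the recovered normals yields a depth map, which is precisely where the small systematic normal errors mentioned above can accumulate into large depth errors.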
Point spread function (PSF)-based approaches (cf. Chap. 4) directly estimate the depth of scene points based on several images acquired at different focus settings. While the depth from focus method determines depth values based on the configuration of best focus, for the depth from defocus method the problem of depth estimation reduces to an estimation of the PSF difference between images. Depth from defocus is easily applied, and no a priori knowledge about the scene is required.
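
A minimal sketch of the depth from focus idea, assuming a registered focal stack and a known in-focus depth for each focus setting, is given below. The locally averaged squared Laplacian serves as the focus measure here; this is an illustrative choice, not necessarily the measure used in Chap. 4.

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def depth_from_focus(focal_stack, depths):
        # focal_stack: array (k, h, w), one image per focus setting.
        # depths:      length-k array with the scene depth rendered
        #              sharp by each focus setting.
        # Per-pixel sharpness: locally averaged squared Laplacian.
        sharpness = np.stack([
            uniform_filter(laplace(img.astype(float)) ** 2, size=9)
            for img in focal_stack
        ])
        best = np.argmax(sharpness, axis=0)   # sharpest setting per pixel
        return np.asarray(depths)[best]       # per-pixel depth map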