the PSF much better than the relation inferred from geometric optics. A general
property of the depth from defocus approach is that it yields dense but fairly inac-
curate and noisy depth maps. It has been demonstrated analytically that depth from
defocus should be preferentially utilised in close-range scenarios. A further class of
PSF-based methods, depth from focus techniques, search for the point of best focus
by moving the camera or the object and are thus accurate but slow.
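The depth from focus principle just described can be sketched in a few lines, assuming a focal stack of registered images taken at known focus settings. The focus measure used here, a windowed mean of the squared Laplacian response, is one common choice among many and is not necessarily the one used in the work discussed; for each pixel, the focus setting maximising the measure is taken as the depth estimate.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack, positions, window=9):
    """For each pixel, pick the focus setting whose image is locally sharpest.

    stack:     array of shape (n, h, w), images taken at n focus settings
    positions: length-n sequence of the corresponding focus distances
    """
    positions = np.asarray(positions, dtype=float)
    # Local sharpness: windowed mean of the squared Laplacian response.
    sharpness = np.stack(
        [uniform_filter(laplace(img.astype(float)) ** 2, window) for img in stack]
    )
    best = np.argmax(sharpness, axis=0)   # index of best focus per pixel
    return positions[best]                # dense depth map
```

Because every pixel is assigned the position of one of the captured images, the depth resolution is bounded by the focus sampling, which is why such methods are accurate only when many focus settings are traversed, and correspondingly slow.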
The described classes of three-dimensional reconstruction methods all have their
specific advantages and drawbacks. Some of the techniques have complementary
properties—triangulation-based methods determine three-dimensional point clouds
describing textured parts of the scene, while intensity-based methods may be able to
reconstruct textureless regions. Hence, it is favourable for computer vision systems
to integrate different three-dimensional scene reconstruction methods into a unify-
ing framework. The first described integrated approach combines structure from mo-
tion and depth from defocus and yields a three-dimensional point cloud of the scene
along with the absolute scaling factor without the need for a priori knowledge about
the scene or the camera motion. Several quantities that influence the accuracy of this
approach, such as pixel noise, the nonlinearity of the depth-defocus function, and
temperature effects, are discussed. Another integrated approach combines shadow
and shading features for three-dimensional surface reconstruction, alleviating the
ambiguity of the shape from shading solution. The shape from photopolarimetric
reflectance and depth method integrates photopolarimetric information with depth
information that can in principle be obtained from arbitrary sources. In this context,
depth from defocus information can be favourably used to determine the large-scale
properties of the surface, to appropriately initialise the surface gradients, and to es-
timate the surface albedo. Sparse depth information is incorporated by transforming
it into dense depth difference information, such that the three-dimensional recon-
struction accuracy is significantly increased, especially on large scales. The shape
from photopolarimetric reflectance and depth method has been extended to an iter-
ative scheme for stereo image analysis of non-Lambertian surfaces. This approach
overcomes the general drawback of classical stereo approaches, which implicitly
assume a Lambertian surface reflectance when establishing point correspondences
between images. Disparity estimation is performed based on a comparison between
the observation and the surface model, leading to a refined disparity map with a
strongly reduced number of outliers. Furthermore, the combination of active range
scanning data with photometric image information has been demonstrated to per-
form a three-dimensional surface reconstruction at high lateral and depth resolution.
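The Lambertian assumption underlying classical correspondence search, mentioned above as the main drawback of standard stereo approaches, can be made concrete with a minimal one-dimensional block-matching sketch. The code below is a textbook sum-of-squared-differences baseline, not the model-based disparity refinement described in this work: it compares pixel intensities directly, which is exactly the step that breaks down on specular, non-Lambertian surfaces.

```python
import numpy as np

def block_match_1d(left, right, max_disp, radius=3):
    """SSD block matching along one scanline: for each pixel in `left`,
    find the shift into `right` with the lowest sum of squared differences.
    Implicitly Lambertian: corresponding pixels are assumed to have equal
    intensities regardless of viewing direction."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(radius + max_disp, n - radius):
        patch = left[x - radius:x + radius + 1]
        costs = [np.sum((patch - right[x - d - radius:x - d + radius + 1]) ** 2)
                 for d in range(max_disp + 1)]
        disp[x] = int(np.argmin(costs))
    return disp
```

On a specular surface the intensity of a corresponding point changes with viewpoint, so the SSD minimum no longer marks the true correspondence; comparing the observation against a reflectance-aware surface model, as in the approach above, avoids this failure mode.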
Another integrated approach has been introduced to address the problem of monoc-
ular three-dimensional pose refinement of rigid objects based on photopolarimetric,
edge, and depth from defocus information. It has been demonstrated that the combination of various monocular cues allows one to determine all six pose parameters of a rigid object with high accuracy.
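Among the integrated approaches above, the scale recovery in the combined structure from motion and depth from defocus method can be illustrated with a small least-squares sketch. This is an assumed estimator for illustration, not necessarily the one used in this work: structure from motion yields depths up to an unknown global scale, depth from defocus yields noisy but absolute depths at the same points, and a single scale factor is fitted between the two.

```python
import numpy as np

def absolute_scale(z_sfm, z_dfd):
    """Estimate the scale factor mapping relative structure-from-motion
    depths z_sfm onto noisy absolute depth-from-defocus depths z_dfd,
    minimising sum((s * z_sfm - z_dfd)**2) over s (closed form)."""
    z_sfm = np.asarray(z_sfm, dtype=float)
    z_dfd = np.asarray(z_dfd, dtype=float)
    return float(z_dfd @ z_sfm / (z_sfm @ z_sfm))
```

Because the noisy defocus depths enter only through an average over many points, the recovered scale factor can be considerably more accurate than any individual depth from defocus estimate, which matches the observation above that depth from defocus yields dense but noisy measurements.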
The second part of this work has addressed several scenarios in which three-dimensional computer vision methods are favourably applied. The first application scenario considered is quality inspection of industrial parts. For the three-dimensional
pose estimation of rigid parts, the proposed combined methods have turned out to