correspondingly defined error function. The observed data are compared to their rendered
counterparts, where an accurate rendering of intensity and polarisation images is
performed based on the material-specific reflectance functions determined with a
goniometer. If a certain cue cannot be reliably measured or does not yield useful
information, it can be neglected in the optimisation procedure. Beyond depth from
defocus, our pose estimation framework is in principle open to depth data obtained,
e.g., by active range measurement. The inferred pose refinement accuracy is com-
parable to or higher than that of the monocular template matching approach by von
Bank et al. (2003) analysed in Sect. 6.1.1, which relies exclusively on edge informa-
tion. This result is achieved despite the fact that our method additionally provides
an estimate of the distance to the object, whereas the method of von Bank et al. (2003)
assumes that the object distance is known.
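The render-and-compare scheme described above can be illustrated with a minimal sketch. The example below is a deliberately simplified 2-D analogue, not the system itself: the "rendering" is just a rigid transform of model points (rather than a physically based intensity and polarisation rendering), and the error function is a plain sum of squared differences minimised by finite-difference gradient descent. All function names and values are hypothetical.

```python
import numpy as np

def render(model, pose):
    """Toy 'rendering': apply a planar pose (tx, ty, theta) to centred model points."""
    tx, ty, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return model @ R.T + np.array([tx, ty])

def error(pose, model, observed):
    """Sum of squared differences between rendered and observed data."""
    return np.sum((render(model, pose) - observed) ** 2)

def refine_pose(model, observed, init, lr=0.05, iters=2000, eps=1e-6):
    """Minimise the error function by finite-difference gradient descent."""
    pose = np.asarray(init, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(pose)
        for i in range(pose.size):
            d = np.zeros_like(pose)
            d[i] = eps
            grad[i] = (error(pose + d, model, observed)
                       - error(pose - d, model, observed)) / (2 * eps)
        pose -= lr * grad
    return pose

# Centred model points (corners of a unit square).
model = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
model -= model.mean(axis=0)

true_pose = np.array([0.30, -0.20, 0.10])   # tx, ty, theta [rad]
observed = render(model, true_pose)          # "observed data"

estimate = refine_pose(model, observed, init=(0.0, 0.0, 0.0))
```

In the full framework, unreliable cues would simply be dropped from the error sum, which the additive structure of `error` makes straightforward.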
The depth from defocus method has turned out to be a useful instrument for the
estimation of object depth at close range, with an accuracy of about 1%. We have
demonstrated the usefulness of our method under conditions typically encountered
in industrial quality inspection scenarios, such as the assembly of complex parts.
Here, the desired pose of the whole workpiece or part of it is given by the CAD
data, and the inspection system has to detect small differences between the actual
and the desired pose.
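Such an inspection step amounts to comparing the measured pose against the desired pose from the CAD data. A minimal sketch of this check is given below; the deviation is expressed as the relative rotation angle and the translation distance, and the tolerance values and all names are hypothetical, chosen only for illustration.

```python
import numpy as np

def pose_deviation(R_actual, t_actual, R_desired, t_desired):
    """Rotation angle (degrees) and translation distance between
    an actual and a desired rigid pose (R: 3x3 matrix, t: vector)."""
    R_rel = R_actual @ R_desired.T
    # Angle of the relative rotation from its trace, clipped for safety.
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    dist = np.linalg.norm(np.asarray(t_actual) - np.asarray(t_desired))
    return angle_deg, dist

def rot_z(deg):
    """Rotation about the z axis by `deg` degrees."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Desired pose from CAD data vs. measured pose (values hypothetical, in mm).
R_cad, t_cad = np.eye(3), np.array([0.0, 0.0, 500.0])
R_meas, t_meas = rot_z(2.0), np.array([1.0, 0.0, 503.0])

angle, dist = pose_deviation(R_meas, t_meas, R_cad, t_cad)
# Flag the part if the deviation exceeds the (hypothetical) tolerances.
within_tolerance = angle <= 1.0 and dist <= 2.0
```

Note that at the stated ~1% relative depth accuracy, a depth tolerance much tighter than about 5 mm at a 500 mm working distance could not be verified reliably.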
Comparison with Other Pose Refinement Methods It is again interesting to
compare the accuracy of our method with that achieved by other pose refinement
approaches. For their proposed monocular pose refinement method (cf. Sect. 2.1.1),
Kölzow and Ellenrieder (2003) determine the absolute accuracy of the pose parame-
ters based on synthetic images of the oil cap also regarded by von Bank et al. (2003),
where the background is uniform (the given image dimensions and the known object
size allow one to estimate an approximate pixel scale of 0.7 mm per pixel). Accord-
ingly, the mean rotational accuracy of the method by Kölzow and Ellenrieder (2003)
is better than 1°, the mean translational accuracy is better than 1 mm parallel to the
image plane, and the mean depth accuracy corresponds to 2.6 mm. The standard
deviation, indicating the uncertainty of a single measurement, is better than 2° for
the rotation, better than 1 mm for the translation parallel to the image plane, and
about 4 mm for the depth. Regarding real images of the oil cap with a complex
background, the standard deviations of the estimated pose parameters across subse-
quent images of a sequence are comparable to the standard deviations obtained for
the synthetic images.
The monocular system of Nomura et al. (1996) (cf. Sect. 2.1.1) uses synthetic
edge and intensity images generated based on an object model. At an image res-
olution of about 2 mm per pixel, the rotational accuracy is better than 0.5°, the
translational accuracy parallel to the image plane corresponds to 0.04 mm, and the
accuracy of the estimated depth amounts to 5.19 mm.
The system of Yoon et al. (2003) (cf. Sect. 2.1.1) performs a pose estimation of
industrial parts in real time in stereo images. The objects are located at distances of
600–800 mm, and a resolution of approximately 0.9 mm per pixel can be inferred
from the presented example images. Yoon et al. (2003) report an accuracy of better