We obtain a photorealistic image $R_I(p_{uv}, q_{uv})$ which can be compared with the
input image $I_{uv}$, resulting in the intensity error term

$$e_I = \sum_{u,v} \left[ I_{uv} - R_I(p_{uv}, q_{uv}) \right]^2. \qquad (5.43)$$
The summation is carried out for the rendered pixels representing the object surface.
A disadvantage of the technique proposed by Decaudin (1996) is that no
shadow information is generated for the scene. Shadows are therefore computed in an
additional ray tracing step after the photorealistic rendering process.
Furthermore, we introduce an analogous error term $e_\Phi$ taking into account the
polarisation angle $\Phi_{uv}$ of the light reflected from the object surface. We utilise
the polarisation reflectance function $R_\Phi(p_{uv}, q_{uv})$ according to (3.60) as defined
in Sect. 3.4.2 with empirically determined parameters. The renderer then predicts
the polarisation angle for each pixel, resulting in the error term

$$e_\Phi = \sum_{u,v} \left[ \Phi_{uv} - R_\Phi(p_{uv}, q_{uv}) \right]^2. \qquad (5.44)$$
In principle, a further error term based on the polarisation degree might be introduced
at this point. However, in all our experiments we found that the polarisation
degree is an unreliable feature with respect to three-dimensional pose estimation
of objects with realistic surfaces, as it depends more strongly on small-scale vari-
ations of the microscale roughness of the surface than on the surface orientation
itself.
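The two error terms (5.43) and (5.44) share the same structure: a sum of squared differences between an observed quantity and the corresponding rendered prediction, evaluated over the pixels representing the object surface. The following minimal NumPy sketch illustrates this evaluation; the function and argument names and the explicit foreground mask are illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

def squared_error(observed, rendered, object_mask):
    """Masked sum of squared differences, as used for e_I in (5.43)
    (observed intensity vs. R_I) and for e_Phi in (5.44) (observed
    polarisation angle vs. R_Phi). Only pixels belonging to the
    rendered object surface contribute to the sum."""
    diff = observed[object_mask] - rendered[object_mask]
    return float(np.sum(diff ** 2))
```

Here $e_I$ and $e_\Phi$ would be obtained by calling this helper with the intensity images and the polarisation-angle images, respectively.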
5.6.2 Edge Information
To obtain information about edges in the image, we compute a binarised edge image
from the observed intensity image using the Canny edge detector (Canny, 1986). In a
second step, a distance transform image $C_{uv}$ is obtained by computing the chamfer
distance for each pixel (Gavrila and Philomin, 1999). As our approach compares
synthetically generated images with the observed image, we use a modified chamfer
matching technique. The edges in the rendered image are extracted with a Sobel
edge detector, resulting in a Sobel magnitude image $E_{uv}$, which is not binarised.
To obtain an error term which gives information about the quality of the match, a
pixel-wise multiplication of $C_{uv}$ by $E_{uv}$ is performed. The advantage of omitting the
binarisation is the continuous behaviour of the resulting error function with respect
to the pose parameters, which is a favourable property regarding the optimisation
stage. If the edge image extracted from the rendered image were binarised, the error
function would become discontinuous, making the optimisation task more difficult.
Accordingly, the edge error term $e_E$ is defined as

$$e_E = -\sum_{u,v} C_{uv} E_{uv}. \qquad (5.45)$$
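As a hedged illustration of this edge term, the sketch below combines a Canny edge map and distance transform of the observed image with the Sobel magnitude of the rendered image, following the steps described above. It uses OpenCV; the Canny thresholds and the distance-transform mask size are illustrative choices, not values taken from the original work.

```python
import cv2
import numpy as np

def edge_error(I_obs, I_rendered):
    """Chamfer-based edge error term, analogous to e_E in (5.45).
    I_obs, I_rendered: 8-bit grayscale observed and rendered images."""
    # Binarised edge image of the observed image (Canny edge detector).
    edges_obs = cv2.Canny(I_obs, 50, 150)

    # Distance transform C_uv: distance of each pixel to the nearest
    # observed edge (edges must be the zero pixels, hence the inversion).
    C = cv2.distanceTransform(cv2.bitwise_not(edges_obs), cv2.DIST_L2, 3)

    # Non-binarised Sobel magnitude image E_uv of the rendered image.
    gx = cv2.Sobel(I_rendered, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(I_rendered, cv2.CV_32F, 0, 1, ksize=3)
    E = np.sqrt(gx ** 2 + gy ** 2)

    # Negative sum of the pixel-wise products, as in (5.45).
    return float(-np.sum(C * E))
```

Because the Sobel magnitude is not binarised, small changes in the pose parameters change $E_{uv}$ smoothly, which keeps $\sum_{u,v} C_{uv} E_{uv}$ continuous, the property exploited in the optimisation stage.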