2.2.3.3 The Multiocular Extension of the CCD Algorithm
The multiocular extension of the CCD algorithm relies on the projection of the boundary of a three-dimensional contour model into each image. The intrinsic and extrinsic parameters of the camera model (Bouguet, 2007) are obtained by multiocular camera calibration (Krüger et al., 2004). An arbitrary number of images N_c can be used for this projection. The input values of the MOCCD algorithm are N_c images and the Gaussian a priori distribution p(T) = p(T | m_T, Σ_T) of the model parameters T, which define the three-dimensional object model. To achieve a more robust segmentation, the input image I_{c,t} of the MOCCD algorithm is computed using (2.11) and the original camera images I_{c,t} with c ∈ {1, …, N_c} at the time steps t and (t − 1). Before the first iteration, the MOCCD algorithm is initialised by setting the mean vector and covariance matrix to be optimised, (m_T, Σ_T), to the given a priori density parameters. The MOCCD algorithm then consists of three steps.
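The initialisation described above amounts to copying the a priori density parameters into the state that the subsequent steps optimise. A minimal sketch, with hypothetical names and a hypothetical 9-parameter prior (the book does not prescribe this interface):

```python
import numpy as np

def initialise_moccd(prior_mean, prior_cov):
    """Before the first iteration, set the mean vector and covariance
    matrix (m_T, Sigma_T) to be optimised to the given a priori
    density parameters of the Gaussian prior p(T).
    Function and variable names are illustrative assumptions."""
    m_T = np.asarray(prior_mean, dtype=float).copy()
    Sigma_T = np.asarray(prior_cov, dtype=float).copy()
    return m_T, Sigma_T

# hypothetical prior over a 9-dimensional model parameter vector T
prior_mean = np.zeros(9)
prior_cov = 0.05 * np.eye(9)
m_T, Sigma_T = initialise_moccd(prior_mean, prior_cov)
```

The copies matter: the optimisation refines (m_T, Σ_T) in place over the iterations, while the prior parameters must stay untouched because they re-enter the MAP objective in step 3.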
Step 1: Extraction and Projection of the Three-Dimensional Model The MOCCD algorithm extends the CCD algorithm to multiple calibrated cameras by projecting the boundary of a three-dimensional model into each camera image I_c, using the intrinsic and extrinsic camera parameters. The algorithm therefore requires the extraction and projection of the outline of the three-dimensional model used.
A three-dimensional hand-forearm model (cf. Sect. 2.2.3.1) is fitted to the images of a trinocular camera. The outline of our three-dimensional model in each camera coordinate system is extracted by computing a vector from the origin of each camera coordinate system to the point in the wrist, e.g. C_1 p_2 for camera 1. This vector and the direction vector p_1 p_2 of the forearm span a plane. The normal vector of this plane is intersected with the three-dimensional model to yield the three-dimensional outline observed from the camera viewpoint. The extracted three-dimensional contour model for the given camera, which consists of 13 points, is projected into the pixel coordinate system of the camera. The corresponding two-dimensional contour model is computed by an Akima interpolation (Akima, 1970) along the curve with the 13 projected points as control points. Figure 2.10 depicts the extraction and projection of the three-dimensional contour model for camera 1.
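The projection and interpolation part of step 1 can be sketched as a pinhole projection of the 13 extracted contour points, followed by an Akima interpolation through the projected control points. This is a simplified sketch, not the book's implementation: a plain pinhole model stands in for the Bouguet camera model (no distortion), the chord-length parameterisation of the curve is an assumption, and all names are illustrative.

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

def project_contour(points_3d, K, R, t):
    """Project 3D contour points (N x 3, world coordinates) into the
    pixel coordinate system of one camera via the pinhole model.
    K: 3x3 intrinsic matrix; R, t: extrinsic rotation and translation."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera coordinates
    pix = K @ cam                             # camera -> homogeneous pixels
    return (pix[:2] / pix[2]).T               # perspective division, N x 2

def interpolate_outline(control_points_2d, samples=100):
    """Akima interpolation along the curve through the projected control
    points, parameterised by cumulative chord length (an assumption)."""
    d = np.linalg.norm(np.diff(control_points_2d, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(d)])          # strictly increasing
    fx = Akima1DInterpolator(s, control_points_2d[:, 0])
    fy = Akima1DInterpolator(s, control_points_2d[:, 1])
    u = np.linspace(0.0, s[-1], samples)
    return np.stack([fx(u), fy(u)], axis=1)            # samples x 2
```

Akima interpolation is a natural choice here because, unlike a global cubic spline, it reacts only locally to each control point and does not overshoot between the 13 projected points.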
Step 2: Learning Local Probability Distributions from all N_c Images For all N_c camera images I_c, the local probability distributions S_c(m_T, Σ_T) on both sides of the curve are computed. This step is similar to step 1 of the CCD algorithm; the only difference is that the probability distributions S_c(m_T, Σ_T) are learned for all N_c camera images I_c.
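The idea behind step 2, for a single grey-value image, can be sketched as follows: at each curve point, sample pixels along the curve normal and learn a local mean and variance on each side. The actual CCD statistics S_c(m_T, Σ_T) are windowed and locally weighted and thus more elaborate than this; the probe length and all names here are assumptions.

```python
import numpy as np

def local_statistics(image, curve, normals, probe=8):
    """For each curve point, sample grey values along the unit normal on
    both sides of the curve and fit a local Gaussian (mean, variance)
    per side -- a simplified stand-in for the CCD's local statistics."""
    stats = []
    h, w = image.shape[:2]
    for (x, y), (nx, ny) in zip(curve, normals):
        sides = []
        for sign in (+1.0, -1.0):          # side A / side B of the curve
            vals = []
            for d in range(1, probe + 1):
                px = int(round(x + sign * d * nx))
                py = int(round(y + sign * d * ny))
                if 0 <= px < w and 0 <= py < h:
                    vals.append(float(image[py, px]))
            vals = np.asarray(vals)
            sides.append((vals.mean(), vals.var()))
        stats.append(sides)
    return stats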
Step 3: Refinement of the Estimate (MAP Estimation) The curve density parameters (m_T, Σ_T) are refined towards the maximum of (2.30) by performing a
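The refinement of (m_T, Σ_T) towards the posterior maximum can be illustrated by a Newton-type MAP update that combines the gradient and Hessian of the image log-likelihood with the Gaussian prior. This is a hedged sketch of one such step, not the exact update rule of (2.30); all names are assumptions.

```python
import numpy as np

def map_refinement_step(m_T, grad_loglik, hess_loglik, prior_mean, prior_cov):
    """One Newton-type MAP update of the mean vector m_T, given the
    gradient and Hessian of the image log-likelihood at m_T and the
    Gaussian prior N(prior_mean, prior_cov). Sketch only."""
    prior_prec = np.linalg.inv(prior_cov)
    # gradient of the log-posterior: likelihood term + Gaussian prior term
    g = grad_loglik - prior_prec @ (m_T - prior_mean)
    # negative Hessian of the log-posterior (assumed positive definite)
    H = -hess_loglik + prior_prec
    m_new = m_T + np.linalg.solve(H, g)
    Sigma_new = np.linalg.inv(H)   # Laplace approximation of the covariance
    return m_new, Sigma_new
```

For a Gaussian likelihood the log-posterior is quadratic, so a single such step lands exactly on the MAP estimate; in general the step is iterated.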