step is required. The three-dimensional model according to Sect. 2.2.3.1 is adapted to the motion-attributed three-dimensional point cloud based on the ICP-like optimisation technique described in Sect. 2.3.3. Prediction of the three-dimensional pose to the next time step is performed using the ICP-based motion analysis technique described in Sect. 2.3.3.2.
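For illustration only, the following is a minimal point-to-point ICP sketch in Python (NumPy/SciPy). It aligns a rigid point set to a point cloud, whereas the ICP-like optimisation of Sect. 2.3.3 adapts the articulated hand-forearm model; all function names here are hypothetical.

```python
# Minimal point-to-point ICP sketch (illustrative assumption, not the
# book's ICP-like model adaptation of Sect. 2.3.3).
import numpy as np
from scipy.spatial import cKDTree

def icp_step(model_pts, cloud_pts):
    """One ICP iteration: match each model point to its nearest cloud
    point and compute the rigid transform (R, t) aligning them."""
    tree = cKDTree(cloud_pts)
    _, idx = tree.query(model_pts)            # nearest-neighbour correspondences
    matched = cloud_pts[idx]

    mu_m, mu_c = model_pts.mean(axis=0), matched.mean(axis=0)
    H = (model_pts - mu_m).T @ (matched - mu_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_c - R @ mu_m
    return R, t

def icp(model_pts, cloud_pts, n_iter=30):
    """Iterate the alignment until the model is adapted to the cloud."""
    R_total, t_total = np.eye(3), np.zeros(3)
    pts = model_pts.copy()
    for _ in range(n_iter):
        R, t = icp_step(pts, cloud_pts)
        pts = pts @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```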
In contrast to the evaluation of this method in Sect. 7.3, which relies on the space-time stereo approach of Schmidt et al. (2007) (cf. Sect. 1.5.2.5), in this section a combination of dense optical flow information determined based on the method of Wedel et al. (2008a) and sparse stereo information obtained with the correlation-based approach of Franke and Joos (2000) is used. The flow vectors are two-dimensional, such that, in contrast to a full scene flow computation as performed e.g. by Huguet and Devernay (2007) or Wedel et al. (2011), the velocity component parallel to the depth axis is missing. We favour the direct combination of optical flow and stereo due to its high computational efficiency. In principle, motion parallel to the depth axis can be estimated using the stereo information of two consecutive frames and the corresponding optical flow fields, based on the disparity difference between the two points connected by an optical flow vector. This information, however, turned out to be rather unreliable due to the disparity noise, which is of the order of 0.2 pixel. Hence, motion along the depth axis will be recovered on the object level using the SF technique (cf. configuration 5).
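As a rough illustration of why the disparity-based depth velocity is noise-sensitive, the sketch below computes the depth change along a flow vector from the stereo equation Z = bf/d and propagates the disparity noise to first order; the camera parameters and function names are purely illustrative assumptions, not values from the evaluation.

```python
# Depth velocity from the disparity difference along an optical flow
# vector, plus first-order noise propagation (illustrative sketch).
import numpy as np

def depth_velocity(d1, d2, b, f, dt):
    """Velocity along the depth axis: a pixel with disparity d1 is
    connected by a flow vector to a pixel with disparity d2 in the
    next frame; b is the baseline, f the focal length in pixels."""
    z1 = b * f / d1
    z2 = b * f / d2
    return (z2 - z1) / dt

def depth_velocity_sigma(d, b, f, dt, sigma_d=0.2):
    """First-order error propagation: a disparity noise of sigma_d
    per frame yields a depth-velocity uncertainty of roughly
    sqrt(2) * (b*f / d**2) * sigma_d / dt, large for small disparities."""
    return np.sqrt(2.0) * (b * f / d**2) * sigma_d / dt
```

With illustrative values of b = 0.3 m, f = 800 pixels, d = 20 pixels, and dt = 0.04 s, a disparity noise of 0.2 pixel already corresponds to a depth-velocity uncertainty of several metres per second, which illustrates why this component is instead recovered on the object level.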
Configuration 4: Fusion of ICP and MOCCD This configuration corresponds to the fusion approach described in Sect. 7.4.3, where the three-dimensional pose is predicted based on the ICP approach but without using the SF algorithm.
Configuration 5: Fusion of ICP, MOCCD, and SF This configuration corresponds to the fusion approach described in Sect. 7.4.3, where the SF algorithm is used for predicting the three-dimensional pose.
7.4.5 Evaluation Results
The three-dimensional pose estimation accuracy of the described configurations is quantified by the average Euclidean distances between the measured three-dimensional positions of the reference points and the corresponding ground truth data, along with the standard deviations obtained for each sequence. The average Euclidean distances can largely be attributed to inaccurate modelling of the hand-forearm limb, as identical model parameters are used for all test persons, while the standard deviations mainly result from the inaccuracy of the pose estimation itself. The estimated temporal derivatives of the pose parameters are evaluated by determining the mean error and standard deviation of the error for the three velocity components of each reference point, using the discrete temporal derivatives of the ground truth positions as ground truth data for the velocity components.
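A minimal sketch of these evaluation measures, assuming per-sequence arrays of estimated and ground-truth reference-point positions (the function names and array layout are assumptions, not the authors' evaluation code):

```python
# Per-sequence error statistics for one reference point (illustrative).
import numpy as np

def pose_error_stats(est_pts, gt_pts):
    """est_pts, gt_pts: arrays of shape (T, 3).
    Returns the mean and standard deviation of the Euclidean distance
    between estimated and ground-truth positions."""
    dist = np.linalg.norm(est_pts - gt_pts, axis=1)
    return dist.mean(), dist.std()

def velocity_error_stats(est_vel, gt_pts, dt):
    """est_vel: estimated velocities of shape (T-1, 3). The ground-truth
    velocities are taken as the discrete temporal derivative of the
    ground-truth positions, as described above."""
    gt_vel = np.diff(gt_pts, axis=0) / dt
    err = est_vel - gt_vel
    return err.mean(axis=0), err.std(axis=0)   # per velocity component
```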
According to Fig. 7.11, the positional error of configuration 1 (MOCCD algorithm) with respect to the average Euclidean distance between measured and ground