Finally, we also detect and remove some impossible points. A likely scenario is a point that is visible from a camera in scene 1 with a certain feature point index, and visible from the same camera in scene 2 (offset by the overlap O) but with a different feature point index. This indicates that some views of this point are incorrect. Another possibility is a triangular match, where a point in scene 1 matches one point in scene 2 from one camera view and a different point in scene 2 from another camera view. When this happens, we remove all involved points from the combined scene, as we cannot be sure which points and matches are correct.
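To make these two checks concrete, the sketch below shows one possible way to flag such points when merging two scenes. The data layout (dictionaries of {point: {camera: feature index}}), the match list, and all function and variable names are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

def impossible_points(tracks1, tracks2, matches):
    """tracks1/tracks2: {point_id: {camera_id: feature_index}}, with scene-2
    camera ids already shifted by the overlap O so that shared cameras carry
    the same id in both scenes (assumed layout).
    matches: list of (p1, p2) pairs linking a scene-1 point to a scene-2 point.
    Returns the set of (scene, point_id) entries to drop from the combined scene."""
    bad = set()

    # Check 1: a matched pair is observed by the same camera in both scenes
    # but under different feature point indices -> at least one view is wrong.
    for p1, p2 in matches:
        shared_cams = tracks1[p1].keys() & tracks2[p2].keys()
        if any(tracks1[p1][cam] != tracks2[p2][cam] for cam in shared_cams):
            bad.add((1, p1))
            bad.add((2, p2))

    # Check 2 ("triangular match"): one scene-1 point is linked to two
    # different scene-2 points through different camera views -> ambiguous,
    # so all involved points are dropped.
    partners = defaultdict(set)
    for p1, p2 in matches:
        partners[p1].add(p2)
    for p1, p2_set in partners.items():
        if len(p2_set) > 1:
            bad.add((1, p1))
            bad.update((2, p2) for p2 in p2_set)

    return bad
```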
4 Evaluation

4.1 Effective Range
Our proposed method is only effective beyond a certain size of the global scene. Due to the overhead incurred at the start of each bundle adjustment, subdividing can be slower than running the bundle adjustment on the complete scene when that scene is small. Furthermore, for small scenes the incremental addition of images to the reconstructed scene is approximately linear in time, so there is no benefit in splitting it. Figure 3 illustrates this, showing the time required to reconstruct a scene of 629 images on an Intel Core 2 Duo T9300 CPU. The final scene contains 479,212 points and took over 18 hours to compute. When fewer than 200 images are included in the incremental optimization, the time required to add more images is relatively small. However, once the scene includes more than 400 images, adding additional views and points becomes very expensive. The reason is that the running time of the bundle adjustment grows quadratically with the number of 3D points in the scene, which is linked to the number of views. Starting the Levenberg-Marquardt algorithm close to the optimum reduces the number of iterations, but does not reduce the time required for one iteration. As long as the scene is small, the overhead of starting each bundle adjustment run is larger than the actual time spent optimizing, resulting in linear behaviour. By partitioning into subscenes whose sizes fall inside this linear zone, our method has an approximate complexity of O(N) instead of O(N²).
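This complexity argument can be summarized with a simple cost model. The sketch below is purely illustrative: the functional form, the constants, and all names are assumptions made for exposition, not values measured in Figure 3.

```python
# Back-of-the-envelope cost model for the partitioning argument above.
# `overhead` and `c` are assumed illustrative constants, not measured values.

def full_cost(n_points, overhead=1.0, c=1e-6):
    """Optimizing one scene: fixed start-up overhead plus a term that grows
    quadratically with the number of 3D points."""
    return overhead + c * n_points ** 2

def partitioned_cost(n_points, subscene_size, overhead=1.0, c=1e-6):
    """Splitting into subscenes small enough to stay in the 'linear zone':
    each subscene has a bounded cost, so the total grows with the number of
    subscenes, i.e. linearly in n_points for a fixed subscene_size."""
    n_subscenes = -(-n_points // subscene_size)   # ceiling division
    return n_subscenes * full_cost(subscene_size, overhead, c)
```

For a fixed subscene size S inside the linear zone, partitioned_cost(N, S) is roughly (N/S) times a constant, i.e. O(N), whereas full_cost(N) grows as O(N²).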
4.2 Influence of Subscene Size S
As our method focuses on improving computation time and does not address the accuracy of the result, we want to minimize the difference between the point clouds generated by state-of-the-art methods and by our proposed method. The logical choice of method to compare against is the output generated by the method from [12]. Table 1 presents an evaluation of the influence of the partitioning size S for a static overlap size O = 10. The columns presented are as follows. First, time subset, in seconds, is the time required to optimize the slowest (i.e. most difficult) subscene; this would be the total running time if the reconstruction ran completely in parallel. Next is time total, which is the optimization