moderate level of invariance to illumination changes and perspective distortion;
availability of point matching algorithms based on extracted descriptors, facilitating automatic camera pose initialization.
These advantageous properties drove the adoption of point-based tracking algorithms
in a wide range of AR scenarios.
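The descriptor-based matching that underlies such point-based initialization can be sketched as follows. This is a minimal illustrative example, not taken from the source: it brute-force matches binary descriptors (ORB/BRIEF-style) by Hamming distance with a ratio test, using randomly generated toy descriptors. In a real pipeline, the resulting 2D–3D correspondences would then feed a PnP solver to recover the camera pose.

```python
import numpy as np

def match_descriptors(desc_map, desc_frame, ratio=0.8):
    """Brute-force Hamming matching of binary descriptors, with a
    ratio test to reject ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_frame):
        # Hamming distance to every previously mapped descriptor.
        dists = np.count_nonzero(desc_map != d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Keep the match only if the best distance is clearly smaller
        # than the second best (ratio test).
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Toy data (illustrative only): 32-bit descriptors as 0/1 arrays.
rng = np.random.default_rng(0)
desc_map = rng.integers(0, 2, size=(50, 32))
# Frame descriptors: copies of three mapped ones with 1-2 flipped bits,
# simulating slight appearance change between frames.
desc_frame = desc_map[[3, 17, 42]].copy()
desc_frame[0, 0] ^= 1
desc_frame[1, :2] ^= 1

matches = match_descriptors(desc_map, desc_frame)
print(matches)
```

Because the descriptors encode only local texture, a textureless or specular surface yields few distinctive descriptors and this matching step degrades, which is exactly the failure mode discussed next.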
However, point-based approaches do not perform well on sparsely textured or specular objects. Further, strong illumination changes can prevent correspondences from being correctly established between previously mapped feature points and currently observed points. Moreover, any significant change in texture will also cause the correspondence search to fail. The reason is that feature point descriptors most often rely exclusively on the surrounding texture in the image.
To overcome these problems, edge-based approaches can be utilized for camera pose initialization and tracking. The benefit of using edges is their stability in the image under illumination changes, specular reflections, and texture changes, provided that edges originating from textures are used for neither initialization nor tracking.
Next, we provide details on an edge-based camera pose initialization use case. The goal of this algorithm is to establish the pose of the camera with respect to a known 3D object. An accurate edge model of the 3D object must be provided, containing the most distinctive 3D lines of the object that are visible in the images. Further, a surface model of the 3D object is also necessary. Because simple 3D lines carry little information, compared to the rich texture descriptions available in the state of the art, it is often necessary to provide an approximate initial pose of the camera. For example, in the outdoor case, the initial pose can be computed from GPS, compass, and gyroscope readings, assuming these sensors are available. Once the camera pose is successfully initialized, it is possible to proceed with camera pose tracking with respect to the 3D object. Camera pose tracking can be performed using only edges, only feature points, or by combining both information cues in a hybrid approach. Further, tracking can be performed on a single-frame basis, with or without bundle adjustment of camera poses and mapped edge or point features. Alternatively, it can be performed in a filter-based SLAM framework.
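The core geometric step of edge-based tracking can be sketched as follows. This is a simplified illustration under assumed values (identity pose, toy intrinsics, hypothetical observed edge points), not the source's implementation: a 3D model edge is projected into the image with the current pose estimate, and the perpendicular distances from observed edge points to the projected line form the residuals that a tracker would minimize over the pose parameters, e.g. with Gauss-Newton.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N,3) into the image with pinhole
    intrinsics K and camera pose (R, t), world -> camera."""
    Xc = (R @ X.T).T + t          # transform to camera frame
    x = (K @ Xc.T).T              # apply intrinsics
    return x[:, :2] / x[:, 2:3]   # perspective division

def edge_residuals(p0, p1, observed):
    """Signed distances of observed edge points to the projected model
    line through p0, p1, measured along the line normal. An edge-based
    tracker minimizes these residuals over the pose parameters."""
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal
    return (observed - p0) @ n

# Toy setup (all values illustrative): identity rotation, camera 5 m back.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 5.])
line3d = np.array([[-1., 0., 0.], [1., 0., 0.]])   # model edge endpoints

p = project(K, R, t, line3d)
# Hypothetical observed edge points, offset 2 px along the line normal.
obs = np.array([[120., 238.], [320., 238.], [520., 238.]])
res = edge_residuals(p[0], p[1], obs)
print(p, res)
```

In practice the observed points are found by searching along the projected edge's normal for image gradient maxima, one search per sample point on the edge.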
The main challenge inherent to state-of-the-art edge-based approaches is the limited convergence basin of the camera pose estimation. This implies that the correct camera pose can be computed only when the initial pose is sufficiently close to the true one. The maximal pose difference that still allows accurate camera pose initialization depends mostly on the appearance characteristics of the 3D model, the edge-model accuracy, and its "distinctiveness". This leads to another disadvantage of edge-based approaches, which is strict edge-model requirements. First, extracted edges should be visible in the image across a wide range of illumination intensities, and also from a wide range of viewpoints. Further, since the majority of edge-based approaches determine correspondences between edges based on the image gradient magnitude and orientation, extracted edges should preferably be unique with regard to their spatial surroundings.
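The gradient magnitude and orientation test used for such edge correspondences can be sketched as follows. This is an illustrative example, not the source's code: gradients are computed with simple central differences on a toy step-edge image, and a candidate correspondence is accepted only when orientations agree modulo 180 degrees, since an edge's gradient polarity may flip with lighting. The 15-degree tolerance is an assumed value.

```python
import numpy as np

def gradient_mag_ori(img):
    """Gradient magnitude and orientation from central differences,
    as used when matching projected model edges to image edges."""
    gy, gx = np.gradient(img.astype(float))  # row and column gradients
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)                 # radians, in (-pi, pi]
    return mag, ori

def orientation_match(ori_a, ori_b, tol=np.deg2rad(15)):
    """Accept a correspondence only if the gradient orientations agree
    up to 180 degrees (edge polarity may flip with illumination)."""
    d = np.abs(ori_a - ori_b) % np.pi        # fold difference to [0, pi)
    return min(d, np.pi - d) < tol

# Toy image: a vertical step edge, giving a horizontal gradient.
img = np.zeros((5, 5))
img[:, 3:] = 10.0
mag, ori = gradient_mag_ori(img)
print(mag[2, 3], np.rad2deg(ori[2, 3]))
```

Note that magnitude and orientation alone cannot disambiguate between nearby parallel edges, which is why the text stresses that extracted edges should be unique within their spatial surroundings.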
In most cases, an edge-model is extracted using the provided surface model, which