use of ICP for registering LiDAR data with a point cloud generated using multi-view
stereo on a video sequence.
Finally, we note that variations of ICP can handle the problem of registering a tex-
tured LiDAR scan to a camera image taken at a substantially different viewpoint. For
example, Yang et al. [ 561 ] proposed an algorithm that begins by applying 2D ICP to
the camera image and the scanner's co-located RGB image, and then upgrades the
problem to a 2D-3D registration when the correspondences are no longer well mod-
eled by a projective transformation. We can think of this as a resectioning problem
(Section 6.3.1 ) in which the 2D-3D correspondences are iteratively discovered.
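To make the idea of iteratively discovered 2D-3D correspondences concrete, the sketch below shows one way such a loop could be organized. It is not Yang et al.'s algorithm, only a minimal illustration under stated assumptions: OpenCV's solvePnP performs the resectioning step, a k-d tree over detected image features supplies nearest-neighbor correspondences, and the function name icp_resection, the 10-pixel gating threshold, and the fixed iteration count are arbitrary choices for the example.

```python
import numpy as np
import cv2
from scipy.spatial import cKDTree

def icp_resection(points_3d, image_points, K, rvec, tvec,
                  n_iters=20, gate_px=10.0):
    """Alternate between discovering 2D-3D correspondences and resectioning.

    points_3d    : (N, 3) scanned 3D points
    image_points : (M, 2) 2D feature locations detected in the camera image
    K            : (3, 3) camera intrinsic matrix
    rvec, tvec   : initial pose guess (e.g., from the preceding 2D alignment)
    """
    points_3d = np.asarray(points_3d, dtype=np.float64)
    image_points = np.asarray(image_points, dtype=np.float64)
    dist_coeffs = np.zeros(5)              # assume negligible lens distortion
    tree = cKDTree(image_points)
    for _ in range(n_iters):
        # Correspondence step: project the 3D points with the current pose and
        # match each projection to its nearest detected 2D feature.
        proj, _ = cv2.projectPoints(points_3d, rvec, tvec, K, dist_coeffs)
        dists, idx = tree.query(proj.reshape(-1, 2))
        keep = dists < gate_px             # discard implausible matches
        if keep.sum() < 6:
            break
        # Pose step: resection the camera from the current 2D-3D matches.
        ok, rvec, tvec = cv2.solvePnP(points_3d[keep], image_points[idx[keep]],
                                      K, dist_coeffs, rvec, tvec,
                                      useExtrinsicGuess=True)
        if not ok:
            break
    return rvec, tvec
```

Each iteration alternates the two ICP ingredients: re-discover the 2D-3D correspondences under the current pose, then re-estimate the pose from them.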
8.4.3 Multiscan Fusion
After registration, we have two or more scans in the same coordinate system. While
each set of 3D points may have its own triangular mesh inherited from the scanning
process, we now address the problem of obtaining a single, uniformly sampled mesh
representing the entire collection of registered data.
One problem is that in areas where two scans overlap, we have a large number of
redundant points and triangles, as illustrated in Figure 8.36a. Turk and Levoy [ 501 ]
introduced an early mesh zippering approach to address this issue. First, triangles at
the edge of each mesh are iteratively removed until the meshes only slightly overlap
(Figure 8.36b), and new triangles are introduced to bridge the two scans (Figure 8.36c-d).
Each vertex of the mesh is then allowed to move along its estimated normal to
minimize its average distance to the original meshes.
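As a rough illustration of that final refinement step (not Turk and Levoy's implementation), the sketch below treats each original scan as a dense point set, looks up the nearest original point to every zippered vertex with a k-d tree, and moves the vertex along its normal by the average projected offset. The function name, damping step size, and iteration count are placeholders for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_along_normals(vertices, normals, original_scans, n_iters=5, step=0.5):
    """Nudge each zippered vertex along its normal toward the original scans.

    vertices       : (N, 3) vertices of the zippered mesh
    normals        : (N, 3) unit vertex normals
    original_scans : list of (M_k, 3) point arrays, one per registered scan
    """
    v = np.asarray(vertices, dtype=np.float64).copy()
    normals = np.asarray(normals, dtype=np.float64)
    trees = [cKDTree(scan) for scan in original_scans]
    for _ in range(n_iters):
        # Signed offset to each scan's nearest point, projected onto the normal.
        offsets = np.zeros(len(v))
        for tree, scan in zip(trees, original_scans):
            _, idx = tree.query(v)
            offsets += np.einsum('ij,ij->i', scan[idx] - v, normals)
        offsets /= len(original_scans)
        # Move each vertex part of the way along its normal (damped for stability).
        v += step * offsets[:, None] * normals
    return v
```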
One of the most commonly used scan fusion algorithms is the Volumetric Range
Image Processing (VRIP) algorithm proposed by Curless and Levoy [ 112 ]. This
method is volumetric, dividing the environment into equally sized voxels and com-
puting the value of a function f : ℝ³ → ℝ at each voxel. The final merged surface S is
implicitly defined by those points X that satisfy f(X) = 0, i.e., the zero-level set.
We first describe how to form the function f(X) for each voxel X. Each of M registered
range scans is assumed to be triangulated, and the ith scan is associated with a
signed distance function d_i(X) and a weight function w_i(X). The signed distance
function d_i(X) is computed with respect to lines of sight from scanner i. Points on
the triangular mesh have d_i(X) = 0, points in front of the mesh
Figure 8.36. (a) Many redundant points and triangles exist where two registered 3D scans overlap. (b) Overlapping triangles from the edges of the black mesh are removed. (c) New points are introduced at intersections between the black and gray meshes. Shaded parts of the black mesh will be removed. (d) A new triangulation is formed.
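Curless and Levoy combine the per-scan distance and weight functions by a cumulative weighted average, f(X) = Σ_i w_i(X) d_i(X) / Σ_i w_i(X), taken over the scans that observe voxel X. The sketch below is a much-simplified illustration of that fusion step, not VRIP itself: it assumes d_i and w_i have already been sampled onto a common voxel grid (the argument names are placeholders), fills voxels seen by no scan with a constant positive "empty" value rather than performing VRIP's distinct handling of unseen space and hole filling, and extracts the zero-level set with scikit-image's marching cubes.

```python
import numpy as np
from skimage import measure

def fuse_scans(signed_distance_volumes, weight_volumes, empty_value=1.0):
    """Weighted-average fusion of per-scan signed distance volumes, followed by
    zero-level-set extraction of the merged surface.

    signed_distance_volumes : list of (X, Y, Z) arrays, d_i sampled at each voxel
    weight_volumes          : list of (X, Y, Z) arrays, w_i sampled at each voxel
    """
    d = np.stack(signed_distance_volumes).astype(np.float64)   # (M, X, Y, Z)
    w = np.stack(weight_volumes).astype(np.float64)            # (M, X, Y, Z)
    w_sum = w.sum(axis=0)
    seen = w_sum > 0
    # f is the weighted average of the d_i at voxels observed by at least one
    # scan; unseen voxels get a constant positive value (a simplification).
    f = np.full(w_sum.shape, empty_value)
    np.divide((w * d).sum(axis=0), w_sum, out=f, where=seen)
    # The merged surface S is the zero-level set of f (assumes f crosses zero
    # somewhere inside the grid).
    verts, faces, _, _ = measure.marching_cubes(f, level=0.0)
    return verts, faces
```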