Figure 8.29. (a) 3D data is fundamentally represented as a point cloud. (b) The point cloud inherits a mesh from the order of scanning.
Figure 8.29a, a close-up of the LiDAR data from Figure 8.3, illustrates the problem. Instead of having a uniform grid of pixels with associated intensities, we have a nonuniform collection of data points that all look the same.20
However, the 3D point cloud is not totally unstructured; the way in which the data
is acquired usually imposes a mesh. For example, a LiDAR scan inherits a natural
triangulation based on connecting the measurements from adjacent
bins, as
illustrated in Figure 8.29b. Usually we apply a heuristic to ensure that the triangles don't span depth discontinuities; for example, we can remove mesh edges that are longer than some multiple of the median edge length. Such a triangulation also allows us to compute an estimate of the normal n at each point p in the point cloud. The easiest ways to compute the normal are to take the average normal of all the mesh triangles that meet at the vertex, or to use the normal to a plane fit to the points in p's local neighborhood.
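The triangulation heuristic and the plane-fit normal estimate described above can be sketched as follows. This is a minimal NumPy illustration, not the book's code: the function names, the (H, W, 3) range-grid layout, and the factor of 3 on the median edge length are all assumptions.

```python
import numpy as np

def grid_triangulation(points, max_edge_factor=3.0):
    # points: (H, W, 3) array of scan points ordered by the scanner's
    # adjacent bins, so each grid quad yields two triangles.
    H, W, _ = points.shape
    idx = np.arange(H * W).reshape(H, W)
    tris = []
    for r in range(H - 1):
        for c in range(W - 1):
            a, b = idx[r, c], idx[r, c + 1]
            d, e = idx[r + 1, c], idx[r + 1, c + 1]
            tris.append((a, b, d))
            tris.append((b, e, d))
    tris = np.array(tris)
    pts = points.reshape(-1, 3)
    # Lengths of the three edges of every triangle.
    edges = np.stack([
        np.linalg.norm(pts[tris[:, 0]] - pts[tris[:, 1]], axis=1),
        np.linalg.norm(pts[tris[:, 1]] - pts[tris[:, 2]], axis=1),
        np.linalg.norm(pts[tris[:, 2]] - pts[tris[:, 0]], axis=1),
    ], axis=1)
    # Heuristic from the text: discard triangles with an edge longer
    # than some multiple (here, an assumed 3x) of the median edge length,
    # so that triangles don't span depth discontinuities.
    keep = edges.max(axis=1) <= max_edge_factor * np.median(edges)
    return tris[keep]

def pca_normal(neighborhood):
    # Normal to the best-fit plane of a local neighborhood: the singular
    # vector of the centered points with the smallest singular value.
    centered = neighborhood - neighborhood.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]
```

The SVD route is numerically equivalent to taking the eigenvector of the neighborhood's covariance matrix with the smallest eigenvalue; note the recovered normal has an arbitrary sign, which in practice is disambiguated by orienting it toward the scanner.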
The two most common methods for feature description in this type of point cloud
data are spin images and shape contexts . Both methods are based on computing
histograms of points lying within 3D bins in the neighborhood of a selected point,
but differ in the structure of the bins.
Spin images, proposed by Johnson and Hebert [224], consider a cylindrical volume centered around the selected point, with the cylinder's axis aligned with the point's estimated normal, as illustrated in Figure 8.30a. The cylinder is partitioned
into uniformly spaced bins along the radial and normal directions, with a bin size
roughly equal to the distance between scan points. The number of bins is generally
chosen so that each model point falls in some bin. We then create an "image" h_p(i, j) as the number of points falling in the (i, j)th bin, where i corresponds to the radial direction and j to the normal direction. Only points that have similar normals to the center point contribute to each histogram bin, to avoid contributions from points on the other side of the model. Examples of spin images at various points on an example mesh are illustrated in Figure 8.30b-c.
If we observe the same 3D object in a different orientation, the spin images at corresponding points will agree, making them an attractive basis for feature description.
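The spin-image computation just described can be sketched with the standard cylindrical coordinates alpha (radial distance from the axis) and beta (signed distance along the normal). This is only an illustrative sketch: the function signature, the bin count, the support radius, and the 0.5 cosine threshold on normal agreement are assumptions, not values from the text.

```python
import numpy as np

def spin_image(p, n_p, points, normals, n_bins=8, support=1.0,
               normal_cos_thresh=0.5):
    # p: the selected point (3,); n_p: its unit normal (3,).
    # points, normals: (N, 3) arrays for the rest of the cloud.
    d = points - p
    beta = d @ n_p                                          # along the normal
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta**2, 0.0))  # radial
    # Only points whose normals roughly agree with n_p contribute,
    # excluding points on the other side of the model.
    ok = (normals @ n_p) > normal_cos_thresh
    # Keep points inside the cylindrical support volume.
    ok &= (alpha < support) & (np.abs(beta) < support)
    i = np.floor(alpha[ok] / support * n_bins).astype(int)              # radial bin
    j = np.floor((beta[ok] + support) / (2 * support) * n_bins).astype(int)  # normal bin
    h = np.zeros((n_bins, n_bins))
    np.add.at(h, (i, j), 1.0)   # histogram of point counts per bin
    return h
```

Because alpha and beta are defined relative to the point and its own normal, the histogram is invariant to rigid rotations of the object, which is exactly the invariance property noted above.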
20 If an RGB camera image is also available, feature detection and matching are more reliable, as we discuss shortly. While the intensity-of-return image from the LiDAR scanner could theoretically be used for feature detection, this is rare in practice.