light projected onto his or her face. An up-and-coming alternative drawn from traditional
computer vision is the passive technique of multi-view stereo (MVS). Multi-view
stereo algorithms combine the natural images from a large set of calibrated cameras
with dense correspondence estimation to create a 3D dataset, typically represented as
a texture-mapped mesh or a set of colored voxels (Section 8.3). While MVS techniques
are about an order of magnitude less accurate than active lighting methods, they can
still produce convincing, high-resolution 3D data.
Finally, we discuss common algorithms for registering 3D datasets, since several
scans from different locations may be needed to see all sides of an object and
build a complete model (Section 8.4). As in the 2D case, we detect, describe, and
match features, and use these as the basis for automatically registering two scans of
the same scene from different perspectives. We then address the fusion of a large
number of scans into a single coordinate system and data representation.
8.1
LIGHT DETECTION AND RANGING (LIDAR)
We can think of a LiDAR scanner 2 as an advanced version of the “laser measuring
tape” that can be found in a hardware store. The basic principles are similar: a laser
pulse or beam is emitted from a device, reflects off a point in the scene, and returns to
the device. The time of flight of the pulse or the phase modulation of the beam is used
to recover the distance to the object, based on a computation involving the speed of
light. While the hardware store laser measuring tape requires the user to manually
orient the laser beam, a LiDAR scanner contains a motor and rapidly spinning mirror
that work together to sweep the laser spot across the scene in a grid pattern.
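The two distance computations mentioned above can be sketched in a few lines. This is an illustrative calculation only; the function names and the example modulation frequency are assumptions, not values from any particular scanner's specification:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Time-of-flight principle: the pulse travels to the surface and
    back, so the distance is half the round-trip path length."""
    return C * round_trip_time_s / 2.0

def phase_distance(phase_shift_rad, mod_freq_hz):
    """Phase-based principle: the beam's amplitude is modulated at a
    known frequency, and the measured phase shift of the return gives
    the distance as a fraction of half the modulation wavelength.
    The result is unambiguous only within that half-wavelength."""
    wavelength = C / mod_freq_hz
    return (phase_shift_rad / (2.0 * math.pi)) * wavelength / 2.0

# A pulse returning after about 667 nanoseconds traveled ~200 m in
# total, i.e., the surface is ~100 m away:
d_tof = tof_distance(667e-9)

# With a (hypothetical) 10 MHz modulation, a phase shift of pi radians
# corresponds to a quarter of the ~30 m modulation wavelength:
d_phase = phase_distance(math.pi, 10e6)
```

The half-wavelength ambiguity in the phase-based computation is one reason such scanners have a shorter maximum range than time-of-flight systems.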
Figure 8.1 depicts two 3D scanners based on the main methodologies for LiDAR
data acquisition. The first scanner, in Figure 8.1a, uses a time-of-flight-based sen-
sor and can measure distances of hundreds of meters, while the second scanner, in
Figure 8.1b, is a phase-based system with a maximum range of about eighty meters.
Despite the long distances involved, both types of scanners are accurate to within
a few millimeters. An added advantage is that the distance to each point is mea-
sured directly, as opposed to inferred using a vision-based method like multi-view
stereo. For these reasons, laser scanning is considered the gold standard for 3D data
acquisition. We'll discuss the physical principles behind both scanners shortly.
As illustrated in Figure 8.2, LiDAR data is usually collected in a spherical coordinate
system. For every azimuth and elevation angle (θ, φ), the scanner returns a distance
d, measured in physical units like meters, to the first point in the scene encoun-
tered along the specified ray. 3 For given intervals of θ and φ, the d(θ, φ) values can
be interpreted as a range or depth image, which can be manipulated using standard
image processing algorithms. 4 Well before their application to visual effects, LiDAR
2 In military applications, the acronym LADAR (LAser Detection And Ranging) is often used instead.
3 Some LiDAR scanners report multiple distance returns per ray, which can occur due to transparent,
reflective, or quickly moving surfaces in the scene.
4 Scanners also frequently report the return intensity at each ray, which is related to the reflectance,
material properties, and orientation of the corresponding surface. This return intensity image can
also be processed like a normal digital image.
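The spherical collection geometry described above can be made concrete by converting a range image d(θ, φ) into 3D Cartesian points. The angular conventions below (azimuth θ measured in the horizontal plane, elevation φ measured up from it) are one common choice, assumed here for illustration; real scanners vary in their conventions and axis orientations:

```python
import numpy as np

def range_image_to_points(d, theta, phi):
    """Convert a range image into an array of 3D points.

    d, theta, phi: 2D arrays of the same shape, giving the measured
    distance, azimuth angle, and elevation angle at each pixel of the
    range image."""
    x = d * np.cos(phi) * np.cos(theta)
    y = d * np.cos(phi) * np.sin(theta)
    z = d * np.sin(phi)
    return np.stack([x, y, z], axis=-1)

# Hypothetical 64 x 32 scan pattern covering a 90-degree azimuth and
# 45-degree elevation field of view:
theta, phi = np.meshgrid(np.linspace(-np.pi / 4, np.pi / 4, 64),
                         np.linspace(-np.pi / 8, np.pi / 8, 32))
d = np.full_like(theta, 10.0)   # constant 10 m range everywhere
points = range_image_to_points(d, theta, phi)
```

Because the points are stored on the (θ, φ) grid, neighboring pixels of the range image correspond to neighboring rays, which is what makes standard image processing operations (smoothing, edge detection) applicable to depth data.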
 