Figure 8.23. The three phase-shifted images used in the fringe projection algorithm.
Huang et al. [206] described the basic approach of projecting three sinusoidal images with frequency ω, separated by 2π/3 in phase:

$$
\begin{aligned}
I_R(x, y) &= \tfrac{1}{2}\left(1 + \cos\!\left(\omega x - \tfrac{2\pi}{3}\right)\right) \\
I_G(x, y) &= \tfrac{1}{2}\left(1 + \cos(\omega x)\right) \\
I_B(x, y) &= \tfrac{1}{2}\left(1 + \cos\!\left(\omega x + \tfrac{2\pi}{3}\right)\right)
\end{aligned}
\qquad (8.6)
$$

The images have been scaled in intensity to take up the full [0, 1] range, and are illustrated in Figure 8.23.
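For concreteness, a minimal sketch of generating the three fringe patterns of Equation (8.6) follows; the image size, fringe period, and the function name fringe_patterns are illustrative assumptions, not details from the text.

```python
import numpy as np

def fringe_patterns(width, height, omega):
    """Generate the three phase-shifted sinusoidal fringes of Eq. (8.6),
    scaled to the full [0, 1] intensity range."""
    x = np.arange(width, dtype=np.float64)
    # One row per pattern; the fringes vary only along x.
    row_R = 0.5 * (1.0 + np.cos(omega * x - 2.0 * np.pi / 3.0))
    row_G = 0.5 * (1.0 + np.cos(omega * x))
    row_B = 0.5 * (1.0 + np.cos(omega * x + 2.0 * np.pi / 3.0))
    # Replicate each row down the image to produce vertical fringes.
    return tuple(np.tile(r, (height, 1)) for r in (row_R, row_G, row_B))

# Example: 1024x768 patterns with a fringe period of 32 projector columns.
I_R, I_G, I_B = fringe_patterns(1024, 768, omega=2.0 * np.pi / 32.0)
```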
Huang et al. made the clever observation that these fringe patterns could be pro-
jected at extremely high speed (i.e., 240 frames per second) by modifying a single-chip
DLP projector. A DLP projector modulates the white light from a projector bulb into
grayscale intensities using a digital micromirror device (DMD), an array of tiny mir-
rors that rapidly flip back and forth. RGB colors are created at each pixel by placing a
rapidly spinning “color wheel” between the bulb and the DMD. If the color wheel is
removed, then sending a static RGB image to the projector results in moving grayscale
fringes projected at high speed onto an object. This trick has been adopted by many
researchers in the projector-camera community.
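As a small illustration of this trick, the sketch below packs three such fringe patterns into the color channels of a single static 8-bit RGB frame, which is what would be sent to the color-wheel-less projector; the resolution, fringe period, and 8-bit quantization are assumptions for illustration.

```python
import numpy as np

# Three fringe patterns in [0, 1], as in the sketch above.
x = np.arange(1024, dtype=np.float64)
omega = 2.0 * np.pi / 32.0
I_R, I_G, I_B = (np.tile(0.5 * (1.0 + np.cos(omega * x + s)), (768, 1))
                 for s in (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0))

# Pack the patterns into the R, G, B channels of one static 8-bit frame.
# With the color wheel removed, the projector displays the three channels
# sequentially, so this single image yields three rapid grayscale fringes.
frame = np.stack([I_R, I_G, I_B], axis=-1)          # shape (768, 1024, 3)
frame = np.clip(np.round(frame * 255.0), 0, 255).astype(np.uint8)
```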
The DLP projector is synchronized with a high-speed digital camera. Therefore, a
sequence of three successive images captured by the camera will be given by
$$
\begin{aligned}
I_1(x, y) &= A(x, y) + B(x, y)\cos\!\left(\psi(x, y) - \tfrac{2\pi}{3}\right) \\
I_2(x, y) &= A(x, y) + B(x, y)\cos\psi(x, y) \\
I_3(x, y) &= A(x, y) + B(x, y)\cos\!\left(\psi(x, y) + \tfrac{2\pi}{3}\right)
\end{aligned}
\qquad (8.7)
$$

where A(x, y) is the per-pixel average intensity of the three images, B(x, y) is the per-pixel amplitude of the observed sinusoid, and ψ(x, y) is the observed phase map. We can recover this phase map at each pixel by combining the three observed intensities:
$$
\psi(x, y) = \arctan\!\left(\frac{\sqrt{3}\,\bigl(I_1(x, y) - I_3(x, y)\bigr)}{2 I_2(x, y) - I_1(x, y) - I_3(x, y)}\right)
\qquad (8.8)
$$

While ψ(x, y) is only recovered modulo 2π, if the surface is sufficiently smooth, the phase can be unwrapped into a continuous function. This uniquely identifies the column of the projected image, allowing the usual triangulation method to be used to obtain the 3D location of each point.
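A minimal sketch of this per-pixel phase recovery, followed by a simple row-wise unwrapping, might look like the following; the use of NumPy's arctan2 and unwrap (rather than the single arctangent written in Equation (8.8)) and the synthetic test images are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Per-pixel wrapped phase of Eq. (8.8) from the three captured images.

    arctan2 is used so the result covers the full (-pi, pi] range; Eq. (8.8)
    writes this as a single arctangent of the ratio."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

def unwrap_rows(psi):
    """Unwrap the phase along each row (the direction in which the fringes
    vary), assuming the surface is smooth enough that the true phase never
    changes by more than pi between neighboring pixels."""
    return np.unwrap(psi, axis=1)

# Synthetic check: constant A and B, and a smooth phase ramp across columns.
H, W = 480, 640
psi_true = np.tile(np.linspace(0.0, 8.0 * np.pi, W), (H, 1))
A, B = 0.5, 0.4
I1 = A + B * np.cos(psi_true - 2.0 * np.pi / 3.0)
I2 = A + B * np.cos(psi_true)
I3 = A + B * np.cos(psi_true + 2.0 * np.pi / 3.0)

psi = unwrap_rows(wrapped_phase(I1, I2, I3))
# The unwrapped phase identifies the projector column (up to a constant
# offset), which is then triangulated against the camera ray for depth.
```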
This general approach has been extended in various ways to improve the speed of
image acquisition and processing. For example, Zhang and Huang [ 571 ] replaced the