video sequence from a fixed camera with fixed background and a moving foreground
object, so that a “clean plate” background image without the shadow can be created.
Finlayson et al. [141] proposed methods for removing shadows from images (e.g., the
unwanted shadow of a photographer) based on finding the edges of the shadow, esti-
mating shadow-free illumination-invariant images, and solving a Poisson equation.
Wu et al. [555] addressed a similar problem of shadow removal using a generalized
trimap (actually with four regions) in which the user specifies definitely-shadowed,
definitely-unshadowed, unknown, and shadow/object boundary regions. The algo-
rithm minimizes a Gibbs-like energy function built from the statistics of the regions.
Regardless of how a shadow is extracted, when composited into a new image it must
deform realistically with respect to the new background; some 3D information about
the scene is necessary for high-quality results (see Chapter 8).
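The generalized four-region trimap described above is essentially an integer label map over the image. The sketch below illustrates one way to represent it and split it into per-region masks for gathering statistics; the label names and layout are illustrative assumptions, not Wu et al.'s actual notation:

```python
import numpy as np

# Illustrative labels for a generalized (four-region) trimap.
SHADOWED, UNSHADOWED, UNKNOWN, BOUNDARY = 0, 1, 2, 3

def region_masks(trimap):
    """Split a label map into boolean masks, one per region, so that
    per-region color statistics can be gathered for an energy function."""
    return {name: trimap == label
            for name, label in [("shadowed", SHADOWED),
                                ("unshadowed", UNSHADOWED),
                                ("unknown", UNKNOWN),
                                ("boundary", BOUNDARY)]}

# A tiny 2x3 example: left column shadowed, right column unshadowed.
t = np.array([[0, 2, 1],
              [0, 3, 1]])
masks = region_masks(t)
```

Each mask can then be used to index the image and collect the color samples that drive the region statistics.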
Hillman [199] observed that in natural images, the subject is often illuminated
from behind, causing a highlight of bright pixels around the foreground boundary.
Like shadow pixels, these highlight pixels do not obey the matting equation's assump-
tion and could be estimated by assuming a mixture of three colors (foreground,
background, highlight) rather than two.
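The three-color mixture can be written as a convex combination of foreground, highlight, and background colors. The sketch below illustrates the forward model for a single pixel; the weight names are assumptions for illustration, and the estimation problem (recovering the weights and colors from an observed pixel) is of course the hard part:

```python
import numpy as np

def mix_three(F, H, B, a_f, a_h, a_b):
    """Model a pixel as a convex combination of foreground (F),
    highlight (H), and background (B) colors; weights must sum to 1."""
    assert abs(a_f + a_h + a_b - 1.0) < 1e-6
    return (a_f * np.asarray(F, float)
            + a_h * np.asarray(H, float)
            + a_b * np.asarray(B, float))

# A pixel on a backlit boundary: mostly highlight, some foreground.
c = mix_three(F=[0.2, 0.3, 0.1], H=[1.0, 1.0, 0.9], B=[0.0, 0.0, 0.5],
              a_f=0.3, a_h=0.6, a_b=0.1)
```

Setting the highlight weight to zero recovers the usual two-color matting equation.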
Most of the algorithms in this chapter make the underlying assumption that the
foreground object is opaque, and that fractional α values arise from sub-pixel-sized
fine features combined with blur from the camera's optics or motion. In this context,
it makes the most sense to interpret α as a measure of coverage by the foreground.
However, most of the methods in this chapter will fail in the presence of “optically
active” objects that are transparent, reflective, or refractive, such as a glass of water
(Figure 2.24). In this case, even though a pixel may be squarely in the foreground,
its color may arise from a distorted surface on the background. Pulling a coverage-
based matte of the foreground and compositing it on a new background will look
awful, since the foreground should be expected to distort the new background and
contain no elements of the old background. To address this issue, Zongker et al. [582]
proposed environment matting, a system that not only captures a coverage-based
matte of an optically active object but also captures a description of the way the
object reflects and refracts light. The method requires the object to be imaged in
the presence of different lighting patterns from multiple directions using a special
acquisition stage. The method was refined by Chuang et al. [98] and extended to
work in real-world environments by Wexler et al. [545].
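For reference, the coverage-based compositing that fails on optically active objects is just the standard over operation; a minimal sketch (the array shapes are an assumption):

```python
import numpy as np

def composite_over(alpha, F, B_new):
    """Coverage-based compositing: C = alpha*F + (1 - alpha)*B_new.
    alpha has shape (H, W); F and B_new have shape (H, W, 3).
    The new background shows through unchanged wherever alpha < 1,
    so refraction or reflection of B_new cannot be reproduced."""
    a = np.asarray(alpha, float)[..., None]
    return a * np.asarray(F, float) + (1.0 - a) * np.asarray(B_new, float)

# A fully covered pixel shows only F; an uncovered pixel shows only B_new.
F = np.full((1, 2, 3), 0.8)
B = np.full((1, 2, 3), 0.1)
alpha = np.array([[1.0, 0.0]])
C = composite_over(alpha, F, B)
```

Environment matting replaces this per-pixel blend with a learned description of how the object gathers light from the environment, so the new background can be warped and attenuated rather than simply revealed.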
2.10.2 Matting with Custom Hardware
Finally, we note that additional, customized hardware can greatly improve the ease
and quality of pulling a matte. Such methods have the advantage of not requiring user
input like a trimap or scribbles, but have disadvantages in terms of generalizability,
expense, and calibration effort.
For example, early work on matting in Hollywood used sodium lighting to create a
yellowish background of a frequency that could be filtered from color film and used to
expose a registered strip of matte film, removing the need to unmix colors [393, 517].
More recently, Debevec et al. [116] built a stage containing infrared light sources
and an infrared camera for difference matting, combined with a sphere of many
color LEDs that could surround an actor and produce realistic light distributions