application of compositing was proposed by Chu et al. [95], whose goal was to create realistic “camouflage images” containing hidden elements, as in a child's picture book. Agarwala et al. [9] extended the ideas in Section 3.3 to build a “panoramic video texture,” a seamlessly looping moving image created from a video shot by a single panning camera. Rav-Acha et al. [384] showed how to create a similar effect, as well as nonlinear temporal edits of a video, such as the manipulation of a race to create a different winner.
While the topic is outside the scope of this chapter, the patch-based approach to inpainting in Section 3.4.2 is an application of texture synthesis, the problem of creating a large chunk of realistic, natural texture from a small example. Major work in this area includes that of Wei and Levoy [539], Ashikhmin [20], Efros and Freeman [128], and Hertzmann et al. [197]. We also note that inpainting can be generalized to apply to non-image-based scenarios, such as filling in holes in a depth image or 3D triangle mesh [221, 124].
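To make the texture-synthesis idea concrete, here is a minimal sketch in the spirit of Efros and Freeman's image quilting, heavily simplified: patches are pasted with a small overlap, and each new patch is chosen to minimize the error against what has already been synthesized in the overlap region. A real implementation would also cut a minimum-error seam through the overlap; all names and parameters below are illustrative, not from the cited papers.

```python
import numpy as np

def synthesize(exemplar, out_size, patch=8, overlap=2, rng=None):
    """Grow a larger texture from a small exemplar by tiling patches.

    Simplified quilting-style synthesis: each new patch is picked from a
    random candidate set to minimize SSD against the already-synthesized
    overlap region (no minimum-cut seam; patches are simply pasted).
    """
    rng = np.random.default_rng(rng)
    h, w = exemplar.shape
    step = patch - overlap
    out = np.zeros((out_size, out_size), dtype=exemplar.dtype)
    # valid top-left corners for candidate patches in the exemplar
    ys = np.arange(0, h - patch + 1)
    xs = np.arange(0, w - patch + 1)
    for oy in range(0, out_size - patch + 1, step):
        for ox in range(0, out_size - patch + 1, step):
            if oy == 0 and ox == 0:
                cy, cx = rng.choice(ys), rng.choice(xs)  # seed patch: random
            else:
                best, best_err = None, np.inf
                for _ in range(50):  # score a random subset of candidates
                    ty, tx = rng.choice(ys), rng.choice(xs)
                    cand = exemplar[ty:ty + patch, tx:tx + patch].astype(float)
                    cur = out[oy:oy + patch, ox:ox + patch].astype(float)
                    err = 0.0
                    if oy > 0:  # top overlap strip
                        err += ((cand[:overlap] - cur[:overlap]) ** 2).sum()
                    if ox > 0:  # left overlap strip
                        err += ((cand[:, :overlap] - cur[:, :overlap]) ** 2).sum()
                    if err < best_err:
                        best_err, best = err, (ty, tx)
                cy, cx = best
            out[oy:oy + patch, ox:ox + patch] = exemplar[cy:cy + patch, cx:cx + patch]
    return out
```

Even this crude version illustrates the core trade-off the cited papers address: larger patches preserve structure, while the overlap matching keeps adjacent patches consistent.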
An early hybrid approach to retargeting was proposed by Setlur et al. [437], who used an importance map to compute ROIs, removed these from the image, and inpainted the holes to create an “empty” background image. This image is then uniformly resized to the desired dimensions, and the ROIs are pasted back onto the new background in roughly the same spatial relationship.
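This pipeline can be sketched end-to-end with toy stand-ins for each stage (a mean fill in place of real inpainting, nearest-neighbor indexing in place of proper resampling; the function and variable names are ours, not Setlur et al.'s):

```python
import numpy as np

def retarget(img, roi_mask, new_h, new_w):
    """Toy version of the hybrid retargeting pipeline:
    1. remove the ROI, 2. "inpaint" the hole with the background mean,
    3. uniformly resize the emptied background, 4. paste the ROI back at
    the proportionally corresponding position.
    Assumes the unscaled ROI still fits at its new anchor; a real system
    would handle clipping and resample the ROI as well."""
    img = img.astype(float)
    bg = img.copy()
    bg[roi_mask] = img[~roi_mask].mean()          # crude hole fill
    h, w = bg.shape
    rows = np.arange(new_h) * h // new_h          # nearest-neighbor resize
    cols = np.arange(new_w) * w // new_w
    out = bg[np.ix_(rows, cols)]
    # paste the ROI back, anchored at its proportionally scaled corner
    ys, xs = np.nonzero(roi_mask)
    y0, x0 = ys.min() * new_h // h, xs.min() * new_w // w
    rh, rw = ys.max() - ys.min() + 1, xs.max() - xs.min() + 1
    out[y0:y0 + rh, x0:x0 + rw] = img[ys.min():ys.min() + rh,
                                      xs.min():xs.min() + rw]
    return out
```

The key property, which the stand-ins preserve, is that the background stretches uniformly while the ROI keeps its original pixels and approximate relative position.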
Krähenbühl et al. [255] made the interesting observation that naïve retargeting can introduce aliasing into the resulting image, manifesting as blurring of sharp edges, and proposed cost function terms to preserve the original image gradients, as well as a low-pass filter to limit the spatial frequencies in the retargeted image. Mansfield et al. [313] showed that when a user-supplied depth map was available, a seam-carving-based approach could be used to create retargeted images that respected depth ordering and could even contain realistically overlapping foreground objects. Cheng et al. [90] described reshuffling applications specifically in the context of scenes with many similar repeated elements.
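The aliasing observation of Krähenbühl et al. is easy to reproduce in one dimension: subsampling a pixel-scale stripe pattern without prefiltering makes the stripes vanish entirely, whereas low-pass filtering first preserves their average intensity. This is a toy 1-D illustration; their method operates on full images with carefully designed filters.

```python
import numpy as np

# 1-D stripe "image": alternating 0/1 at the pixel scale (highest frequency)
x = np.tile([0.0, 1.0], 32)              # 64 samples

# naive retarget: drop every other sample -> the stripe pattern aliases
# away completely (here, to a constant signal)
naive = x[::2]

# low-pass first (2-tap box filter), then subsample -> a faithful
# mid-gray average of the stripes survives
lowpass = np.convolve(x, [0.5, 0.5], mode="same")
filtered = lowpass[::2]
```

The same effect in 2-D is why a retargeted image can show jagged or collapsed fine detail unless the resampling is preceded by appropriate filtering.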
Rav-Acha et al. [383] proposed an interesting approach to video editing of a foreground object that undergoes pose changes (e.g., a person's rotating head). They estimated a non-photorealistic 2D texture map for the 3D object and its mapping to each image in the video sequence. The user performs editing/compositing operations directly on the texture map, which is then warped to produce a retextured video.
This chapter should provide fairly convincing evidence that today it is very difficult to tell whether a digital image is an untouched photograph of a real scene or whether it has been manipulated. This is great news for visual effects in movies, but somewhat
unsettling for photographs in other spheres of life that we expect to be trustworthy
(e.g., newspaper photographs of historic events, photographic evidence in trials). In
fact, a new field of digital forensics has arisen to detect such tampering. Farid provided an excellent general overview [134] and technical survey [133] of techniques for detecting whether a digital image has been manipulated. Such techniques include the detection of telltale regularities in pixel correlations, or inconsistencies in JPEG quantization/blocking, camera transfer function and/or noise, lighting directions, and perspective effects. Lalonde et al. [261, 260] also noted the importance of matching lighting, camera orientation, resolution, and other cues to make a composite image look convincing.
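One of the simplest such cues, JPEG blocking, can be measured with a toy statistic: compare intensity jumps at the 8×8 block boundaries against jumps elsewhere. This is illustrative only; the detectors Farid surveys are far more sophisticated, and the function below is our own sketch, not from the cited work.

```python
import numpy as np

def blockiness(img, block=8):
    """Ratio of horizontal intensity jumps at 8x8 block boundaries to
    jumps elsewhere. Values well above 1 suggest JPEG blocking; a spliced
    region compressed on a shifted grid would lower the ratio locally."""
    d = np.abs(np.diff(img.astype(float), axis=1))   # horizontal gradients
    cols = np.arange(d.shape[1])
    at_boundary = (cols % block) == (block - 1)      # jumps across block edges
    return d[:, at_boundary].mean() / (d[:, ~at_boundary].mean() + 1e-9)
```

A forgery detector built on this idea would compute such statistics over local windows and flag regions whose blocking grid or strength is inconsistent with the rest of the image.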