large regions of constant color and bold edges between regions, representing an
abstraction of the scene according to the parts-and-structures kind of hierarchy
present in many computer-vision algorithms. To effect this transformation, they
use an eye-tracked human viewer. Broadly speaking, the eye tracking allows them
to determine which elements of the image are most attention-grabbing, and thus
deserve greater detail. Figure 34.22 shows a sample input and result.
It's also possible to consider a stroke-based rendering of a scene over time, and
try to simplify its strokes in a way that's coherent in the time dimension; to do so
effectively requires a strong understanding of the perception of motion (for strokes
that vary in position or size) and change (for strokes that appear or disappear).
34.7 Discussion and Further Reading
The two abstraction techniques presented in this chapter fall into the simplification and factorization categories. Is it also possible to do schematization? Can we learn schematic representations from large image and drawing databases, for instance? This remains to be seen.
Much of what's been done in expressive rendering until now has emulated
traditional media and tools. But the computer presents us with the potential to
create new media and new tools, and thinking about these may be more productive
than trying to imitate old media. Two examples of this are the diffusion curves
of Orzan et al. [OBW + 08] and the gradient-domain painting of McCann and
Pollard [MP08]. Each relies on the idea that with the support of computation, it's
reasonable for a user's stroke to have a global effect on an image.
In the case of gradient-domain painting, the user edits the gradient of an image
using a familiar digital painting tool interface. A typical stroke, such as a vertical line down the middle of a gray background, creates a large gradient along the stroke, so that the gray to its left becomes darker and the gray to its right becomes lighter; the stroke itself ends up being an edge between regions of differing values. (This is an “edge” in the computer-vision sense, by the way.)
By adjusting the stroke width and the amount of gradient applied, the user can
get varying effects. The user can also grab a part of an existing image's gradient
and use that as a brush, allowing for further interesting effects. To be clear: The
image the user is editing is not precisely the gradient of the final result; rather,
an integration process is applied to the gradient to produce a final image with the
property that its true gradient is as near to the user-sketched gradient as possible.
Figure 34.23 shows an example of photo editing using gradient-domain painting
with a brush whose gradient is taken from elsewhere in the image.
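The integration step can be sketched with a simple Jacobi solver. This is only a minimal illustration, not McCann and Pollard's implementation (their system uses a much faster solver to stay interactive); it assumes the user-edited gradient is given as two arrays, gx and gy, on a pixel grid, and recovers the image whose true gradient is as near to that target as possible in the least-squares sense, by relaxing the Poisson equation lap(I) = div(g).

```python
import numpy as np

def integrate_gradients(gx, gy, n_iters=2000):
    """Recover an image whose gradient is as close as possible
    (least squares) to the target field (gx, gy), by Jacobi
    iteration on the Poisson equation  lap(I) = div(g).
    Boundary pixels are held at zero for simplicity."""
    h, w = gx.shape
    # Divergence of the target gradient field (backward differences,
    # matching forward-difference gradients).
    div = np.zeros((h, w))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    I = np.zeros((h, w))
    for _ in range(n_iters):
        # Each interior pixel becomes the average of its four
        # neighbors, minus a quarter of the divergence term.
        I[1:-1, 1:-1] = 0.25 * (I[:-2, 1:-1] + I[2:, 1:-1] +
                                I[1:-1, :-2] + I[1:-1, 2:] -
                                div[1:-1, 1:-1])
    return I
```

Painting a column of large gx values, as in the vertical-stroke example above, makes the integrated image darker on the left of the stroke and lighter on the right, even though only the stroke pixels were touched. A plain Jacobi loop like this converges far too slowly for interactive use; that is why a hierarchical (multigrid-style) solver matters in practice.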
In diffusion curves, the user again has a familiar digital painting interface, but
in this case each stroke draws boundary conditions for a diffusion equation: In the
basic form, on one side of the stroke the image is constrained to have a certain
color; on the other side it has a different color. The areas in between the strokes
have colors determined from the stroke values by diffusion (i.e., each interior pixel
is the average of its four closest neighbors). Again, a single stroke can drastically
affect the whole image's appearance. But if we think instead about perceptually
significant changes, we see the effects are quite local: In the nonstroke areas, the
values change very smoothly so that there are no perceptually significant edges.
Thus, the medium of diffusion curves allows the artist to work directly with per-
ceptually significant strokes. Figure 34.24 shows an example of the results.
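The diffusion step can be sketched the same way. This is an illustrative simplification, not Orzan et al.'s system: the two-sided color constraint is reduced here to fixed-color stroke pixels, and every non-stroke pixel relaxes toward the average of its four closest neighbors.

```python
import numpy as np

def diffuse_colors(color, is_stroke, n_iters=2000):
    """Fill an image by diffusion: pixels where is_stroke is True
    keep their assigned color; every other pixel relaxes toward
    the average of its four nearest neighbors (Jacobi iteration
    on Laplace's equation)."""
    img = color.copy()
    for _ in range(n_iters):
        avg = np.empty_like(img)
        avg[1:-1, 1:-1] = 0.25 * (img[:-2, 1:-1] + img[2:, 1:-1] +
                                  img[1:-1, :-2] + img[1:-1, 2:])
        # Replicate edge pixels so the border also relaxes.
        avg[0, :], avg[-1, :] = avg[1, :], avg[-2, :]
        avg[:, 0], avg[:, -1] = avg[:, 1], avg[:, -2]
        # Strokes act as boundary conditions: reimpose them each pass.
        img = np.where(is_stroke, color, avg)
    return img
```

With one black stroke and one white stroke, the region between them converges to a smooth ramp: a single stroke changes values across the whole image, yet the nonstroke areas stay free of perceptually significant edges, exactly as described above.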
Figure 34.22: (Top to bottom) The input photograph, the eye-tracker fixation record, and the resultant image for DeCarlo and Santella's abstraction and simplification algorithm. (Courtesy of Doug DeCarlo and Anthony Santella, ©2002 ACM, Inc. Reprinted by permission.)