processing. With the advent of image-based rendering (the synthesis of new views
of a scene from one or more photographs or renderings of previous views), certain
problems arose, such as “What pixel values should I fill in for the parts of the
scene that weren't visible in the previous view, but are in this one?” If it's a matter
of just a pixel or two, filling in with colors from neighboring pixels is good enough
to fool the eye, but for larger regions, hole filling is a serious (although obviously
underdetermined) problem. Problems of hole filling, combining multiple blurred
images to create an unblurred image, compositing multiple images when no a pri-
ori masks are known, etc., are at the heart of the emerging field of computational
photography. Other aspects of computational photography are the development
of cameras with coded apertures (complex masks inside the lens assembly of a
camera), and of computational cameras, in which processing built into the camera
can adjust the image-acquisition process. The information on Laplacian image fill
that we provide on the book's website gives just a slight notion of the power of
these techniques.
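To make the idea of Laplacian fill concrete, here is a minimal sketch (ours, not the website's implementation), assuming a grayscale image stored as a NumPy array and a boolean mask marking the hole pixels. Each unknown pixel is repeatedly replaced by the average of its four neighbors (Jacobi iteration), which amounts to solving Laplace's equation on the hole so that the surrounding colors diffuse smoothly inward.

import numpy as np

def laplace_fill(image, hole_mask, iterations=2000):
    # image: 2D float array (grayscale); run once per channel for color.
    # hole_mask: boolean array, True where the pixel value is unknown.
    result = image.astype(float)
    result[hole_mask] = result[~hole_mask].mean()  # crude initial guess
    for _ in range(iterations):
        # Average of the 4-neighborhood; np.roll wraps at the image
        # border, which is fine for holes in the interior.
        neighbors = (np.roll(result, 1, axis=0) + np.roll(result, -1, axis=0) +
                     np.roll(result, 1, axis=1) + np.roll(result, -1, axis=1)) / 4.0
        result[hole_mask] = neighbors[hole_mask]  # known pixels stay fixed
    return result

A production implementation would use a faster solver (e.g., multigrid or a direct sparse solve) and handle image borders explicitly, but the fixed-point structure is the same.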
There's no clear dividing line between “images” and “rectangular arrays
of values.” Organizing graphics-related data in rectangular arrays is powerful
because once it's in this form, any kind of per-cell operation can be applied. But
there are even more general things that fit into a broad definition of “image,” and
you should open your mind to further possibilities. For instance, we often store
samples, many per pixel area, which are then used to compute a pixel value. We'll
see this when we discuss rendering, where we often shoot multiple rays near a
pixel center and average the results to get a pixel value. These multiple values are,
for practical reasons, often taken at fixed locations around the pixel center, making
it easy to compare them, but they need not be. It's essential, of course, to record
the semantics of the samples, just as we earlier suggested recording the seman-
tics of the pixel values. Images containing these multiple values are generally not
meant for display—instead, they provide a spatial organization of information that
can be converted to a form useful for display or other reuse of the data.
Arrays of samples that are to be combined into values for display require that
the combination process itself be made explicit. In rendering, the “measurement
equation,” discussed in Section 29.4.1, makes this combination explicit.
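As a small sketch of making that combination explicit (the buffer layout and names here are our own, not from the text): if we store S color samples per pixel, a box filter resolves them to a displayable image by plain averaging.

import numpy as np

def resolve_samples(sample_buffer):
    # sample_buffer has shape (height, width, S, 3): S color samples per
    # pixel, e.g., one per ray shot at jittered positions near the pixel
    # center. A box filter is just the mean over the sample axis; a tent
    # or Gaussian filter would instead take a weighted mean, possibly
    # drawing on samples from neighboring pixels as well.
    return sample_buffer.mean(axis=2)

rng = np.random.default_rng(0)
samples = rng.random((8, 8, 4, 3))  # toy buffer: 4 samples per pixel
image = resolve_samples(samples)    # shape (8, 8, 3), ready for display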
The notion of coverage, or alpha value, as developed by Porter and Duff,
has become nearly universal. At the same time, it has been extended somewhat.
Adobe's PDF format [Ado08], for instance, defines for each object both an “opac-
ity” and a “shape” property for each point. A shape value of 0.0 means that the
point is outside the object, and 1.0 means it's inside. A value like 0.5 is used to
indicate that the point is on the edge of a “soft-edged” object. The product of
the shape and opacity values corresponds to the alpha value we've described in
this chapter. These two values can then be used to define quite
complex compositing operations.
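A minimal sketch of how the two values combine (the names are illustrative, not Adobe's API), assuming premultiplied colors with channels in [0, 1]: the effective alpha is the product of shape and opacity, after which the ordinary Porter-Duff “over” rule applies.

def composite_over(src_color, src_shape, src_opacity, dst_color, dst_alpha):
    # Effective alpha is the shape-opacity product: a point on a soft
    # edge (shape 0.5) of an 80%-opaque object composites with alpha 0.4.
    src_alpha = src_shape * src_opacity
    # Porter-Duff "over" with premultiplied colors.
    out_alpha = src_alpha + dst_alpha * (1.0 - src_alpha)
    out_color = tuple(sc + dc * (1.0 - src_alpha)
                      for sc, dc in zip(src_color, dst_color))
    return out_color, out_alpha

color, alpha = composite_over((0.32, 0.0, 0.0), 0.5, 0.8, (0.0, 0.0, 0.6), 1.0)
# color == (0.32, 0.0, 0.36), alpha == 1.0: the soft red edge partially
# covers the opaque blue background.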
17.8 Exercises
Exercise 17.1: The blend operation can be described by what happens to a point in
a pixel that's in neither the opaque part of U nor the opaque part of V, in just U,
in just V, or in both. Give such a description. Is the U-and-V part of the composition
consistent with our assumptions about the distribution of the opaque parts of each
individual pixel?