36.9.1 Spatial Antialiasing (xy)
Under instantaneous pinhole projection, visibility between points P and Q, and
thus coverage as well, is a binary value. However, coverage between a set of
points in the scene and a set of points on the image plane can be fractional, since
those sets give rise to many possible rays that may have different binary visibility
results.
Of particular interest is the case where the region in the scene is a surface
defined by values of the function P(i, j, t) (a moving patch) and the region on
the image plane is a pixel. For simplicity of definition, assume the surface is a
convex polygon so that it cannot occlude itself. We say that a surface defined by
P(i, j, t) fully covers the pixel when the binary visibility function to all points of
the form Q(x, y, u, v, t) is 1 for all parameters. We say that the surface partially
covers the pixel if the normalized integral of the binary visibility function over
the parameter space is greater than 0 but less than 1.
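Stated as a formula (one possible formalization; normalizing by the measure
μ(X) of the combined parameter space X is our assumption, chosen so that
coverage lands in [0, 1]), with V denoting the binary visibility function:

\[
\mathrm{coverage} \;=\; \frac{1}{\mu(X)} \int_X V\bigl(P(i, j, t),\, Q(x, y, u, v, t)\bigr)\; di\, dj\, dt\, dx\, dy\, du\, dv .
\]

Full coverage then corresponds to the value 1, partial coverage to a value
strictly between 0 and 1, and no coverage to 0.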
Aliasing is, broadly speaking, caused by trying to store too many values in too
few slots (see Figure 36.21 and Chapter 18). As in a game of musical chairs, some
values cannot be stored. Aliasing arises in imaging when we try to represent all
of the light from the scene that passes through a pixel using a single sample point
(e.g., the one in the center). In this case, a single, binary visibility value represents
the visibility of the whole portion of the surface that projects within the pixel area.
The single sample covers an infinitesimal area. Over that area, the binary visibility
result is in fact accurate. But the pixel's area[2] is much larger, so the binary result is
insufficient to represent the true coverage: there may be only fractional coverage.
Rounding that fractional coverage to 0 or 1 creates inaccuracies that often appear
in the form of a blocky image. Introducing a better approximation of partial
coverage that considers multiple light paths can reduce the impact of this artifact.
The process of considering more paths (or equivalently, samples) is thus called
antialiasing.
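To make the musical-chairs picture concrete, the sketch below (our own
illustration, not the book's code) contrasts a single center sample with a grid of
subsamples when estimating how much of a pixel a 2D triangle covers. The
triangle coordinates, the 16 × 16 sample count, and the edge-function inside
test are all assumptions chosen for the example.

// Estimate the fractional coverage of a 2D triangle over a unit pixel
// by testing many sample points instead of only the pixel center.
#include <cstdio>

struct Vec2 { double x, y; };

// Signed-area test: positive if c lies to the left of edge (a, b).
static double edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Binary visibility/coverage of a single sample point against a
// counterclockwise triangle (no occlusion, matching the convex-patch
// assumption above).
static bool covered(const Vec2& p, const Vec2& v0, const Vec2& v1, const Vec2& v2) {
    return edge(v0, v1, p) >= 0 && edge(v1, v2, p) >= 0 && edge(v2, v0, p) >= 0;
}

int main() {
    // A triangle that partially overlaps the pixel [0,1] x [0,1].
    Vec2 v0{-0.5, -0.5}, v1{1.5, 0.25}, v2{0.25, 1.5};

    // One sample at the pixel center: a binary 0-or-1 answer.
    bool center = covered(Vec2{0.5, 0.5}, v0, v1, v2);

    // N x N regular subsamples: a fractional estimate of coverage.
    const int N = 16;
    int hits = 0;
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i) {
            Vec2 s{(i + 0.5) / N, (j + 0.5) / N};
            if (covered(s, v0, v1, v2)) ++hits;
        }

    std::printf("center sample:      %d\n", center ? 1 : 0);
    std::printf("estimated coverage: %.3f\n", double(hits) / (N * N));
    return 0;
}

For this triangle the center sample rounds the answer to 1, while the subsample
grid reports a fractional coverage (here about 0.97) that a single binary sample
cannot express.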
Ideally, we'd integrate the incident radiance function over the entire pixel area,
or perhaps the support of a sensor-response function, which may be larger than a
pixel. For now, let's assume that pixels respond only to light rays that pass through
their bounds and have uniform response regardless of where within the pixel we
sample the light.
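Under that assumption the pixel acts as a box filter: its value is the average of
the incident radiance over its bounds. In formula form (our notation, not the
book's; L(x, y) is the radiance arriving along the primary ray through image
point (x, y), and A is the pixel's area):

\[
m \;=\; \frac{1}{A} \int_{\text{pixel}} L(x, y)\; dx\, dy ,
\]

which in practice is estimated by averaging the radiance computed at a finite
set of sample points within the pixel.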
A few definitions will help us maintain precise language when we distinguish
between the value of a pixel and a value at a point within the pixel. Figure 36.20
shows a single primitive (a triangle) overlaid on a pixel grid. A fragment is the
part of a primitive that lies within a given pixel. Each pixel contains one or more
samples, which correspond to primary rays. To produce an image, we need to
compute a color for each pixel. This color is computed from values at each
sample. The samples, however, need not be computed independently. For example,
all of the samples in the central pixel are completely covered by one fragment,
so perhaps we could compute a single value and apply it to all of them. We refer
to the process of computing a value for one or more samples as shading, to
distinguish it from computing coverage, that is, which samples are covered by a
fragment. Although our discussion has been couched in the language of physically
based rendering, “shading” applies equally well to arbitrary color computations
(e.g., for text or expressive rendering).
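As a sketch of how this decoupling can work, in the spirit of hardware
multisample antialiasing (the struct names, four-sample layout, and resolve step
below are illustrative assumptions, not the book's API): a fragment records
which of a pixel's samples it covers, the renderer shades once per fragment, and
that single color is broadcast to every covered sample.

// Decoupling shading from coverage: shade once per fragment, write the
// result to every covered sample, then average samples into a pixel color.
#include <array>
#include <cstdint>
#include <cstdio>

struct Color { float r, g, b; };

constexpr int SAMPLES_PER_PIXEL = 4;

struct Pixel {
    std::array<Color, SAMPLES_PER_PIXEL> samples{};  // per-sample storage
};

struct Fragment {
    uint32_t coverageMask;  // bit i set => sample i is covered
    Color    shadedColor;   // computed once per fragment, not per sample
};

// Apply one fragment to a pixel: the single shaded value goes to every
// covered sample; uncovered samples keep their previous contents.
void resolveFragment(Pixel& px, const Fragment& frag) {
    for (int i = 0; i < SAMPLES_PER_PIXEL; ++i)
        if (frag.coverageMask & (1u << i))
            px.samples[i] = frag.shadedColor;
}

// The final pixel color averages the samples (a box-filter "resolve").
Color resolvePixel(const Pixel& px) {
    Color sum{0, 0, 0};
    for (const Color& c : px.samples) {
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    float n = SAMPLES_PER_PIXEL;
    return {sum.r / n, sum.g / n, sum.b / n};
}

int main() {
    Pixel px;  // starts black
    // A fragment covering 3 of the 4 samples, shaded exactly once.
    Fragment frag{0b0111, Color{1.0f, 0.5f, 0.0f}};
    resolveFragment(px, frag);
    Color out = resolvePixel(px);
    std::printf("pixel color: %.3f %.3f %.3f\n", out.r, out.g, out.b);
    return 0;
}

Shading cost then scales with the number of fragments rather than the number
of samples, while coverage is still resolved at sample rate; the final averaging
is the uniform box-filter response assumed earlier.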
[2] More precisely: the support of the measurement or response function for the pixel.
 
 