5. Multiply by M_wind to transform the points to pixel coordinates.
This description omits two critical steps: determining the color at each vertex, and interpolating those colors across each triangle as we determine which pixels the triangle covers. The first of these was originally called lighting, as you learned in Chapter 2, but now the two together are often performed at each pixel by a small GPU program called a shader, and the whole process is therefore sometimes called "shading." These are discussed in several later chapters of this book. Lighting is an expensive process, so for efficiency it is worth delaying it as late as possible, doing the lighting computation for a vertex (or pixel) only if it makes a difference in the final image. The clipping stage is an ideal place to do this: you can avoid the work for all the objects that are not visible in the final output. And for many basic lighting rules, it is possible to do the lighting after transforming to the standard perspective view volume, or even after transforming to the standard parallel view volume, although not after homogenization.² Because of this, it makes sense to do all the clipping in the pre-homogenized parallel view volume, then do the lighting, and finally homogenize, convert to pixel coordinates, and draw filled polygons with interpolated colors.
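This ordering (clip in the pre-homogenized volume, then light, then homogenize and map to pixels) can be sketched in code for a single vertex. This is only an illustrative sketch: the matrix names M_persp and M_wind, the trivial point-level clip test, and the light_fn callback are assumptions for the example, not the book's interface.

```python
import numpy as np

def render_vertex(p_world, M_persp, M_wind, light_fn):
    """Sketch of the ordering argued for above: clip in the
    pre-homogenized view volume, then light, then homogenize
    and map to pixel coordinates."""
    # Transform to the (pre-homogenized) view volume.
    p = M_persp @ np.append(p_world, 1.0)   # homogeneous coordinates

    # Clip: discard the vertex if it lies outside the view volume.
    # (A real clipper clips polygons against all the bounding planes;
    # this is a trivial point-level stand-in.)
    x, y, z, w = p
    if not (-abs(w) <= x <= abs(w) and -abs(w) <= y <= abs(w)):
        return None                          # culled: no lighting cost paid

    # Light only the vertices that survived clipping.
    color = light_fn(p_world)

    # Homogenize (the perspective divide) ...
    p = p / w

    # ... and map to pixel coordinates.
    p_pixel = M_wind @ p
    return p_pixel[:2], color
```

With identity matrices this just passes coordinates through, but the control flow shows where the lighting cost is avoided for clipped vertices.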
4. Clip against x = ±1, y = ±1, and z = …

Figure 13.17: The projection of the midpoint of PQ is not the same as the midpoint of the segment P′Q′.
What about interpolation of colors over the interior of a triangle, given the color values at the corners? The answer is, "It's not as simple as it looks at first." In particular, linearly interpolating in pixel coordinates will not work. To see this, look at the simpler problem shown in Figure 13.17: you've got a line segment PQ in the world, with a value (say, temperature) at each end, and the temperature is interpolated linearly along this line segment, so that the midpoint is at a temperature exactly halfway between the endpoints. Suppose that line segment is transformed into the line segment P′Q′ in the viewport. If we take the midpoint (P + Q)/2 and compute the point it transforms to, it will in general not be (P′ + Q′)/2, so the temperature assigned to (P′ + Q′)/2 should not be the average of the temperatures for P and Q.
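This failure is easy to check numerically. In the sketch below the "transformation" is just a toy perspective divide by −z (a simplifying assumption standing in for the full pipeline); P and Q sit at different depths, and the projection of the world-space midpoint does not coincide with the screen-space midpoint of P′Q′:

```python
import numpy as np

def project(p):
    """Toy perspective projection: divide x and y by -z.
    (An assumed stand-in; any perspective map shows the same effect.)"""
    x, y, z = p
    return np.array([x / -z, y / -z])

P = np.array([0.0, 1.0, -1.0])   # near endpoint
Q = np.array([0.0, 1.0, -3.0])   # far endpoint, same world-space height

mid_world = (P + Q) / 2                      # midpoint of PQ in the world
mid_screen = (project(P) + project(Q)) / 2   # midpoint of P'Q' on screen

print(project(mid_world))  # where the world midpoint actually lands
print(mid_screen)          # a different point
```

Here project(mid_world) has y = 0.5, while the screen midpoint has y = 2/3, so assigning the average temperature to the screen midpoint attaches it to the wrong world point.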
The only case where linear interpolation does work is when the endpoints P and Q are at the same depth in the scene (measured from the eye). The classic picture of train tracks converging to a point on the horizon provides a good instance of this. Although the crosspieces of the train track ("sleepers" in the United Kingdom, "ties" in the United States) are at constant spacing on the track itself, their spacing in the image is not constant: the distant ties appear very close together in the image. If we assign a number to each tie (1, 2, 3, ...), then the tie number varies linearly in world space, but nonlinearly in image space.
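The tie-spacing effect can be reproduced with the same kind of toy projection (image height y′ = y / −z, an assumption for illustration): ties at uniform world spacing project to image positions whose gaps shrink with distance.

```python
# Ties at uniform 1-unit spacing along the track, receding in depth.
# Toy projection y' = y / -z (an illustrative assumption).
ties = [(1.0, -(2.0 + n)) for n in range(5)]   # (height y, depth z) for tie n

image_y = [y / -z for (y, z) in ties]
gaps = [a - b for a, b in zip(image_y, image_y[1:])]
print(image_y)  # tie number is linear in world space ...
print(gaps)     # ... but the image-space gaps shrink: nonlinear
```

The gaps form a strictly decreasing sequence, which is exactly the "ties bunch up near the horizon" picture described above.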
This suggests that interpolation in image space may be very messy, but the truth is that it's also not as complicated as it looks at first. In Section 15.6.4.2 we will return to this topic and explain how to perform perspective-correct interpolation simply.
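Although the details wait until Section 15.6.4.2, the standard remedy can be sketched now: linearly interpolate the attribute divided by w, and 1/w, in screen space, then divide the two. The function below is such a sketch; parameterizing by a screen-space fraction t between the projected endpoints is an assumption for the example.

```python
def perspective_correct_lerp(t, attr_p, w_p, attr_q, w_q):
    """Interpolate an attribute (e.g., temperature) at screen-space
    fraction t between projected endpoints P' and Q'.  Linearly
    interpolating attr/w and 1/w, then dividing, recovers the value
    that linear world-space interpolation would assign."""
    num = (1 - t) * attr_p / w_p + t * attr_q / w_q   # attr / w
    den = (1 - t) / w_p + t / w_q                     # 1 / w
    return num / den

# Endpoints at w = 1 (near) and w = 3 (far), temperatures 10 and 30.
# A naive screen-space lerp at t = 0.5 would give 20; the
# perspective-correct value corresponds to a world point nearer P:
print(perspective_correct_lerp(0.5, 10.0, 1.0, 30.0, 3.0))  # → 15.0
```

The screen midpoint corresponds to the world point one quarter of the way from P to Q here (where w = 1.5), and the attribute there is 15, not 20.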
2. Many shading rules depend on dot products, and while linear transformations alter
these in ways that are easy to undo, the homogenizing transformation's effects are not
easy to undo.
 
 