could be rendered in separate passes and then a single image could be produced
at the end.
As graphics developed, the particular choices of transformations to be applied
to triangles, or how values computed at vertices were to be interpolated across
triangles, or even how high-level descriptions of objects were to be converted to
triangle lists, all varied. But there were a few things that were shared by essentially
all programs: vector math, clipping, rasterization, and some amount of per-pixel
compositing and blending. The development of GPUs has reflected this: GPUs have become more and more like general-purpose processors, except that (a) vector and matrix operations are well supported, and (b) clipping and rasterization units remain a part of the design. The modern interface to the GPU now consists of one or more small programs that are applied to geometric data (these are called vertex shaders), followed by clipping and rasterization, and one or more small programs, called pixel shaders or fragment shaders, that are applied to the “fragments” produced by rasterization. A more appropriate name for what's currently done (computing shading values for one or more samples associated with a pixel) might be sample shaders. The programmer writes these shaders in a separate language, and then tells the GPU in which order to use them and how to link them together (i.e., how to pass data from one to the other). Typically, packages (like GL 4) provide facilities for describing the linking process, compiling and loading the shaders onto the GPU, and then passing data, in the form of triangle lists, texture maps, etc., to the GPU.
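To make this concrete, here is a minimal sketch of that compile-and-link process in C against the GL API. It assumes a GL 4 context and an extension loader (GLEW, in this sketch) have already been initialized; the two trivial shaders and the helper names compile and buildProgram are invented for illustration.

    #include <stdio.h>
    #include <GL/glew.h>  /* assumes a GL context and loader already exist */

    static const char* vsSource =
        "#version 330 core\n"
        "layout(location = 0) in vec3 position;\n"
        "void main() { gl_Position = vec4(position, 1.0); }\n";

    static const char* fsSource =
        "#version 330 core\n"
        "out vec4 fragColor;\n"
        "void main() { fragColor = vec4(1.0); }\n";

    /* Compile one shader stage, printing the log on failure. */
    static GLuint compile(GLenum stage, const char* src) {
        GLuint shader = glCreateShader(stage);
        glShaderSource(shader, 1, &src, NULL);
        glCompileShader(shader);
        GLint ok;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof log, NULL, log);
            fprintf(stderr, "shader error: %s\n", log);
        }
        return shader;
    }

    /* Attach the stages to a program and link them, resolving the
       vertex-to-fragment interface described in the text. */
    GLuint buildProgram(void) {
        GLuint vs = compile(GL_VERTEX_SHADER, vsSource);
        GLuint fs = compile(GL_FRAGMENT_SHADER, fsSource);
        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        glDeleteShader(vs);  /* the linked program keeps what it needs */
        glDeleteShader(fs);
        return prog;         /* call glUseProgram(prog) before drawing */
    }

Once linked, the program is bound with glUseProgram, and triangle lists, texture maps, and uniform values are handed to it through the usual GL entry points.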
Why are these programs called shaders? In the GL version of the Lambertian
lighting model, similar to the one presented in Chapter 6, the color of a point is
computed (using GL notation) by
C = k_d C_d L (ℓ · n),    (33.1)

where ℓ is the unit direction vector to the light source, k_d is a representation of the reflectance of the material, C_d is the color of the material (i.e., a red-green-blue triple saying how much light the surface reflects in each of these wavelength bands), L is the color of the light (again an RGB triple, which is multiplied term by term with C_d), and n is the surface normal. In Phong lighting, another term is added, involving the view vector as well as a specular constant k_s, a specular color C_s, and a specular exponent n_s.[1]
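As a concrete illustration, here is a GLSL fragment shader sketch that evaluates Equation 33.1 once per fragment. It is a minimal sketch, not code from this chapter: the interface names (vNormal, vLightDir, and the combined uniform kdCd holding the product k_d C_d) are invented for the example, and the max with zero, which keeps surfaces facing away from the light black, is a standard practical addition that the equation itself does not include.

    #version 330 core

    in vec3 vNormal;     // surface normal, interpolated by the rasterizer
    in vec3 vLightDir;   // direction toward the light, interpolated likewise

    uniform vec3 kdCd;   // the product k_d * C_d: reflectance times material color
    uniform vec3 L;      // light color, an RGB triple

    out vec4 fragColor;

    void main() {
        vec3 n = normalize(vNormal);          // renormalize after interpolation
        vec3 l = normalize(vLightDir);
        float lambert = max(dot(l, n), 0.0);  // clamp so back-facing light is zero
        // Equation 33.1: termwise product of k_d*C_d and L, scaled by (l . n).
        // A Phong term would add k_s * C_s * pow(max(dot(r, v), 0.0), n_s),
        // with r the reflected light direction and v the view vector.
        fragColor = vec4(kdCd * L * lambert, 1.0);
    }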
Increasingly complex combinations of data like this, including texture data to describe surface color or surface-normal direction, were added, and the formulas for computing the color at a point came to look more and more like general programs. Cook [Coo84] introduced the idea
that the user could write a small program as part of the modeling process, and the
rendering program could compile this program into something that executed the
proper operations. Cook called this programmable shading, although perhaps programmable lighting would be a better term for the process we've just described.
In that era, the computation entailed by lighting models was often so great that it made sense to do much of the computation on a per-vertex basis, and then interpolate values across triangles; the interpolation process was called shading, and it varied from the interpolation of colors to the interpolation of values to be used
in computing colors. Since papers describing lighting models often also described the interpolation scheme to be used with them, the two notions became intertwined, and the name "shader" attached itself to the small per-vertex and per-fragment programs described above.
[1] In the terminology we've used from Chapter 14 onward, these would be the “glossy” constant, color, and exponent.
 