such shading approaches, the two ideas became conflated, and the term “shader”
was used for the new notion.
Modern shaders are really graphics programs rather than being restricted to
computing colors of points. There are geometry shaders, which can alter the list
of triangles to be processed in subsequent stages, and tessellation shaders, which
take high-level descriptions of surfaces and produce triangle lists from them; an
example is a subdivision surface shader, which might take as input the vertices
and mesh structure of a subdivision surface's control mesh, and produce as output
a collection of tiny triangles that form a good approximation of the limit surface.
There are also vertex shaders that serve only to transform the vertex locations,
and generally have nothing to do with eventual color.
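To make the vertex shader's role concrete, here is a minimal Python sketch (using NumPy) of the one job such a shader typically performs: multiplying each vertex position by a transformation matrix. The function name and the matrix `mvp` are illustrative, not part of any particular graphics API; a real vertex shader would be written in a shading language such as GLSL and run on the GPU.

```python
import numpy as np

def vertex_shader(mvp, position):
    """Transform one model-space vertex to clip space,
    as a vertex shader conceptually does."""
    homogeneous = np.append(position, 1.0)  # add the homogeneous coordinate w = 1
    return mvp @ homogeneous

# Toy transform: identity plus a translation of 2 units along x.
mvp = np.eye(4)
mvp[0, 3] = 2.0
v = vertex_shader(mvp, np.array([1.0, 0.0, 0.0]))
print(v)  # [3. 0. 0. 1.]
```

In a real pipeline this function would run once per vertex, in parallel, with `mvp` supplied as a uniform; the point here is only that the shader maps positions to positions and computes no colors.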
While the typical graphics program might have a geometry shader, a tessellation
shader, a vertex shader, and a fragment shader, there is also the ability to turn
off any portion of the pipeline and say, “Just compute this far and then stop.” Thus,
a program might run its geometry and tessellation shader, and then return data to
the CPU, which could modify it in some way before returning it to the GPU to be
processed by the rasterization and clipping unit and then a fragment shader.
We'll describe some basic vertex and fragment shaders to give you a feel for
how shaders are related to the ideas you've seen throughout this book.
What follows is a rough and informal description of the history of raster
graphics, at a high level.
• At the start of graphics, no one had any idea how to do anything, so we
found a way to create rasterized lines, for instance, and to draw surfaces
with flat shading.
• The next year, we thought of a new way to rasterize, and thought about
curves rather than just lines, and someone came up with a new lighting
model.
• Pretty soon, we realized that there was a higher-level problem—
rasterization of primitives—to be solved, and that lighting models
would evolve every year, and that we needed an architecture in which
that sort of thing was possible. On the other hand, there were parts
of almost every graphics program—clipping, for instance—that would
probably remain fairly constant, and appear in the same place in the
program; this was the start of the “pipeline” idea.
• Making a general-purpose language for describing lighting was too
expensive when most lighting was going to use the Phong model. So
we split into two camps: fixed function and programmable. The programmable
camp's rendering was slow, but very general-purpose. The
fixed-function camp rendered things fast, but was constrained in what
sorts of rendering it could do. The only reason for the split was the difference
in how people wanted to control what went on in the computer:
Some, who loved interactivity, said, “You can adjust the constants, and
I'll burn the algorithm into silicon”; the others said, “Interactivity isn't
so important to me . . . but I really want expressiveness. I can always get
more computers, but I want a programming language to describe my
output.” The first gang went on to develop the fixed-function approach,
and from an industry point of view, they were clustered around Silicon