pipeline. In actual practice (see Chapter 38), the exact order of the tasks within
the parts (or even the parts to which they are allocated) may be altered, but a
graphics system is required to produce results as if they were processed in the
order described. Thus, the pipeline is an abstraction—a way to think about the
work being done; regardless of the underlying implementation, the pipeline allows
us to know what the results will be.
The vertex geometry part of the pipeline is responsible for taking a geometric
description of an object, typically expressed in terms of the locations of certain
vertices of a polygonal mesh (which you can think of informally as an arrangement
of polygons sharing vertices and edges to cover an object, i.e., to approximate
its surface), together with certain transformations to be applied to these vertices,
and computing the actual positions of the vertices after they've been transformed.
The polygons of the mesh, which are defined in terms of the vertices, are thus
implicitly transformed as well.
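To make this concrete, here is a minimal sketch (in C++, with illustrative types and values rather than any particular graphics API) of the kind of work the vertex-geometry stage performs: each vertex of a small mesh is multiplied by a 4 x 4 transformation matrix, and the triangles defined by those vertices are thereby transformed implicitly.

#include <array>
#include <cstdio>
#include <vector>

// Homogeneous point and 4x4 matrix; illustrative stand-ins, not any real API.
struct Vec4 { double x, y, z, w; };
using Mat4 = std::array<std::array<double, 4>, 4>;

// Multiply a vertex by a transformation matrix.
Vec4 transform(const Mat4& m, const Vec4& v) {
    return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
             m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
             m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
             m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w };
}

int main() {
    // One triangle of a mesh, with vertices in homogeneous coordinates (w = 1).
    std::vector<Vec4> vertices = { {0,0,0,1}, {1,0,0,1}, {0,1,0,1} };
    // A translation by (2, 3, 0); in practice this matrix would combine the
    // modeling, viewing, and projection transformations.
    Mat4 M = {{ {1,0,0,2}, {0,1,0,3}, {0,0,1,0}, {0,0,0,1} }};
    for (auto& v : vertices) v = transform(M, v);
    // The triangle defined by these vertices has been implicitly transformed too.
    for (const auto& v : vertices) std::printf("(%g, %g, %g)\n", v.x, v.y, v.z);
}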
The triangle-processing stage takes the polygons of the mesh—most often
triangles—and a specification for a virtual camera whose view we are rendering,
and processes the polygons one by one in a process called rasterization, to con-
vert them from a continuous-geometry representation (triangle) into the discrete
geometry of the pixelized display (the collection of pixels [or portions of pixels]
that this triangle contains).
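The following sketch, again with illustrative rather than authoritative details, shows one common way such a conversion can be done: each pixel center inside the triangle's bounding box is tested against the triangle's three edge functions, and the pixels whose centers pass all three tests become fragments.

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Pt { double x, y; };

// Edge function: positive when p is to the left of the directed edge a -> b.
double edge(const Pt& a, const Pt& b, const Pt& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

int main() {
    // One triangle, already projected into pixel coordinates (counterclockwise).
    Pt a{1.0, 1.0}, b{8.0, 2.0}, c{3.0, 7.0};
    // Bounding box of the triangle, in whole pixels.
    int x0 = (int)std::floor(std::min({a.x, b.x, c.x}));
    int x1 = (int)std::ceil (std::max({a.x, b.x, c.x}));
    int y0 = (int)std::floor(std::min({a.y, b.y, c.y}));
    int y1 = (int)std::ceil (std::max({a.y, b.y, c.y}));
    for (int y = y0; y <= y1; ++y) {
        for (int x = x0; x <= x1; ++x) {
            Pt p{x + 0.5, y + 0.5};   // sample at the pixel's center
            bool inside = edge(a, b, p) >= 0 && edge(b, c, p) >= 0 && edge(c, a, p) >= 0;
            if (inside) std::printf("fragment at pixel (%d, %d)\n", x, y);
        }
    }
}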
The resultant fragments (pixels or portions of pixels that belong to the triangle
and may eventually appear on the display if they're not obscured by some other
fragment) are then assigned colors based on the lighting in the scene, the textures
(e.g., a leopard's spots) that have been assigned to the mesh, etc.
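A minimal sketch of such a coloring computation, assuming a single directional light, Lambertian (diffuse) shading, and a tiny in-memory checkerboard standing in for the texture, might look like this (the specific values are made up for illustration):

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// A 2x2 checkerboard "texture", sampled with nearest-neighbor lookup.
Vec3 sampleTexture(double u, double v) {
    int cell = (int(u * 2) + int(v * 2)) % 2;
    return cell ? Vec3{0.9, 0.8, 0.3} : Vec3{0.3, 0.2, 0.1};
}

int main() {
    Vec3 normal  = normalize({0.0, 0.0, 1.0});   // interpolated surface normal
    Vec3 toLight = normalize({1.0, 1.0, 1.0});   // direction toward the light
    double u = 0.7, v = 0.2;                     // interpolated texture coordinates
    double diffuse = std::max(0.0, dot(normal, toLight));   // Lambertian term
    Vec3 tex = sampleTexture(u, v);
    Vec3 color = { tex.x * diffuse, tex.y * diffuse, tex.z * diffuse };
    std::printf("fragment color = (%.3f, %.3f, %.3f)\n", color.x, color.y, color.z);
}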
If several fragments are associated with the same pixel location, the frontmost
fragment (the one closest to the viewer) is generally chosen to be drawn, although
other operations can be performed on a per-pixel basis (e.g., transparency compu-
tations, or “masking” so that only certain fragments get “drawn,” while others that
are masked are left unchanged). 8
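One simple way to resolve competing fragments is a depth buffer: an incoming fragment is kept only if it is nearer than whatever is already stored at its pixel, and a per-pixel mask can suppress drawing entirely. The sketch below (with made-up fragments, depths, and colors) illustrates both ideas.

#include <cstdio>
#include <limits>
#include <vector>

struct Fragment { int x, y; double depth; unsigned color; };

int main() {
    const int W = 4, H = 4;
    std::vector<double>   depthBuf(W * H, std::numeric_limits<double>::infinity());
    std::vector<unsigned> colorBuf(W * H, 0x000000);   // background color
    std::vector<bool>     mask(W * H, true);           // true = pixel may be drawn
    mask[1 * W + 2] = false;                           // mask out pixel (2, 1)

    // Three fragments, two of which land on the same pixel (1, 1).
    std::vector<Fragment> frags = {
        {1, 1, 0.80, 0xff0000},   // farther fragment at (1, 1)
        {1, 1, 0.30, 0x00ff00},   // nearer fragment at (1, 1): this one wins
        {2, 1, 0.10, 0x0000ff},   // lands on the masked pixel: left unchanged
    };
    for (const auto& f : frags) {
        int i = f.y * W + f.x;
        if (mask[i] && f.depth < depthBuf[i]) {   // keep only the frontmost fragment
            depthBuf[i] = f.depth;
            colorBuf[i] = f.color;
        }
    }
    std::printf("pixel (1,1) = %06x, pixel (2,1) = %06x\n",
                colorBuf[1*W + 1], colorBuf[1*W + 2]);
}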
In modern systems, all of this work is usually done on one or more Graphics
Processing Units (GPUs), often residing on a separate graphics card that's plugged
into the computer's communication bus. These GPUs have a somewhat idiosyn-
cratic architecture, specially designed to support rapid and deep pipelining of the
graphics pipeline; they have also become so powerful that some programmers
have started treating them as coprocessors and using them to perform computa-
tions unrelated to graphics. This idea—having a separate graphics unit that even-
tually becomes so powerful that it gets used as a (nongraphics) coprocessor—is
an old one and has been reinvented multiple times since the 1960s. In early gen-
erations, this coprocessor was typically moved closer and closer to the CPU (e.g.,
sharing memory with the CPU) and grew increasingly powerful until it became so
much a part of the CPU that designers began creating a new graphics processor
that was closely associated to the display; this was called the wheel of reincarna-
tion in a historically important paper by Myer and Sutherland [MS68]. The notion
may be slightly misleading, however, as observed by Whitted [Whi10]: “We
8. Note that the choice of a representation by a raster grid implies something about the
final results: The information in the result is limited! You cannot “zoom in” to see
more detail in a single pixel. But sometimes in computing, what should be displayed
in a single pixel requires working with subpixel accuracy to get a satisfactory result.
We'll frequently encounter this tension between the “natural” resolution at which to
work (the pixel) and the need to sometimes do subpixel computations.
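For instance (a made-up example, not from the text), the coverage of a single pixel by a primitive's edge can be estimated from a 4 x 4 grid of subpixel samples, yielding a fractional value rather than an all-or-nothing answer:

#include <cstdio>

int main() {
    // The half-plane x + y <= 1.2 stands in for the edge of some primitive
    // crossing the pixel whose corners are (0, 0) and (1, 1).
    int covered = 0, samples = 0;
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            double x = (i + 0.5) / 4.0, y = (j + 0.5) / 4.0;   // subpixel sample point
            ++samples;
            if (x + y <= 1.2) ++covered;
        }
    }
    // A fractional coverage (here about 0.6) gives a smoother result than
    // declaring the whole pixel either "inside" or "outside".
    std::printf("estimated coverage = %.2f\n", double(covered) / samples);
}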
 