perform level of detail (LOD) selection or generate geometry procedurally. We discuss a number of issues related to delivering geometry to the rendering API in Section 10.10.2.
Vertex-level operations. Once the rendering API has the geometry in some triangulated format, a number of operations are performed at the vertex level. Perhaps the most important such operation is the transformation of vertex positions from modeling space into camera space. Other vertex-level operations might include skinning for animation of skeletal models, vertex lighting, and texture coordinate generation. In consumer graphics systems at the time of this writing, these operations are performed by a user-supplied microprogram called a vertex shader. We give several examples of vertex and pixel shaders at the end of this chapter, in Section 10.11.
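
To make this concrete, here is a minimal C++ sketch of the kind of per-vertex work a vertex shader performs: transforming a position from modeling space into camera space and then into clip space. The Vector3, Vector4, and Matrix4x4 types and the transformPoint helper are illustrative placeholders rather than the classes developed in this book, and we assume the row-vector convention with an affine modeling-to-camera transform.

// Minimal sketch of per-vertex work, roughly what a vertex shader does.
// The vector/matrix types and helpers below are hypothetical placeholders,
// not a specific library's API.
struct Vector3 { float x, y, z; };
struct Vector4 { float x, y, z, w; };
struct Matrix4x4 { float m[4][4]; };

// Multiply a point (w = 1) by a 4x4 matrix, row-vector convention.
Vector4 transformPoint(const Matrix4x4 &M, const Vector3 &p) {
    Vector4 r;
    r.x = p.x*M.m[0][0] + p.y*M.m[1][0] + p.z*M.m[2][0] + M.m[3][0];
    r.y = p.x*M.m[0][1] + p.y*M.m[1][1] + p.z*M.m[2][1] + M.m[3][1];
    r.z = p.x*M.m[0][2] + p.y*M.m[1][2] + p.z*M.m[2][2] + M.m[3][2];
    r.w = p.x*M.m[0][3] + p.y*M.m[1][3] + p.z*M.m[2][3] + M.m[3][3];
    return r;
}

struct VertexIn  { Vector3 posModel; /* normals, UVs, bone weights... */ };
struct VertexOut { Vector4 posClip;  /* lit color, UVs passed through... */ };

// "Vertex shader": modeling space -> camera space -> clip space.
VertexOut processVertex(const VertexIn &v,
                        const Matrix4x4 &modelToCamera,   // affine, so w stays 1
                        const Matrix4x4 &cameraToClip) {  // projection
    VertexOut out;
    Vector4 posCamera = transformPoint(modelToCamera, v.posModel);
    Vector3 pc = { posCamera.x, posCamera.y, posCamera.z };
    out.posClip = transformPoint(cameraToClip, pc);
    // Other per-vertex work (skinning, vertex lighting, texture coordinate
    // generation) would go here.
    return out;
}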
Culling, clipping, and projection. Next, we must perform three operations to get triangles in 3D onto the screen in 2D. The exact order in which these steps are taken can vary. First, any portion of a triangle outside the view frustum is removed by a process known as clipping, which is discussed in Section 10.10.4. Once we have a clipped polygon in 3D clip space, we then project the vertices of that polygon, mapping them to the 2D screen-space coordinates of the output window, as was explained in Section 10.3.5. Finally, individual triangles that face away from the camera are removed (“culled”), based on the clockwise or counterclockwise ordering of their vertices, as we discuss in Section 10.10.5.
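
As an illustration of the culling step, the following sketch tests the winding of a projected triangle in screen space using a signed-area (2D cross product) computation. The Point2 type is a placeholder, and the choices that y increases downward and that clockwise vertices are front facing are assumptions made here for the example; real APIs let you select the convention.

// Sketch of backface culling from screen-space vertex ordering.
// The Point2 type and the choice that clockwise = front facing are
// illustrative assumptions.
struct Point2 { float x, y; };

// Signed area of the screen-space triangle (p0, p1, p2). The sign of the
// result tells us the winding order.
float signedArea(const Point2 &p0, const Point2 &p1, const Point2 &p2) {
    return 0.5f * ((p1.x - p0.x) * (p2.y - p0.y) -
                   (p2.x - p0.x) * (p1.y - p0.y));
}

// Returns true if the triangle should be culled. With y increasing downward
// in screen space, vertices that appear clockwise on the screen yield a
// positive signed area under this formula; we treat clockwise as front
// facing here, and degenerate (zero-area) triangles are culled as well.
bool isBackFacing(const Point2 &p0, const Point2 &p1, const Point2 &p2) {
    return signedArea(p0, p1, p2) <= 0.0f;
}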
Rasterization. Once we have a clipped polygon in screen space, it is
rasterized. Rasterization refers to the process of selecting which pixels
on the screen should be drawn for a particular triangle; interpolating
texture coordinates, colors, and lighting values that were computed
at the vertex level across the face for each pixel; and passing these
down to the next stage for pixel shading. Since this operation is
usually performed at the hardware level, we will only briefly mention
rasterization in Section 10.10.6.
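
Although rasterization is normally done in hardware, a small software sketch helps show what “selecting pixels and interpolating vertex values” means. The sketch below walks the triangle's screen-space bounding box, uses barycentric weights (edge functions) to decide which pixel centers are covered, and blends the vertex colors across the face. The types are illustrative placeholders, and perspective correction is omitted for brevity.

#include <algorithm>
#include <cmath>

// Sketch of rasterization: walk the triangle's screen-space bounding box,
// use barycentric coordinates to decide which pixel centers the triangle
// covers, and interpolate the colors computed at the three vertices.
struct Pt  { float x, y; };
struct Col { float r, g, b; };

static float edge(const Pt &a, const Pt &b, const Pt &p) {
    // Twice the signed area of triangle (a, b, p).
    return (b.x - a.x) * (p.y - a.y) - (p.x - a.x) * (b.y - a.y);
}

void rasterizeTriangle(const Pt &p0, const Pt &p1, const Pt &p2,
                       const Col &c0, const Col &c1, const Col &c2) {
    float area = edge(p0, p1, p2);
    if (area == 0.0f) return;                       // degenerate triangle

    int minX = (int)std::floor(std::min({p0.x, p1.x, p2.x}));
    int maxX = (int)std::ceil (std::max({p0.x, p1.x, p2.x}));
    int minY = (int)std::floor(std::min({p0.y, p1.y, p2.y}));
    int maxY = (int)std::ceil (std::max({p0.y, p1.y, p2.y}));

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            Pt p = { x + 0.5f, y + 0.5f };          // sample at pixel center
            float b0 = edge(p1, p2, p) / area;      // weight of vertex 0
            float b1 = edge(p2, p0, p) / area;      // weight of vertex 1
            float b2 = edge(p0, p1, p) / area;      // weight of vertex 2
            if (b0 < 0.0f || b1 < 0.0f || b2 < 0.0f)
                continue;                           // pixel not covered
            Col c = { b0*c0.r + b1*c1.r + b2*c2.r,  // interpolate color
                      b0*c0.g + b1*c1.g + b2*c2.g,
                      b0*c0.b + b1*c1.b + b2*c2.b };
            (void)c; // hand (x, y, c) down to the pixel-shading stage here
        }
    }
}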
Pixel shading. Next we compute a color for the pixel, a process known
as shading. Of course, the innocuous phrase “compute a color” is the
heart of computer graphics! Once we have picked a color, we then
write that color to the frame buffer, possibly subject to alpha blending
and z-buffering. We discuss this process in Section 10.10.6. In today's
consumer hardware, pixel shading is done by a pixel shader, which is a
small piece of code you can write that takes the values from the vertex
shader (which are interpolated across the face and supplied per-pixel),
and then outputs the color value to the final step: blending.
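
To illustrate those final writes, here is a minimal sketch of a depth test against the z-buffer followed by “source over” alpha blending into the frame buffer. The FrameBuffer layout and the blend equation are illustrative assumptions, not the behavior of any particular API.

// Sketch of the final per-pixel write: z-buffer test, then alpha blending
// into the frame buffer.
struct RGBA { float r, g, b, a; };

struct FrameBuffer {
    int    width, height;
    RGBA  *color;   // width * height color values
    float *depth;   // width * height depth values (smaller = closer)
};

void writePixel(FrameBuffer &fb, int x, int y, const RGBA &src, float srcDepth) {
    int i = y * fb.width + x;

    // z-buffering: discard the fragment if something closer is already there.
    if (srcDepth >= fb.depth[i])
        return;

    // Alpha blending: blend the shaded color over the existing color.
    RGBA &dst = fb.color[i];
    dst.r = src.a * src.r + (1.0f - src.a) * dst.r;
    dst.g = src.a * src.g + (1.0f - src.a) * dst.g;
    dst.b = src.a * src.b + (1.0f - src.a) * dst.b;
    dst.a = src.a + (1.0f - src.a) * dst.a;

    fb.depth[i] = srcDepth;
}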