[Figure 9.18. Local illumination rendering pipelines. (a) Z-buffer algorithm and Gouraud shading: list of polygons → shape to world coords → trivial reject/accept → vertex lighting computations → world to hclip coords → clipping → clip coords in [0,1]×[0,1]×[0,1] → frame and Z-buffer → screen. (b) Z-buffer algorithm and Phong shading: the same stages, except that the lighting computations come after clipping.]
more efficient since it can take advantage of coherence. Furthermore, since the image is generated in scan line order, there are opportunities for hardware optimization, and anti-aliasing becomes easier.
As an example, Figure 9.18(a) shows the rendering pipeline for a Z-buffer algo-
rithm and Gouraud shading. A polygon is first transformed into world coordinates.
Simple tests, such as back face elimination or bounding box tests, are performed to
eliminate polygons that are obviously not visible. The illumination is computed at
each vertex. This computation cannot be postponed because the perspective trans-
formation distorts the needed normal and light vectors. (Camera coordinates would
also work.) Next the object is mapped to the homogeneous clipping coordinates and
clipped. If new vertices are introduced at this stage, then we must compute illumination values for them. To get these new values, we map the new vertices back to the corresponding points in world coordinates and do the illumination computations there. Finally, the vertices of the clipped polygon are mapped back down to the
normalized clip coordinates in the unit cube and sent to the Z-buffer along with the
illumination information. The Gouraud algorithm will now interpolate the vertex
values over the entire object.
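The key point of the Gouraud stage above can be sketched as follows. This is a minimal illustration, not the book's code, and the function names (`vertex_intensity`, `gouraud_interpolate`) and the simple Lambert model are assumptions for the sake of the example: illumination is evaluated once per vertex in world coordinates, before the perspective map distorts normals and light vectors, and the rasterizer afterwards only interpolates the resulting scalar intensities.

```python
# Hypothetical sketch of Gouraud-style vertex lighting: per-vertex
# Lambert illumination in world coordinates, then linear interpolation
# of the scalar intensities during scan conversion.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = dot(v, v) ** 0.5
    return tuple(x / n for x in v)

def vertex_intensity(normal, light_dir, ambient=0.1, diffuse=0.9):
    """Lambert illumination at one vertex, computed in world coordinates."""
    n = normalize(normal)
    l = normalize(light_dir)
    return ambient + diffuse * max(0.0, dot(n, l))

def gouraud_interpolate(i0, i1, t):
    """Linearly interpolate vertex intensities along an edge or scan line."""
    return (1.0 - t) * i0 + t * i1

# Light shining down the +z axis onto two vertices of a polygon edge:
light = (0.0, 0.0, 1.0)
i_a = vertex_intensity((0.0, 0.0, 1.0), light)   # normal facing the light
i_b = vertex_intensity((1.0, 0.0, 1.0), light)   # normal tilted 45 degrees
mid = gouraud_interpolate(i_a, i_b, 0.5)         # intensity halfway along edge
```

Note that only the intensities, not the normals, survive into the interpolation step; this is exactly why pipeline (b) must defer lighting until after clipping when Phong shading, which interpolates normals instead, is used.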
A second example of the rendering pipeline where we use the Z-buffer algorithm
and Phong shading is shown in Figure 9.18(b). The difference between this pipeline
and the one in Figure 9.18(a) is that, since we need normals, we cannot do any light-
ing computations until we are finished clipping. It is at that point that we need to
map all vertices back to world or camera coordinates. The normals, their interpolated