Figure 36.12: Rendering of a scene (left), and a visualization of its depth buffer (right).
surface is visible to the camera. Yet when rendering is complete, correct visibility
is ensured.
Second, after the scene is rendered, the depth buffer describes the first scene
intersection for any ray from the center of projection through a sample. Because
the position of each sample on the image plane and the camera parameters are all
known, the depth value of a sample is the only additional information needed to
reconstruct the 3D position of the scene point that colored it.
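
For instance, the camera-space point can be recovered from nothing more than the sample's pixel coordinates, the camera parameters, and the stored depth. The following C++ sketch illustrates this under stated assumptions: a pinhole camera looking down the negative z-axis, a depth buffer that stores linear camera-space depth (distance along -z) rather than the hyperbolic value produced by a projection matrix, and an illustrative function name that is not drawn from Chapter 15.

// Sketch: recover the camera-space point that colored a sample, given its
// pixel location, the stored (linear, camera-space) depth, and the camera.
#include <cmath>

struct Vec3 { float x, y, z; };

// width, height: image size in pixels
// verticalFieldOfView: in radians
// depth: positive camera-space distance along the view axis (-z),
//        as read from the depth buffer
Vec3 reconstructCameraSpacePoint(float px, float py,
                                 int width, int height,
                                 float verticalFieldOfView, float depth) {
    // Convert the pixel location to normalized device coordinates in [-1, 1].
    const float ndcX = 2.0f * (px + 0.5f) / width - 1.0f;
    const float ndcY = 1.0f - 2.0f * (py + 0.5f) / height;   // flip so +y is up

    // Half-extents of the image plane at unit distance from the center of projection.
    const float tanHalfFov = std::tan(0.5f * verticalFieldOfView);
    const float aspect = float(width) / float(height);

    // The ray through this sample meets the plane z = -depth at this point.
    return { ndcX * aspect * tanHalfFov * depth,
             ndcY * tanHalfFov * depth,
             -depth };
}
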
Third, after the scene is rendered, the depth buffer can directly evaluate the
visibility function relative to the center of projection. For a camera-space point Q,
V((0, 0, 0), Q) = 1 if and only if the depth value stored at the projection of Q is not
less than the depth of Q.
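
A sketch of such a query appears below. It assumes the same pinhole camera model and linear depth-buffer convention as the previous sketch; visibleFromCamera and its parameters are illustrative names, not code from Chapter 15, and a small tolerance guards against precision error when Q lies exactly on a visible surface.

#include <cmath>

struct Vec3 { float x, y, z; };

// Evaluates V((0, 0, 0), Q) from a completed depth buffer.
// depthBuffer[y * width + x] is assumed to hold the camera-space depth
// (distance along -z) of the nearest surface at each sample.
bool visibleFromCamera(const Vec3& Q, const float* depthBuffer,
                       int width, int height, float verticalFieldOfView) {
    const float depthOfQ = -Q.z;              // camera looks down -z
    if (depthOfQ <= 0.0f) return false;       // at or behind the center of projection

    // Project Q with the same pinhole model used during rendering.
    const float tanHalfFov = std::tan(0.5f * verticalFieldOfView);
    const float aspect = float(width) / float(height);
    const float ndcX = (Q.x / depthOfQ) / (aspect * tanHalfFov);
    const float ndcY = (Q.y / depthOfQ) / tanHalfFov;
    const int px = int((ndcX + 1.0f) * 0.5f * width);
    const int py = int((1.0f - ndcY) * 0.5f * height);
    if (px < 0 || px >= width || py < 0 || py >= height) return false;

    // Q is visible iff no rendered surface lies strictly closer along this ray,
    // i.e., the stored depth is not less than Q's own depth (up to a tolerance).
    const float epsilon = 1e-3f;
    return depthBuffer[py * width + px] >= depthOfQ - epsilon;
}
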
The second and third applications deserve further explanation: why solve
visibility queries after rendering is already complete? Many
rendering algorithms make multiple passes over the scene and the framebuffer.
The ability to efficiently evaluate ray intersection queries and visibility after an
initial pass means that subsequent rendering passes can be more efficient. One
common technique exploiting this is the depth prepass [HW96]. In that pass,
the renderer renders only the depth buffer, with no shading computations per-
formed. Such a limited rendering pass may be substantially more efficient than
a typical rendering pass, for two reasons. First, fixed-function circuitry can be
employed because there is no shading. Second, minimal memory bandwidth is
required when writing only to the depth buffer, which is often stored in com-
pressed form [HAM06].
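
The two passes might be structured as follows. This is only a sketch, using the OpenGL API as one concrete possibility (the text itself is API-agnostic); drawSceneGeometry and drawSceneShaded are hypothetical application functions that submit the same geometry in each pass.

#include <GL/gl.h>

void drawSceneGeometry();   // hypothetical: submits geometry with trivial shading
void drawSceneShaded();     // hypothetical: submits geometry with full materials

void renderWithDepthPrepass() {
    // Pass 1: depth only. Disable color writes so no shading results are
    // stored; a trivial (or empty) fragment stage keeps this pass cheap.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glClear(GL_DEPTH_BUFFER_BIT);
    drawSceneGeometry();

    // Pass 2: full shading. Re-enable color writes, keep the depth buffer
    // read-only, and pass only fragments that match the prepass depth, so
    // each visible sample is shaded exactly once.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_LEQUAL);
    glClear(GL_COLOR_BUFFER_BIT);
    drawSceneShaded();
}
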
Note that a depth buffer must be paired with another algorithm such as raster-
ization for finding intersections of primary rays with the scene. Chapter 15 gives
C++ code for ray casting and rasterization implementations of that intersection
test. The rasterization implementation includes the code for a simple depth buffer.
That implementation assumes that all polygons lie beyond the near clipping plane
(see Chapter 13 for a discussion of clipping planes). This is to work around one
of the drawbacks of the depth buffer: It is not a complete solution for visibility.
Polygons need to be clipped against the near plane during rasterization to avoid
the projection singularity at z = 0. The depth buffer can represent depth values
behind the camera; however, rasterization algorithms are awkward and often inef-
ficient to implement on triangles before projection. As a result, most rasterization
algorithms pair a depth buffer with a geometric clipping algorithm. That geomet-
ric algorithm effectively performs a conservative visibility test by eliminating the
parts of primitives that lie behind the camera before rasterization. The depth buffer
then ensures correctness at the screen-space samples.
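
As an illustration of that geometric step, the following sketch clips a camera-space polygon against the near plane z = -nearZ, discarding the portion at or behind the plane before projection and rasterization. The Vec3 type and the function name are assumptions for this sketch, not the interface of Chapter 15's implementation.

#include <vector>

struct Vec3 { float x, y, z; };

// Keeps the part of the polygon with z <= -nearZ (in front of the near plane,
// camera looking down -z). A clipped triangle yields 0, 3, or 4 vertices.
std::vector<Vec3> clipToNearPlane(const std::vector<Vec3>& poly, float nearZ) {
    std::vector<Vec3> out;
    const int n = int(poly.size());
    for (int i = 0; i < n; ++i) {
        const Vec3& a = poly[i];
        const Vec3& b = poly[(i + 1) % n];
        const bool aIn = (a.z <= -nearZ);
        const bool bIn = (b.z <= -nearZ);
        if (aIn) out.push_back(a);
        if (aIn != bIn) {
            // The edge crosses the plane; emit the intersection point.
            const float t = (-nearZ - a.z) / (b.z - a.z);
            out.push_back({ a.x + t * (b.x - a.x),
                            a.y + t * (b.y - a.y),
                            -nearZ });
        }
    }
    return out;   // empty if the whole polygon lies behind the near plane
}
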
 