This was the preferred depth encoding until fairly recently. It was preferred because it is mathematically elegant and efficient in fixed-function circuitry to express the entire vertex transformation process as a matrix product. However, the widespread adoption of programmable vertex transformations and floating-point buffers in consumer hardware has made other formats viable. This reopened a classic debate on the ideal depth buffer representation. Of course, the ideal representation depends on the application, so while this mapping may no longer be preferred for some applications, it remains well suited for others. More than storage precision is at stake. For example, algorithms that expect to read world-space distances from the depth buffer pay some cost to reconstruct those values from warped ones, and the precision of the world-space value and the cost of recovering it may be significant considerations.
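To make that reconstruction cost concrete, the following is a minimal sketch, assuming an OpenGL-style perspective projection with a [0, 1] depth range and near and far plane distances n and f; the function name and the particular plane distances are illustrative, not taken from the text.

    #include <cstdio>

    // Recover the positive view-space distance (-z) from a warped depth-buffer value d,
    // where d = 0 at the near plane and d = 1 at the far plane. This inverts the
    // hyperbolic mapping d = f/(f - n) * (1 - n/(-z)).
    float viewDistanceFromWarpedDepth(float d, float n, float f) {
        return (n * f) / (f - d * (f - n));
    }

    int main() {
        const float n = 0.1f, f = 100.0f;   // illustrative plane distances
        // Half of the encoded range covers only the first ~0.2 units of a 100-unit
        // frustum, showing how strongly the warp concentrates precision near the camera.
        std::printf("d = 0.50 -> distance %.4f\n", viewDistanceFromWarpedDepth(0.50f, n, f));
        std::printf("d = 0.99 -> distance %.4f\n", viewDistanceFromWarpedDepth(0.99f, n, f));
        return 0;
    }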
Linear   The terms linear z, linear depth, and w-buffer describe a family of possible values that are all linear in z. The “w” refers to the w-component of a point after multiplication by a perspective projection matrix but before homogeneous division.
These representations include the direct z-value for convenience; the positive “depth” value −z; the normalized value (z + n)/(n − f) that is 0 at the near plane and 1 at the far plane; and 1 − (z + n)/(n − f), which happens to have nice precision properties in floating-point representation [LJ99]. In fixed point these give uniform world-space depth precision throughout the camera frustum, which makes z-fighting consistent in depth and can simplify the process of assigning decal offsets and other “epsilon” values. Linear depth is often conceptually (and computationally!) easier to work with in pixel shaders that require depth as an input. Examples include soft particles [Lor07] and screen-space ambient occlusion [SA07].
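As a concrete check of these encodings, here is a small sketch that evaluates them for a few sample points, assuming the convention used above that visible points have z between −n and −f with n, f > 0; the struct and function names are illustrative.

    #include <cstdio>
    #include <initializer_list>

    // The three linear encodings named above, all linear functions of z.
    struct LinearDepths {
        float depth;      // -z: positive distance along the view axis
        float normalized; // (z + n)/(n - f): 0 at the near plane, 1 at the far plane
        float reversed;   // 1 - (z + n)/(n - f): 1 at near, 0 at far
    };

    LinearDepths encode(float z, float n, float f) {
        float normalized = (z + n) / (n - f);
        return { -z, normalized, 1.0f - normalized };
    }

    int main() {
        const float n = 1.0f, f = 1000.0f;  // illustrative near and far distances
        for (float z : { -1.0f, -500.0f, -1000.0f }) {
            LinearDepths d = encode(z, n, f);
            std::printf("z = %8.1f   -z = %7.1f   normalized = %.4f   reversed = %.4f\n",
                        z, d.depth, d.normalized, d.reversed);
        }
        return 0;
    }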
36.4 List-Priority Algorithms
The list-priority algorithms implicitly resolve visibility by rendering scene elements in priority order: occluded elements are given higher priority, so they are drawn first and are then hidden by the overdraw of the elements rendered after them. These algorithms were an important part of the development of real-time mesh rendering.
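The following is a minimal sketch of that idea, under the simplifying assumption that each scene element can be assigned a single representative depth (interpenetrating and cyclically overlapping elements, which the algorithms discussed here must handle, are ignored); the types and names are illustrative.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Primitive {
        const char* name;
        float viewDepth;  // positive distance from the camera
    };

    // Draw in back-to-front priority order: occluded (farther) primitives are drawn
    // first and then hidden by the overdraw of nearer primitives, so no explicit
    // per-sample visibility test is needed.
    void renderBackToFront(std::vector<Primitive> prims) {
        std::sort(prims.begin(), prims.end(),
                  [](const Primitive& a, const Primitive& b) {
                      return a.viewDepth > b.viewDepth;
                  });
        for (const Primitive& p : prims) {
            std::printf("draw %-9s (depth %6.1f)\n", p.name, p.viewDepth);  // stand-in for rasterization
        }
    }

    int main() {
        renderBackToFront({ {"mountains", 900.0f}, {"tree", 40.0f}, {"character", 10.0f} });
        return 0;
    }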
Today list-priority algorithms are employed infrequently because better alternatives are available. Spatial data structures can explicitly resolve visibility for ray casts. For rasterization, the memory for a depth buffer is now fast and inexpensive. In that sense, brute-force image-space visibility determination has come to dominate rasterization. But the depth buffer also supports an intelligent algorithmic choice. Early depth tests and early depth rendering passes avoid the inefficiency of overdrawing samples, and today's renderers spend significantly more time shading samples than resolving visibility for them because shading models have grown very sophisticated. So a list-priority visibility algorithm, which increases shading time through overdraw, makes the expensive part of rendering more expensive. Despite their currently limited application, we discuss three list-priority algorithms.
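To illustrate why moving visibility work ahead of shading pays off, the toy example below simulates an early depth test for the fragments falling on a single pixel: a fragment that fails the test is rejected before the expensive shading step ever runs. The one-pixel "framebuffer" and fragment fields are illustrative only.

    #include <cstdio>
    #include <limits>
    #include <vector>

    struct Fragment {
        float depth;      // distance from the camera
        int   materialId; // stand-in for whatever an expensive shader would need
    };

    int main() {
        // Fragments covering the same pixel, in submission order.
        std::vector<Fragment> frags = { {30.0f, 1}, {10.0f, 2}, {20.0f, 3}, {5.0f, 4} };

        float depthBuffer = std::numeric_limits<float>::infinity();
        int shaded = 0;

        for (const Fragment& f : frags) {
            if (f.depth >= depthBuffer) continue;  // early depth test: reject before shading
            depthBuffer = f.depth;
            ++shaded;                              // stand-in for an expensive shading invocation
            std::printf("shade material %d at depth %.1f\n", f.materialId, f.depth);
        }

        // Without the early test all four fragments would be shaded; a depth-only
        // prepass would reduce the count to one (only the nearest fragment survives).
        std::printf("shaded %d of %zu fragments\n", shaded, frags.size());
        return 0;
    }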
However, the refreshingly simple approach of resolving visibility implicitly by priority is a counterpoint to the relative complexity of something like hierarchical occlusion culling. There are also some isolated applications, especially graphics for nonraster output, where list priority may be the right approach. We
 
 