The depth buffer has proved to be a powerful solution for screen-space visibility determination. It is so powerful that not only has it been built into dedicated graphics circuitry since the 1990s, but it has also inspired many image-space techniques. Image space is a good place to solve many graphics problems because solving at the resolution of the output avoids excessive computation. In exchange for a constant factor of memory overhead, many algorithms can run in time proportional to the number of pixels and sublinear in, if not independent of, the scene complexity. That is a very good algorithmic tradeoff. Furthermore, geometric algorithms are susceptible to numerical instability as infinitely thin rays and planes pass near one another on a computer with finite precision. This makes rasterization and other image-space methods a more robust way of solving many graphics problems, albeit at the expense of aliasing and quantization in the result.
Inline Exercise 36.2: If there are T triangles in the scene and P pixels in the image, under what conditions on T and P would you expect image-space methods to be a good approach to visibility or related problems?
Inline Exercise 36.3: Image-space algorithms seem like a panacea. Describe a situation in which the discrete nature of image-space data makes it inappropriate for solving a problem.
36.3.1 Common Depth Buffer Encodings
Broadly speaking, there are two common choices for encoding depth: hyperbolic in camera-space z, and linear in camera-space z. Each has several variations for scaling conventions within the mapping. All have the property that they are monotonic, so the comparison z₁ < z₂ can be performed as m(z₁) < m(z₂) (perhaps with negation), and the inverse mapping is not necessary to implement correct visibility determination.
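As a concrete illustration, the sketch below implements one representative variant of each family, mapping a camera-space distance d between the near plane n and the far plane f to a value in [0, 1]. The function names and the particular [0, 1] scaling convention are assumptions made for this example; real APIs differ in sign, range, and matrix conventions, but every variant shares the monotonicity property described above.

#include <cstdio>

// Hyperbolic ("1/z") encoding, of the kind produced by a standard
// perspective projection followed by the homogeneous divide.
// Assumes n <= d <= f; maps n -> 0 and f -> 1.
float encodeHyperbolic(float d, float n, float f) {
    return (f * (d - n)) / (d * (f - n));
}

// Linear encoding, of the kind produced by interpolating camera-space
// depth directly. Also maps n -> 0 and f -> 1.
float encodeLinear(float d, float n, float f) {
    return (d - n) / (f - n);
}

int main() {
    const float n = 0.1f, f = 1000.0f;  // illustrative near and far planes
    // Because both mappings are monotonic in d, comparing encoded values
    // gives the same ordering as comparing camera-space distances.
    for (float d : {0.2f, 1.0f, 10.0f, 100.0f, 999.0f}) {
        std::printf("d = %8.2f  hyperbolic = %.6f  linear = %.6f\n",
                    d, encodeHyperbolic(d, n, f), encodeLinear(d, n, f));
    }
    return 0;
}

Running this makes the difference in precision distribution visible: the hyperbolic values crowd toward 1 for most of the depth range, while the linear values are spread uniformly.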
There are many factors to weigh in choosing a depth encoding. The operation count of encoding and decoding (for depth-based post-processing) may be significant. The underlying numeric representation, that is, floating point versus fixed point, affects how the mapping ultimately reduces to numeric precision. The dominant factor is often the relative amount of precision with respect to depth. This is because the accuracy of the visibility determination provided by a depth buffer is limited by its precision. If two surfaces are so close that their depths reduce to the same digital representation, then the depth buffer is unable to distinguish which is closer to the ray origin or camera. This means that the visibility determination will be arbitrarily resolved by primitive ordering or by small roundoff errors in the intersection algorithm. The resultant artifact is the appearance of individual samples with visibility results inconsistent with those of their neighbors. This is called z-fighting. Often z-fighting artifacts reveal the iteration order of the rasterizer or other intersection algorithm, which tends to cause regular patterns of small bias in depth. Different mappings and underlying numerical representations for depth vary the amount of precision throughout the scene. Depending on the kind of scene and rendering application, it may be desirable to have more precision close to the camera, uniform precision throughout, or possibly even high precision at some specific depth. Akeley and Su give an extensive and authoritative treatment [AS06] of this topic.