In ray tracing, each pixel is processed to completion before moving to the next, so this
involves running the entire visibility loop for one pixel, maintaining the shading
inputs for the closest-known intersection at each iteration, and then shading after
that loop terminates. In rasterization, each pixel may be processed many times (once
per triangle that covers it), so we have to make a complete first pass to determine
visibility and then a second pass to do shading. This is called an early-depth pass [HW96] if it primes depthBuffer
so that only the surface that shades will pass the inner test. The process is called
deferred shading if it also accumulates the shading parameters so that they do not
need to be recomputed. This style of rendering was first introduced by Whitted and
Weimer [WW82] to compute shading independently of visibility, at a time when
primary visibility computation was considered expensive. Within a decade it was
regarded as a way to accelerate complex rendering toward real-time performance
(and the term “deferred” was coined) [MEP92], and today its use is widespread
as a further optimization on hardware platforms that already achieve real time for
complex scenes.
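For concreteness, here is a minimal C++ sketch of that two-pass structure, not the renderer of this chapter: the Triangle type below is a constant-depth rectangle standing in for real per-pixel coverage and depth computation, and all of its names are illustrative assumptions. The first pass primes the depth buffer and the second shades only the surviving fragment; accumulating the shading inputs in the first pass instead of recomputing them in the second would turn this into deferred shading.

    #include <cstdio>
    #include <limits>
    #include <vector>

    // Stand-in for a projected triangle: a screen-space rectangle at constant
    // depth. A real rasterizer would derive per-pixel coverage and depth from
    // the triangle's vertices; this type exists only to make the sketch run.
    struct Triangle {
        int x0, y0, x1, y1;  // covered pixels: x0 <= x < x1, y0 <= y < y1
        float z;             // constant depth for this sketch
        int materialId;      // proxy for the (expensive) shading inputs
    };

    // Two-pass rasterization with an early-depth pass: pass 1 primes the depth
    // buffer with the closest depth at each pixel; pass 2 shades only the
    // fragment whose depth matches the primed value, so each pixel is shaded
    // once (ignoring depth ties).
    void renderWithEarlyDepth(const std::vector<Triangle>& tris, int w, int h) {
        std::vector<float> depthBuffer(w * h, std::numeric_limits<float>::infinity());
        std::vector<int> colorBuffer(w * h, 0);

        for (const Triangle& T : tris)                // pass 1: visibility only
            for (int y = T.y0; y < T.y1; ++y)
                for (int x = T.x0; x < T.x1; ++x)
                    if (T.z < depthBuffer[y * w + x])
                        depthBuffer[y * w + x] = T.z;

        for (const Triangle& T : tris)                // pass 2: shading
            for (int y = T.y0; y < T.y1; ++y)
                for (int x = T.x0; x < T.x1; ++x)
                    if (T.z == depthBuffer[y * w + x])
                        colorBuffer[y * w + x] = T.materialId;  // expensive shading goes here

        std::printf("material visible at pixel (3, 3): %d\n", colorBuffer[3 * w + 3]);
    }

    int main() {
        std::vector<Triangle> tris = {
            {0, 0, 8, 8, 5.0f, 1},   // far rectangle
            {1, 1, 6, 6, 2.0f, 2}    // nearer rectangle occludes part of it
        };
        renderWithEarlyDepth(tris, 8, 8);
        return 0;
    }

Without the first pass, the body of the shading loop would run once per covering triangle rather than once per pixel, which is exactly the cost difference analyzed in the next paragraph.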
For a scene that has high depth complexity (i.e., in which many triangles
project to the same point in the image) and an expensive shading routine, the
performance benefit of an early depth test is significant. The cost of rendering a
pixel without an early depth test is O(tv + ts), where t is the number of triangles, v
is the time for a visibility test, and s is the time for shading. This is an upper bound.
When we are lucky and always encounter the closest triangle first, the performance
matches the lower bound of Ω(tv + s) since we only shade once. The early-depth
optimization ensures that we are always in this lower-bound case. We have seen
how rasterization can drive the cost of v very low—it can be reduced to a few
additions per pixel—at which point the challenge becomes reducing the number
of triangles tested at each pixel. Unfortunately, that is not as simple. Strategies
exist for obtaining expected O(v log t + s) rendering times for scenes with certain
properties, but they significantly increase code complexity.
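To make these bounds concrete with illustrative (made-up) numbers: suppose t = 1000 triangles cover a pixel, a visibility test costs v ≈ 3 additions, and shading costs s ≈ 300 operations. In the worst case, shading every fragment that passes the depth test so far costs up to tv + ts = 3000 + 300,000 operations at that pixel, while the early-depth version costs roughly 2tv + s ≈ 6300 (two visibility passes plus a single shade); the constant factor of 2 is absorbed into the Ω(tv + s) bound.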
15.8.4 When Early Optimization Is Good
The domain of graphics raises two time-based exceptions to the general rule of
thumb to avoid premature optimization. The more significant of these excep-
tions is that when low-level optimizations can accelerate a rendering algorithm
just enough to make it run at interactive rates, it might be worth making those
optimizations early in the development process. It is much easier to debug an
interactive rendering system than an offline one. Interaction allows you to quickly
experiment with new viewpoints and scene variations, effectively giving you a
true 3D perception of your data instead of a 2D slice. If that lets you debug faster,
then the optimization has increased your ability to work with the code despite the
added complexity. The other exception applies when the render time is just at the
threshold of your patience. Most programmers are willing to wait 30 seconds
for an image to render, but they will likely leave the computer or switch tasks
if the render time is, say, more than two minutes. Every time you switch tasks
or leave the computer, you amplify the time cost of debugging, because on
your return you have to recall what you were doing before you left and get back
into the development flow. If you can reduce the render time to something you are
willing to wait for, then you have cut your debugging time and made the process
sufficiently more pleasant that your productivity will again rise despite increased
code complexity. We enshrine these ideas in a principle: