THE EARLY OPTIMIZATION PRINCIPLE: It's worth optimizing early if it makes the difference between an interactive program and one that takes several minutes to execute. Shortening the debugging cycle and supporting interactive testing are worth the extra effort.
15.8.5 Improving the Asymptotic Bound
To scale to truly large scenes, no linear-time rendering algorithm suffices. We must somehow eliminate whole parts of the scene without actually touching their data even once. Data structures for this are a classic area of computer graphics that continues to be a hot research topic. The basic idea behind most of these is the same as behind using tree and bucket data structures for search and sort problems. Visibility testing is primarily a search operation, where we are searching for the closest ray intersection with the scene. If we precompute a treelike data structure that orders the scene primitives in some way that allows conservatively culling a constant fraction of the primitives at each level, we will approach O(log n)-time visibility testing for the entire scene, instead of O(n) in the number of primitives. When the cost of traversing tree nodes is sufficiently low, this strategy scales well for arbitrarily constructed scenes and allows an exponential increase in the number of primitives we can render in a fixed time. For scenes with specific kinds of structure we may be able to do even better. For example, say that we could find an indexing scheme or hash function that divides our scene into O(n) buckets that allow conservative culling with O(1) primitives per bucket. This would approach O(d)-time visibility testing in the distance d to the first intersection. When that distance is small (e.g., in twisty corridors), the runtime of this scheme for static scenes becomes independent of the number of primitives and we can theoretically render arbitrarily large scenes. See Chapter 37 for a detailed discussion of algorithms based on these ideas.
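To make the tree idea concrete, here is a minimal sketch of conservative culling over a bounding-volume hierarchy. The names (BVHNode, hitsBox, closestHit) and the use of spheres as stand-in primitives are our own illustration, not the chapter's code; Chapter 37 discusses real spatial data structures in depth.

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <memory>
    #include <vector>

    struct Vec3 { float v[3]; };

    struct Ray {
        Vec3 origin;
        Vec3 dir;     // unit-length direction
        Vec3 invDir;  // componentwise 1/dir (nonzero components assumed, for brevity)
    };

    struct Box { Vec3 lo, hi; };

    const float kInf = std::numeric_limits<float>::infinity();

    // Slab test: does the ray pass through the box somewhere in (0, tMax)?
    bool hitsBox(const Ray& r, const Box& b, float tMax) {
        float t0 = 0.0f, t1 = tMax;
        for (int a = 0; a < 3; ++a) {
            float tNear = (b.lo.v[a] - r.origin.v[a]) * r.invDir.v[a];
            float tFar  = (b.hi.v[a] - r.origin.v[a]) * r.invDir.v[a];
            if (tNear > tFar) std::swap(tNear, tFar);
            t0 = std::max(t0, tNear);
            t1 = std::min(t1, tFar);
            if (t0 > t1) return false;  // box missed: cull everything inside it
        }
        return true;
    }

    // Spheres stand in for triangles to keep the sketch self-contained.
    struct Primitive { Vec3 center; float radius; };

    // Nearest positive ray parameter t, or infinity on a miss.
    float intersect(const Ray& r, const Primitive& s) {
        float b = 0.0f, c = -s.radius * s.radius;
        for (int a = 0; a < 3; ++a) {
            float oc = r.origin.v[a] - s.center.v[a];
            b += oc * r.dir.v[a];
            c += oc * oc;
        }
        float disc = b * b - c;
        if (disc < 0.0f) return kInf;
        float t = -b - std::sqrt(disc);
        return t > 0.0f ? t : kInf;
    }

    struct BVHNode {
        Box bounds;
        std::unique_ptr<BVHNode> left, right;  // both null at a leaf
        std::vector<Primitive> prims;          // populated only at leaves
    };

    // Subtrees whose bounding boxes the ray misses are skipped without
    // touching any of their primitives; for a well-balanced tree over n
    // primitives this yields the O(log n) behavior described above.
    float closestHit(const Ray& r, const BVHNode& node, float tBest) {
        if (!hitsBox(r, node.bounds, tBest)) return tBest;  // conservative cull
        if (!node.left) {                                   // leaf node
            for (const Primitive& p : node.prims)
                tBest = std::min(tBest, intersect(r, p));
            return tBest;
        }
        tBest = closestHit(r, *node.left, tBest);
        return closestHit(r, *node.right, tBest);
    }

A production traversal would visit the two children in front-to-back order along the ray so that tBest shrinks as early as possible and culls more aggressively; we omit that refinement here for brevity.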
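The bucket idea can likewise be sketched with a uniform grid, one standard instance of such an indexing scheme (the chapter itself leaves the structure abstract). Each cell lists the few primitives overlapping it, and the ray visits cells in order along its length using the classic Amanatides-Woo incremental traversal, so the work done is proportional to the distance d marched before the first hit. The names Grid and cellHasHit are illustrative, not from the text.

    #include <cmath>
    #include <limits>
    #include <vector>

    struct Grid {
        int n;                       // n x n x n cells
        float cellSize;              // world-space edge length of one cell
        std::vector<char> occupied;  // stand-in for per-cell primitive lists

        bool cellHasHit(int x, int y, int z) const {
            // Real code would intersect the ray against the O(1) primitives
            // stored in this cell; here we just report occupancy.
            return occupied[(z * n + y) * n + x] != 0;
        }
    };

    // March cell to cell until a hit or until the ray leaves the grid.
    // Assumes the ray origin starts inside the grid, for brevity.
    bool traceThroughGrid(const Grid& g,
                          float ox, float oy, float oz,    // origin
                          float dx, float dy, float dz) {  // direction
        const float inf = std::numeric_limits<float>::infinity();
        float o[3] = {ox, oy, oz}, d[3] = {dx, dy, dz};
        int   ix[3], step[3];
        float tMax[3], tDelta[3];

        for (int a = 0; a < 3; ++a) {
            ix[a]   = int(std::floor(o[a] / g.cellSize));
            step[a] = d[a] > 0 ? 1 : -1;
            if (d[a] == 0) { tMax[a] = inf; tDelta[a] = inf; continue; }
            float boundary = (ix[a] + (d[a] > 0 ? 1 : 0)) * g.cellSize;
            tMax[a]   = (boundary - o[a]) / d[a];  // t of next cell boundary
            tDelta[a] = g.cellSize / std::fabs(d[a]);
        }
        while (ix[0] >= 0 && ix[0] < g.n &&
               ix[1] >= 0 && ix[1] < g.n &&
               ix[2] >= 0 && ix[2] < g.n) {
            if (g.cellHasHit(ix[0], ix[1], ix[2])) return true;
            // Step along whichever axis reaches its next boundary first.
            int a = (tMax[0] < tMax[1])
                      ? (tMax[0] < tMax[2] ? 0 : 2)
                      : (tMax[1] < tMax[2] ? 1 : 2);
            ix[a]   += step[a];
            tMax[a] += tDelta[a];
        }
        return false;  // left the grid without a hit
    }

Each loop iteration advances exactly one cell, so a ray that terminates after k cells does O(k) work no matter how many primitives the scene holds elsewhere, which is the O(d) behavior claimed above.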
15.9 Discussion
Our goal in this chapter was not to say, "You can build either a ray tracer or a rasterizer," but rather to show that rendering involves sampling light sources, objects, and rays, and that there are broad algorithmic strategies for accumulating samples and interpolating among them. This sets the stage for all future rendering, in which we try to select samples efficiently and with good statistical characteristics.
For sampling the scene along eye rays through pixel centers, we saw that three tests were mathematically equivalent: explicit 3D ray-triangle intersection, 2D ray-triangle intersection via incremental barycentric tests, and 2D ray-triangle intersection via incremental edge-equation tests. We also discussed how to implement them so that the mathematical equivalence is preserved even in the context of bounded-precision arithmetic. In each case we computed some value directly related to the barycentric weights and then tested whether the weights corresponded to a point in the interior of the triangle. It is essential that these tests be mathematically equivalent; were they not, we would not expect all the methods to produce the same image! Algorithmically, however, these approaches led to very different strategies.
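As a concrete instance of that shared pattern, here is a sketch of a 3D ray-triangle test in the standard Moller-Trumbore style: it computes values proportional to the barycentric weights u and v of the candidate hit and accepts only when they describe an interior point. This is a common textbook formulation, not the chapter's exact code, and it uses a naive epsilon rather than the bounded-precision care discussed above.

    #include <cmath>
    #include <optional>

    struct V3 { float x, y, z; };
    static V3 sub(V3 a, V3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static V3 cross(V3 a, V3 b)  { return {a.y*b.z - a.z*b.y,
                                           a.z*b.x - a.x*b.z,
                                           a.x*b.y - a.y*b.x}; }
    static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Returns the ray parameter t of the hit, or nothing on a miss.
    std::optional<float> rayTriangle(V3 orig, V3 dir, V3 p0, V3 p1, V3 p2) {
        V3 e1 = sub(p1, p0), e2 = sub(p2, p0);
        V3 q  = cross(dir, e2);
        float a = dot(e1, q);                // near zero: ray parallel to plane
        if (std::fabs(a) < 1e-8f) return std::nullopt;
        float inv = 1.0f / a;
        V3 s = sub(orig, p0);
        float u = inv * dot(s, q);           // barycentric weight of p1
        if (u < 0.0f || u > 1.0f) return std::nullopt;
        V3 r = cross(s, e1);
        float v = inv * dot(dir, r);         // barycentric weight of p2
        if (v < 0.0f || u + v > 1.0f) return std::nullopt;  // outside triangle
        float t = inv * dot(e2, r);
        if (t <= 0.0f) return std::nullopt;  // intersection behind the origin
        return t;                            // weight of p0 is 1 - u - v
    }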