Admittedly, the relationship between ray casting and physics at the level demonstrated here is somewhat tenuous. Real photons propagate along rays from the light source to a surface to an eye, and we traced that path backward. Real photons don't all scatter into the camera. Most photons from the light source scatter away from the camera, and much of the light that is scattered toward the camera from a surface didn't arrive at that surface directly from the light. Nonetheless, an algorithm for sampling light along rays is a very good starting point for sampling photons, and it matches our intuition about how light should propagate. You can probably imagine improvements that would better model the true scattering behavior of light. Much of the rest of this chapter is devoted to such models.
In the next section, we invert the nesting order of the loops to yield a rasterizer algorithm, and then explore the implications of that change. Because we already have a working ray tracer to compare against, we can easily test the correctness of our changes by comparing against the ray-traced image and its intermediate results, and we have a standard against which to measure the properties of the new algorithm. As you read the following section and implement the program that it describes, consider how the changes you are making affect code clarity, modularity, and efficiency. Consider efficiency in both a wall-clock-time and an asymptotic-run-time sense. Think about applications for which rasterization is a better fit than ray casting, and vice versa.
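To make the inversion concrete before we study it in detail, the following sketch contrasts the two loop nestings over a toy scene. It is not the book's actual listings: `Surface`, `hitDistance`, and the two function names are hypothetical stand-ins, with per-pixel hit distances precomputed so that the loop structure itself is the only difference.

```cpp
#include <vector>
#include <limits>

const float INF = std::numeric_limits<float>::infinity();

// Hypothetical stand-in for a triangle: each surface reports its hit distance
// for the eye ray through each pixel, or INF if that ray misses it.
struct Surface {
    int id;
    std::vector<float> hitDistance; // one entry per pixel
};

// Ray-casting order: outer loop over pixels, inner loop over surfaces.
// The closest-hit state for the current pixel fits in a single float.
std::vector<int> rayCastOrder(const std::vector<Surface>& scene, int numPixels) {
    std::vector<int> image(numPixels, -1);
    for (int p = 0; p < numPixels; ++p) {
        float closest = INF;
        for (const Surface& s : scene) {
            float d = s.hitDistance[p];
            if (d < closest) { closest = d; image[p] = s.id; }
        }
    }
    return image;
}

// Rasterizing order: outer loop over surfaces, inner loop over pixels.
// Because no pixel is finished until the last surface is processed, the
// closest-hit state must now be kept for every pixel at once, in a depth buffer.
std::vector<int> rasterizeOrder(const std::vector<Surface>& scene, int numPixels) {
    std::vector<int> image(numPixels, -1);
    std::vector<float> depthBuffer(numPixels, INF);
    for (const Surface& s : scene) {
        for (int p = 0; p < numPixels; ++p) {
            float d = s.hitDistance[p];
            if (d < depthBuffer[p]) { depthBuffer[p] = d; image[p] = s.id; }
        }
    }
    return image;
}
```

Both orders visit the same pixel-surface pairs and keep the nearest hit, so they produce identical images; only the lifetime and size of the intermediate state differ.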
These issues are not restricted to our choice of the outer loop. All high-
performance renderers subdivide the scene and the image in sophisticated ways.
The implementer must choose how to make these subdivisions and for each must
again revisit whether to iterate first over pixels (i.e., ray directions) or triangles.
The same considerations arise at every level, but they are evaluated differently
based on the expected data sizes at that level and the machine architecture.
15.6 Rasterization
We now implement the rasterizing renderer and compare it to the ray-casting renderer, observing where each is more efficient and how restructuring the code allows for these efficiencies. The relatively tiny change turns out to have substantial impact on computation time, communication demands, and cache coherence.
15.6.1 Swapping the Loops
Listing 15.22 shows an implementation of rasterize that corresponds closely to rayTrace with the nesting order inverted. The immediate implication of inverting the loop order is that we must store the distance to the closest known intersection at each pixel in a large buffer (depthBuffer), rather than in a single float. This is because we no longer process a single pixel to completion before moving to another pixel, so we must store the intermediate processing state. Some implementations store the depth as a distance along the z-axis, or as the inverse of that distance. We choose to store distance along an eye ray to more closely match the ray-caster structure.
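The three depth encodings just mentioned can be compared directly for a single camera-space point. The names `DepthEncodings` and `encodeDepth` below are illustrative assumptions, as is the convention that the camera sits at the origin looking down the negative z-axis:

```cpp
#include <cmath>

// Three common ways to encode the depth of a camera-space intersection
// point P = (x, y, z), for a camera at the origin looking down -z.
struct DepthEncodings {
    float alongRay;  // Euclidean distance from the eye: matches a ray caster
    float zDepth;    // distance along the z-axis (visible points have z < 0)
    float inverseZ;  // reciprocal of zDepth
};

DepthEncodings encodeDepth(float x, float y, float z) {
    DepthEncodings e;
    e.alongRay = std::sqrt(x * x + y * y + z * z);
    e.zDepth   = -z;
    e.inverseZ = 1.0f / e.zDepth;
    return e;
}
```

For a point off the optical axis, the along-ray distance exceeds the z-depth; the two agree only on the axis itself, which is why the choice of encoding matters when comparing against ray-caster output.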
The same intermediate-state problem arises for the ray R. We could create a buffer of rays, but in practice rays are fairly cheap to recompute and don't justify the storage; we will soon see alternative methods for eliminating the per-pixel ray computation altogether.
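To see why storing rays buys little, here is one possible sketch of recomputing an eye-ray direction on demand for a pinhole camera. The function name, `Vec3` type, and field-of-view parameterization are illustrative assumptions, not the book's camera model:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Recompute the unit eye-ray direction through the center of pixel (x, y)
// for a pinhole camera at the origin looking down -z, with a given vertical
// field of view (radians) and an image of width x height pixels.
Vec3 eyeRayDirection(int x, int y, int width, int height, float fovY) {
    const float aspect = float(width) / float(height);
    const float side = std::tan(fovY * 0.5f);
    // Map the pixel center onto the image plane at z = -1.
    float u = ((x + 0.5f) / width * 2.0f - 1.0f) * side * aspect;
    float v = (1.0f - (y + 0.5f) / height * 2.0f) * side;
    Vec3 d{u, v, -1.0f};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return Vec3{d.x / len, d.y / len, d.z / len};
}
```

A handful of multiplies, a tangent, and a normalization per pixel is far cheaper than the memory traffic a full ray buffer would incur, which is the trade-off the text alludes to.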