(BSP) tree and the hierarchical depth buffer, simultaneously address both efficient
and exact visibility by incorporating conservative tests into their iteration mecha-
nism. But often a good strategy is to combine a conservative visibility strategy for
efficiency with a precise one for correctness.
THE CULLING PRINCIPLE: It is often efficient to approach a problem with one or more fast, conservative solutions that narrow the space by culling obviously incorrect values, and a slow but exact solution that then needs to consider only the few remaining possibilities.
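The culling principle can be made concrete with a minimal sketch (not from the text; the function names and the bounding-sphere choice are illustrative assumptions): a cheap, conservative bounding-sphere test rejects triangles a ray cannot possibly hit, and the slower exact ray-triangle test runs only on the survivors.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Conservative test: can the ray possibly hit this bounding sphere?
// Fast, and may return true for a miss -- but never false for a hit.
bool mayHitBoundingSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
    Vec3 oc = sub(center, origin);
    double t = dot(oc, dir);  // dir is assumed to be unit length
    Vec3 closest = {origin.x + t*dir.x, origin.y + t*dir.y, origin.z + t*dir.z};
    Vec3 d = sub(center, closest);
    return dot(d, d) <= radius * radius;
}

// Exact ray-triangle intersection (Moller-Trumbore): slower, so it runs
// only on the primitives that survive the conservative cull.
bool hitsTriangle(Vec3 origin, Vec3 dir, Vec3 a, Vec3 b, Vec3 c) {
    Vec3 e1 = sub(b, a), e2 = sub(c, a);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < 1e-12) return false;  // ray parallel to triangle plane
    double inv = 1.0 / det;
    Vec3 s = sub(origin, a);
    double u = dot(s, p) * inv;
    if (u < 0 || u > 1) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0 || u + v > 1) return false;
    return dot(e2, q) * inv > 0;  // intersection must lie in front of the ray
}
```

The conservative test never culls a true hit, so chaining the two preserves exactness while skipping most of the expensive work.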
Primary visibility tells us which surfaces emit or scatter light toward a cam-
era. They are the “last bounce” locations under light transport and are the only
surfaces that directly affect the image. However, keep in mind that a global illu-
mination renderer cannot completely eliminate the points that are invisible to the
camera. This is because even though a surface may not directly scatter light toward
the camera, it may still affect the image. Figure 36.1 shows an example in which
removing a surface that is invisible to the camera changes the image, since that
surface casts light onto surfaces that are visible to the camera. Another example is
a shadow caster that is not visible, but casts a shadow on points that are visible to
the camera. Removing the shadow caster from the entire rendering process would
make the shadow disappear. So, primary visibility is an important subproblem that
can be tackled with visibility determination algorithms, but it is not the only place
where we will need to apply those algorithms.
This indirect influence on the image from points not visible to the camera is why we define exact visibility as a property that we can test between any pair of points, not just between the camera and a scene point. A rendering algorithm that incorporates global illumination must consider the visibility of each segment of a transport path from the light source, through the scene, to the camera. Often the same algorithms and data structures can be applied to both primary and indirect visibility. For example, the shadow map from Chapter 15 is equivalent to a depth buffer for a virtual camera placed at a light source.
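A pairwise visibility function can be sketched with the same ray-cast machinery used for primary visibility (a minimal illustration, not the text's implementation; `segmentHits` and `visible` are hypothetical names): Q is visible from P exactly when no occluder crosses the open segment between them.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Tri { Vec3 a, b, c; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Does the segment from P to Q cross triangle t? Parameterize the segment
// as P + s*(Q - P) with s strictly between 0 and 1, then run a standard
// ray-triangle test on the unnormalized direction Q - P.
bool segmentHits(Vec3 P, Vec3 Q, const Tri& t) {
    const double eps = 1e-9;
    Vec3 d = sub(Q, P);
    Vec3 e1 = sub(t.b, t.a), e2 = sub(t.c, t.a);
    Vec3 p = cross(d, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < 1e-12) return false;
    double inv = 1.0 / det;
    Vec3 s = sub(P, t.a);
    double u = dot(s, p) * inv;
    if (u < 0 || u > 1) return false;
    Vec3 q = cross(s, e1);
    double v = dot(d, q) * inv;
    if (v < 0 || u + v > 1) return false;
    double sParam = dot(e2, q) * inv;
    return sParam > eps && sParam < 1 - eps;  // strictly between the endpoints
}

// The visibility function: Q is visible from P iff no occluder blocks
// the segment PQ. Nothing here refers to a camera, so the same query
// serves primary visibility, shadowing, and indirect transport paths.
bool visible(Vec3 P, Vec3 Q, const std::vector<Tri>& occluders) {
    for (const Tri& t : occluders)
        if (segmentHits(P, Q, t)) return false;
    return true;
}
```

Placing P at a light source and Q at a shaded point turns this directly into a shadow query, which is the ray-cast analogue of the shadow-map test.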
There are of course nonrendering applications of algorithms originally intro-
duced for visibility determination. Collision detection for the simulation of fast-
moving particles like bullets and raindrops is often performed by tracing rays as
if they were photons. Common modeling intersection operations such as cutting
one shape out of another are closely related to classic visibility algorithms for
subdividing surfaces along occlusion lines.
The motivating examples throughout this chapter emphasize primary visibil-
ity. That's because it is perhaps the most intuitive to consider, and because the
camera's center of projection is often the single point that appears in the most vis-
ibility tests. For each example, consider how the same principles apply to general
visibility tests. As you read about each data structure, think in particular about how
many visibility tests at a point are required to amortize the overhead of building
that data structure.
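The amortization question can be framed with a simple cost model (illustrative only; the numbers and the function name are assumptions, not from the text): building a data structure costs B, after which each visibility test costs q instead of the brute-force cost Q. Building pays off once n * Q exceeds B + n * q, that is, once n > B / (Q - q).

```cpp
#include <cassert>

// Smallest number of visibility queries n for which building the data
// structure wins: the first n with n*Q > B + n*q, i.e., n > B / (Q - q).
// B = one-time build cost, Q = brute-force cost per query, q = accelerated
// cost per query (assumes q < Q). All costs in the same arbitrary units.
long breakEvenQueries(double B, double Q, double q) {
    return static_cast<long>(B / (Q - q)) + 1;
}
```

For example, a structure that costs 1000 units to build and cuts the per-query cost from 10 units to 1 pays for itself after 112 queries, so it is a poor fit for a handful of tests but a clear win across a full image of rays.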
In this chapter, we first present a modern view of visibility following the light
transport literature. We formally frame the visibility problem as an intersection
query for a ray (“What does this ray hit first?”) and as a visibility function on pairs
of points (“Is Q visible from P ?”). We then describe algorithms that can amortize
that computation when it is performed conservatively over whole primitives for
Figure 36.1: The yellow wall is illuminated only by light reflected from the hidden red polygon. Removing it will cause the yellow wall to be illuminated only by light from the blue surface.