paths from the point to each are defined by line segments of known length and
orientation. Projecting the 3D point into the image space of the shadow map gives
a 2D point. At that 2D point (or, more precisely, at a nearby one determined by
rounding to the sampling grid for the shadow map) we previously stored the distance
from the light to the first scene point, that is, the key information about the
line segment. If that stored distance is equal to the distance from the 3D point
to the 3D light source, then there must not have been any occluding surface and
our point is lit. If the stored distance is less, then the point is in shadow because the
light observes some other, shadow-casting point first along the ray. This depth test must
of course be conservative and approximate; we know there will be aliasing from
both 2D discretization of the shadow map and its limited precision at each point.
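The depth comparison just described can be sketched in a few lines. The following is a minimal illustration, not the book's implementation; the shadow-map layout, the `light_view_proj` mapping, and the `bias` value are assumptions made for this sketch:

```python
# Sketch of a shadow-map depth test. The bias term makes the comparison
# conservative, absorbing aliasing from grid discretization and from the
# limited depth precision stored at each sample.

def point_is_lit(shadow_map, light_view_proj, world_point, bias=1e-3):
    """Return True if world_point is lit according to the shadow map.

    shadow_map: 2D list of stored light-to-first-surface distances,
                indexed as shadow_map[y][x].
    light_view_proj: assumed helper mapping a 3D world point to
                     (x, y, depth), with x and y normalized to [0, 1].
    """
    x, y, depth = light_view_proj(world_point)
    h = len(shadow_map)
    w = len(shadow_map[0])
    # Round to the shadow map's sampling grid (the "nearby" 2D point).
    ix = min(w - 1, max(0, int(round(x * (w - 1)))))
    iy = min(h - 1, max(0, int(round(y * (h - 1)))))
    stored = shadow_map[iy][ix]
    # Lit if the stored first-hit distance is (approximately) our distance;
    # in shadow if something nearer to the light was recorded here.
    return depth <= stored + bias
```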
Although we motivated shadow maps in the context of rasterization, they may
be generated by or used to compute shadowing with both rasterization and ray
casting renderers. There are often reasons to prefer to use the same visibility
strategy throughout an application (e.g., the presence of efficient rasterization
hardware), but there is no algorithmic constraint that we must do so.
When using a shadow map with triangle rasterization, we can amortize the
cost of perspective projection into the shadow map over the triangle by performing
most of the computational work at the vertices and then interpolating the results.
The result must be interpolated in a perspective-correct fashion, of course. The
key is that we want to be perspective-correct with respect to the matrix that maps
points in world space to the shadow map, not to the viewport.
Recall the perspective-correct interpolation that we used for positions and
texture coordinates (see previous sidebar, which essentially relied on linearly
interpolating quantities of the form u/z and w = 1/z). If we multiply world-space
vertices by the matrix that transforms them into 2D shadow map coordinates but
do not perform the homogeneous division, then we have a value that varies linearly
in the homogeneous clip space of the virtual camera at the light that produces the
shadow map. In other words, we project each vertex into both the viewing
camera's and the light camera's homogeneous clip space. We next perform the
homogeneous division for the visible camera only and interpolate the four-component
homogeneous vector representing the shadow map coordinate in a perspective-correct
fashion in screen space. We then perform the perspective division for the
shadow map coordinate at each pixel, paying only for the division and not the
matrix product at each pixel. This allows us to transform to the light's
projective view volume once per vertex and then interpolate those coordinates using the
infrastructure already built for interpolating other elements. The reuse of a
general interpolation mechanism and the optimization of reducing transformations
naturally suggest that this approach is a good one for a hardware implementation
of the graphics pipeline. Chapter 38 discusses how some of these ideas manifest
in a particular graphics processor.
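The per-vertex/per-pixel split described above can be sketched as follows. This is an illustrative sketch, not the book's code: the matrix representation, the helper names, and the barycentric-weight interface are all assumptions:

```python
# Perspective-correct interpolation of homogeneous shadow-map coordinates.
# Per vertex: transform into the light's clip space (no division yet), then
# divide by the *viewing* camera's clip-space w. Per pixel: interpolate
# linearly in screen space, undo the camera division, and finally perform
# the light's homogeneous division -- a division, not a matrix product.

def mat_vec4(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def shadow_coord_at_pixel(light_matrix, world_verts, cam_w, bary):
    """world_verts: three 4-component world-space vertices.
    cam_w: the viewing camera's clip-space w at each vertex.
    bary: screen-space barycentric weights (b0, b1, b2) for the pixel.
    """
    # Per-vertex work: light-clip coordinates divided by the camera's w,
    # plus 1/w itself, all of which interpolate linearly in screen space.
    over_w = []
    for v, w in zip(world_verts, cam_w):
        s = mat_vec4(light_matrix, v)
        over_w.append([c / w for c in s] + [1.0 / w])
    # Per-pixel work: linear (screen-space) interpolation of those values.
    interp = [sum(b * vert[i] for b, vert in zip(bary, over_w))
              for i in range(5)]
    one_over_w = interp[4]
    s = [c / one_over_w for c in interp[:4]]  # undo the camera division
    # Homogeneous division for the light's projection, once per pixel.
    return (s[0] / s[3], s[1] / s[3], s[2] / s[3])
```

With an identity light matrix and unit camera w this reduces to ordinary linear interpolation, which is a useful sanity check; with differing w values it reproduces the perspective-correct result.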
15.6.6 Beyond the Bounding Box
A triangle touching O(n) pixels may have a bounding box containing O(n^2)
pixels. For triangles with all short edges, especially those with an area of about
one pixel, rasterizing by iterating through all pixels in the bounding box is very
efficient. Furthermore, the rasterization workload is very predictable for meshes
of such triangles, since the number of tests to perform is immediately evident
from the box bounds, and rectangular iteration is generally easier than triangular
iteration.
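The bounding-box iteration can be sketched minimally as below, assuming counterclockwise 2D vertices, pixel-center sampling, and half-plane edge tests; the helper names are hypothetical:

```python
# Bounding-box rasterization: test every pixel center inside the triangle's
# axis-aligned bounding box against all three edges. The number of tests is
# evident from the box bounds alone, which makes the workload predictable.

def edge(a, b, p):
    """Signed-area test: > 0 when p lies to the left of the edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize_bbox(v0, v1, v2):
    """Return integer pixel coords covered by a counterclockwise triangle."""
    xmin = int(min(v0[0], v1[0], v2[0]))
    xmax = int(max(v0[0], v1[0], v2[0]))
    ymin = int(min(v0[1], v1[1], v2[1]))
    ymax = int(max(v0[1], v1[1], v2[1]))
    covered = []
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            if (edge(v0, v1, p) >= 0 and
                    edge(v1, v2, p) >= 0 and
                    edge(v2, v0, p) >= 0):
                covered.append((x, y))
    return covered
```

For a small triangle the box is tight and few tests are wasted; for a long, thin diagonal triangle most of the O(n^2) box fails the edge tests, which is exactly the inefficiency this section's title points beyond.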
 
 