common; Chapter 20 discusses this in detail. The last is sometimes called cube
mapping, sphere mapping, or environment mapping depending on the specific
parameterization and application.
A tangent space is just a plane that is tangent to a surface at a point. A mesh's tangent space is undefined at edges and vertices. However, when the mesh has vertex normals there is an implied tangent space (the plane perpendicular to the vertex normal) at each vertex. The interpolated normals across faces (and edges) similarly imply tangent spaces at every point on the mesh. Many rendering algorithms depend on the orientation of a surface within its tangent plane. For example, a hair-rendering algorithm that models the hair as a solid “helmet” needs to know the orientation of the hair (i.e., which way it was combed) at every point on the surface. A tangent-space basis is one way to specify the orientation; it is simply a pair of linearly independent (and usually orthogonal and unit-length) vectors in the tangent plane. These can be interpolated across the surface of the mesh in the same way that shading normals are; of course, they may cease to be orthogonal and change length as they are interpolated, so it may be necessary to renormalize or even change their direction after interpolation to achieve the goals of a particular algorithm. Finding such a pair of vectors at every point of a closed surface is not always possible, as described in Chapter 25.
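To make the re-orthonormalization step concrete, here is a minimal C++ sketch, assuming simple value-type vectors; the Vec3 type and the function names are illustrative, not taken from the text or any particular library. It applies one Gram-Schmidt step to pull an interpolated tangent back into the plane perpendicular to the interpolated normal, then rebuilds the second basis vector with a cross product.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) { return (1.0f / std::sqrt(dot(v, v))) * v; }

// Rebuild an orthonormal tangent-space basis after interpolation.
// n is the interpolated shading normal; t is the interpolated tangent,
// which may have drifted out of the tangent plane and lost unit length.
void orthonormalizeBasis(Vec3 n, Vec3 t, Vec3& outT, Vec3& outB) {
    n = normalize(n);
    // Gram-Schmidt step: remove the component of t along n, renormalize.
    outT = normalize(t - dot(t, n) * n);
    // The bitangent completes a right-handed orthonormal basis.
    outB = cross(n, outT);
}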
14.5.1.5 Cached and Precomputed Information on the Mesh
The preceding section described properties that extend the mesh representation with additional per-vertex information. It is also common to precompute properties of the mesh and store them at vertices to speed later computation, such as curvature information (and the adjacency information that we have already seen). One can even evaluate arbitrary, expensive functions and then approximate their value at points within the mesh (or even within the volume contained by the mesh) by barycentric interpolation.
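As a sketch of this kind of lookup (reusing the Vec3 helpers from the previous sketch; the function names are again illustrative), the following computes barycentric coordinates from ratios of sub-triangle areas and uses them to approximate a stored per-vertex quantity at an interior point:

#include <array>

// Barycentric coordinates of a point p with respect to triangle
// (p0, p1, p2), from ratios of signed sub-triangle areas. Assumes p
// lies in (or has been projected into) the triangle's plane.
std::array<float, 3> barycentric(Vec3 p, Vec3 p0, Vec3 p1, Vec3 p2) {
    Vec3 n = cross(p1 - p0, p2 - p0);  // normal scaled by twice the area
    float denom = dot(n, n);
    float a = dot(cross(p1 - p, p2 - p), n) / denom;  // weight of p0
    float b = dot(cross(p2 - p, p0 - p), n) / denom;  // weight of p1
    return {a, b, 1.0f - a - b};
}

// Approximate a precomputed (possibly expensive) function at the point
// from its stored per-vertex values f0, f1, f2.
float interpolateVertexValue(float f0, float f1, float f2,
                             std::array<float, 3> w) {
    return w[0] * f0 + w[1] * f1 + w[2] * f2;
}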
Gouraud shading is an example. We compute and store direct illumination at vertices during the rendering of a frame, and interpolate these stored values across the interior of each face. This was once common practice for all rasterization renderers. Today it is used mainly in renderers whose triangles are small compared to a pixel, so that the interpolation loses no shading resolution. The micropolygon renderers popular in the film industry use this method, but they ensure that vertices are sufficiently dense in screen space by subdividing large polygons during rendering until each is smaller than a pixel [CCC87]. Per-pixel direct illumination is now considered sufficiently inexpensive because processor performance has grown faster than screen resolutions. However, it has not grown faster than scene complexity, so some algorithms still compute global illumination terms, such as ambient occlusion (an estimated reduction in brightness due to nearby geometry) or diffuse interreflection, at vertices [Bun05].
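A minimal sketch of the per-vertex evaluation step, assuming a single point light and Lambertian reflection (the Vertex layout and function name are hypothetical, and the Vec3 helpers are those from the first sketch); the colors it returns are exactly the per-vertex values the rasterizer then interpolates across each face:

#include <algorithm>

struct Vertex {
    Vec3 position;
    Vec3 normal;
};

// Direct illumination from one point light, evaluated once per vertex.
Vec3 shadeVertexLambert(const Vertex& v, Vec3 lightPos,
                        Vec3 lightColor, Vec3 albedo) {
    Vec3 toLight = normalize(lightPos - v.position);
    float lambert = std::max(0.0f, dot(normalize(v.normal), toLight));
    return {albedo.x * lightColor.x * lambert,
            albedo.y * lightColor.y * lambert,
            albedo.z * lightColor.z * lambert};
}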
The vertices of a mesh form a natural data structure for recording values that describe a piecewise linear approximation of an arbitrary function, as described in Chapter 9. The drawback of this approach is that other constraints on the modeling process may lead to a tessellation that is not ideal for representing the arbitrary function. For example, many meshes are created by artists with the goal of using the fewest triangles possible to reasonably approximate the silhouette of an object. Large, flat areas of the mesh will therefore contain few triangles. If we were to compute global illumination only at the vertices, we would find that the
 