Such a parameterization may also be used in the interpolation of coordinates in the artist-assigned texture-coordinate methods above.
- Use an algorithmic approach to break your surface into patches, each of which has little enough curvature that it can be mapped to the plane with low distortion, and then define multiple texture maps, one per patch. The cube-map approach to texturing a sphere is an example of this approach. Filtering such texture structures requires either looking past the edge of a patch into the adjacent patch, or using overlapping patches, as mathematicians do when they define manifold structures. The former approach uses textures efficiently, but involves algorithmic complications; the latter wastes some texture space, but simplifies filtering operations.
- Have an artist “paint” the texture directly onto the object, and develop a coordinate mapping (or several patches as above) as the artist does so. This approach is taken by Igarashi et al. in the Chameleon system [IC01]. In a closely related approach, the painted texture is stored in a three-dimensional data structure so that a point's world-space coordinates serve as its texture coordinates. These are used to index into the spatial data structure and find the texture value that's stored there. Detailed textures are stored by using a hierarchical spatial data structure like an octree: When the artist adds detail at a scale smaller than the current octree cell, the cell is subdivided to better capture the texture [DGPR02, BD02] (a minimal sketch of such an octree texture follows this list).
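
Here is a minimal sketch of such an octree texture. The class layout, the brush-radius subdivision rule, and the choice to store a single color per cell are illustrative assumptions, not the interfaces of [DGPR02, BD02].

```cpp
#include <array>
#include <memory>

struct Vec3 { float x, y, z; };
struct Color { float r = 0.5f, g = 0.5f, b = 0.5f; };

// One cell of an octree texture over the unit cube: a point's world-space
// coordinates act as its texture coordinates, and a cell subdivides when the
// artist paints detail smaller than the cell. (Illustrative sketch only.)
struct OctreeTexture {
    Color color;                                         // average color of this cell
    std::array<std::unique_ptr<OctreeTexture>, 8> kids;  // empty until subdivided

    static int octant(Vec3 p, Vec3 c) {   // child octant containing p
        return (p.x >= c.x) | ((p.y >= c.y) << 1) | ((p.z >= c.z) << 2);
    }
    static Vec3 childCenter(Vec3 c, float half, int i) {
        float q = half * 0.5f;
        return { c.x + ((i & 1) ? q : -q),
                 c.y + ((i & 2) ? q : -q),
                 c.z + ((i & 4) ? q : -q) };
    }

    // Paint at p with a brush of radius r; subdivide until the cell edge is
    // no larger than the brush, so finer strokes create deeper cells. (For
    // brevity this touches only the cell containing p; a real brush would
    // visit every cell within the radius.)
    void paint(Vec3 p, Color paintColor, float r,
               Vec3 center = {0.5f, 0.5f, 0.5f}, float half = 0.5f) {
        if (2.0f * half <= r) { color = paintColor; return; }
        int i = octant(p, center);
        if (!kids[i]) {                       // new child inherits coarse color
            kids[i] = std::make_unique<OctreeTexture>();
            kids[i]->color = color;
        }
        kids[i]->paint(p, paintColor, r, childCenter(center, half, i), half * 0.5f);
    }

    // Look up the texture value at p: descend to the deepest painted cell.
    Color lookup(Vec3 p, Vec3 center = {0.5f, 0.5f, 0.5f}, float half = 0.5f) const {
        int i = octant(p, center);
        if (!kids[i]) return color;
        return kids[i]->lookup(p, childCenter(center, half, i), half * 0.5f);
    }
};
```

Because an unpainted child inherits its parent's color, coarse strokes stay cheap while fine strokes deepen the tree only where the artist actually adds detail.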
Normal-vector projection onto a sphere: The texture coordinates assigned to a point $(x, y, z)$ with unit normal vector $\mathbf{n} = [n_x \; n_y \; n_z]^T$ are treated as a function of $n_x$, $n_y$, and $n_z$, either as the angular spherical polar coordinates of the point $(n_x, n_y, n_z)$ (the radial coordinate is always 1), or by using a cube-map texture indexed by $\mathbf{n}$.
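
For concreteness, here is a sketch of both conventions. The $(u, v)$ normalization and the cube-face numbering and orientation are assumptions of this sketch, since the text fixes no particular convention.

```cpp
#include <algorithm>
#include <cmath>

const float kPi = 3.14159265358979f;

// Convention 1: angular spherical polar coordinates of the unit normal
// (the radial coordinate is always 1). Returns u (longitude) and
// v (colatitude), each normalized to [0, 1].
void sphericalUV(float nx, float ny, float nz, float& u, float& v) {
    u = (std::atan2(ny, nx) + kPi) / (2.0f * kPi);
    v = std::acos(std::max(-1.0f, std::min(1.0f, nz))) / kPi;
}

// Convention 2: cube-map lookup indexed by n. Choose the face whose axis
// has the largest |component|, then project the other two components onto
// that face. Face numbering and (u, v) orientation are assumptions here;
// real APIs (e.g., OpenGL) fix their own per-face orientations.
void cubeMapUV(float nx, float ny, float nz, int& face, float& u, float& v) {
    float ax = std::fabs(nx), ay = std::fabs(ny), az = std::fabs(nz);
    float ma, sc, tc;  // major-axis magnitude, and the two projected components
    if (ax >= ay && ax >= az) { face = nx > 0 ? 0 : 1; ma = ax; sc = ny; tc = nz; }
    else if (ay >= az)        { face = ny > 0 ? 2 : 3; ma = ay; sc = nx; tc = nz; }
    else                      { face = nz > 0 ? 4 : 5; ma = az; sc = nx; tc = ny; }
    u = 0.5f * (sc / ma + 1.0f);  // map [-1, 1] onto [0, 1]
    v = 0.5f * (tc / ma + 1.0f);
}
```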
There are even generalizations in which the texture coordinates are not assigned to a point of an object, but instead are a function of the point and some other data. Environment mapping is one of these: In this case the texture coordinates are derived from the reflected eye vector on a mirrorlike surface.
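
A sketch of that derivation, assuming unit vectors: reflect the eye-to-surface direction about the normal, and use the result to index the environment map (e.g., with cubeMapUV from the sketch above).

```cpp
struct Vec3 { float x, y, z; };  // as in the earlier sketches

// Reflect the (unit) eye-to-surface direction d about the unit normal n:
// r = d - 2 (d . n) n. The reflected direction r then indexes the
// environment map to fetch the mirrored radiance.
Vec3 reflectDir(Vec3 d, Vec3 n) {
    float dn = d.x * n.x + d.y * n.y + d.z * n.z;
    return { d.x - 2.0f * dn * n.x,
             d.y - 2.0f * dn * n.y,
             d.z - 2.0f * dn * n.z };
}
```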
There are also generalizations in which the Noncommutativity principle is applied: Certain operations, like filtering multiple samples to estimate average radiance arriving at a sensor pixel, can be exchanged with other operations, like the reflection computation used in environment mapping, without introducing too much error. If you want to environment-map a nonmirror surface, you'll want to compute many scattered rays from the eye ray, look up arriving radiance for each of them in the environment map, multiply appropriately by a BRDF value and cosine, and average. You can instead start with a different environment map that at each location stores the average of many nearby samples from the original environment map (i.e., a blurred version of the original). You can then push one sample of this new map through the BRDF and cosine to get a radiance value: You've swapped the averaging of samples with the convolution of light against the BRDF and cosine. These operations do not, in fact, commute, so the answers you produce will generally be incorrect. But they are often, especially for almost-diffuse surfaces, quite good enough for most purposes, and they speed up the rendering substantially. Approaches like this are grouped under the general term reflection mapping.
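
The prefiltering step might be sketched as follows. The cosine-power lobe standing in for the BRDF-times-cosine factor, the coarse latitude-longitude sampling grid, and the scalar (single-channel) radiance are all simplifying assumptions of this sketch, not anything the text prescribes.

```cpp
#include <algorithm>
#include <cmath>
#include <functional>

struct Dir { float x, y, z; };                    // unit direction
using EnvMap = std::function<float(const Dir&)>;  // radiance arriving from a direction

// Convolve the environment map against a cosine-power lobe centered on r,
// i.e., average L(w) weighted by max(dot(w, r), 0)^shininess over the sphere
// on a coarse lat-long grid. Storing this value per texel, indexed by r,
// yields the blurred environment map described above.
float prefilteredRadiance(const EnvMap& env, const Dir& r, float shininess) {
    const int N = 64;                  // grid resolution; coarse, for brevity
    const float pi = 3.14159265f;
    float sum = 0.0f, wsum = 0.0f;
    for (int i = 0; i < N; ++i) {
        float theta = pi * (i + 0.5f) / N;         // colatitude
        float sinT = std::sin(theta), cosT = std::cos(theta);
        for (int j = 0; j < 2 * N; ++j) {
            float phi = 2.0f * pi * (j + 0.5f) / (2 * N);
            Dir w{sinT * std::cos(phi), sinT * std::sin(phi), cosT};
            float c = std::max(0.0f, w.x * r.x + w.y * r.y + w.z * r.z);
            float weight = std::pow(c, shininess) * sinT;  // lobe * solid-angle term
            sum += env(w) * weight;
            wsum += weight;
        }
    }
    return wsum > 0.0f ? sum / wsum : 0.0f;  // normalized (blurred) radiance
}
```

At render time, one lookup into the map built from prefilteredRadiance, indexed by the reflected eye direction, then replaces the per-pixel averaging of many scattered rays.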
 