point (u, v) as corresponding to the point that's 2πu of the way around
the torus in one direction, and 2πv around it in the other direction, with
the texture stretched over the whole torus. The texture coordinates define
a mapping from your object to the surface of a torus.
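As a concrete sketch of this torus interpretation, the map from (u, v) in the unit square to a 3D point might look like the following (the major and minor radii R and r are illustrative choices, not fixed by the text):

```python
import math

def torus_point(u, v, R=2.0, r=1.0):
    """Map texture coordinates (u, v) in the unit square to a point on a
    torus with major radius R and minor radius r (illustrative values).
    u sweeps 2*pi*u around the main ring; v sweeps 2*pi*v around the tube."""
    theta = 2.0 * math.pi * u   # angle around the main ring
    phi = 2.0 * math.pi * v     # angle around the tube
    x = (R + r * math.cos(phi)) * math.cos(theta)
    y = (R + r * math.cos(phi)) * math.sin(theta)
    z = r * math.sin(phi)
    return (x, y, z)
```

Note that (0, 0) and (1, 1) map to the same point, which is exactly why the texture values on opposite edges of the unit square must agree.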
For either interpretation to make sense, the texture-map values on the right-
hand side of the unit square must match up nicely with those on the left (and
similarly for the top and bottom), or else the texture will be discontinuous on the
u = 0 circle of the torus, and similarly for the v = 0 circle. Furthermore, any filter-
ing or image processing you do to the texture image must involve a “wraparound”
to blend values from the left with those from the right, and from top to bottom
as well.
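The wraparound requirement can be made concrete with modular indexing. Here is a minimal sketch of a periodic filter over a 2D grid of texture values; the box filter is just an illustrative choice of kernel, and any kernel would be indexed the same way:

```python
def wraparound_blur(img, radius=1):
    """Box-blur a 2D grid of values, wrapping indices so that the left
    edge blends with the right and the top with the bottom -- the kind of
    periodic filtering a torus texture needs.  A plain (clamped) blur
    would leave a visible seam at u = 0 and v = 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    n = (2 * radius + 1) ** 2
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    s += img[(i + di) % h][(j + dj) % w]  # modular wrap
            out[i][j] = s / n
    return out
```

With this indexing, a bright texel in one corner of the image bleeds into the diagonally opposite corner, as it should on the torus.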
The second generalization was already suggested by the second version of
bump mapping: The texture coordinates define a map of your object onto a surface
in some higher-dimensional space. In the case of bump mapping, each object point
is sent to a unit vector, that is, the codomain is the unit sphere within 3-space.
In general, the appropriate target for texture coordinates depends on their
intended use.
Here are a few more examples.
• You can use the actual coordinates of each point as texture coordinates
(perhaps scaling them to fit within a unit cube first). If you then generate a
cubical “image” that looks like marble or wood, and use the texture value as
the color at each point, you can make your object look as if it were carved
from marble or wood. In this case, your texture uses a lot of memory, but
only a small part of it is ever used to color the model. The codomain is
a cube in 3-space, but the image of the texture-coordinate map is just the
surface of your object, scaled to fit within this cube.
• A nonzero triple (u, v, w) of texture coordinates (typically a unit vector)
is converted to (u/t, v/t, w/t), where t = 2 max(|u|, |v|, |w|); the result is
a triple with one coordinate equal to ±1/2, and the other two in the range
[−1/2, 1/2], that is, a point on the face of the unit cube. Each one of the six
faces of the cube (corresponding to u, v, or w being +1/2 or −1/2) is
associated to its own texture map. This provides a texture on the unit sphere in
which the distortion between each texture-map “patch” and the sphere is
relatively small. This structure is called a cube map, and it is a standard
part of many graphics packages; it's the currently preferred way to specify
spherical textures. Alternatives, like the latitude-longitude parameteriza-
tion of the sphere, are useful in situations where the high distortion near
the poles is unimportant (as in the case of a world map, where the area near
the poles is all white).
In the event that the cube map needs to be regenerated often (e.g., if it's an
environment map generated by rendering a changing scene from the point
of view of the object), rendering the scene in six different views may be
more work than you want to do. A natural alternative is to make two
hemispherical renderings, recording light arriving from direction (x, y, z) at
position (u, v) = ((x + 1)/2, (z + 1)/2) in one image for y ≥ 0 and another for
y ≤ 0. Each of these renderings uses only π/4 ≈ 79% of the area of
the unit square, but they're very easy to compute and use. (An alternative
two-patch solution is the dual paraboloid of Exercise 20.4.)
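The cube-map conversion above can be sketched directly; the face-naming convention in this sketch is an illustrative assumption, not something fixed by the text:

```python
def cube_map_coords(u, v, w):
    """Project a nonzero direction (u, v, w) onto the unit cube as in the
    text: divide by t = 2 * max(|u|, |v|, |w|), so one coordinate becomes
    +/- 1/2 and the other two land in [-1/2, 1/2].  Returns the projected
    triple together with which of the six faces it hits (the face labels
    '+u', '-u', ... are a hypothetical naming convention)."""
    t = 2.0 * max(abs(u), abs(v), abs(w))
    p = (u / t, v / t, w / t)
    # The face is named after the coordinate that reached +/- 1/2.
    axis = max(range(3), key=lambda i: abs(p[i]))
    sign = '+' if p[axis] > 0 else '-'
    face = sign + 'uvw'[axis]
    return p, face
```

In a full cube map, the two coordinates that stayed in [−1/2, 1/2] would then be shifted into [0, 1] and used to index the texture image attached to that face.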