or other things (i.e., it was treated as ordinary memory). Thus, the particular
meaning of “texture” is rather time-dependent; when you read a paper on the
subject, you'll need to know when it was written to know what the term means.
A typical use of texture mapping is to make something that looks painted, like
a soft-drink can. First you make a 2D image I that looks like an unrolled version of
the vertical sides of the can (see Figure 20.1). Then you give the image coordinates
u and v that range from 0 to 1 in the horizontal and vertical directions. Next, you
model a cylinder, perhaps as a mesh of a few hundred polygons based on vertex
locations like
Figure 20.1: Texture image for soda can (Courtesy of Kefei Lei).

\[
P_{ij} = \left( r \cos\frac{2\pi i}{10},\ \frac{hj}{5},\ r \sin\frac{2\pi i}{10} \right)
\tag{20.1}
\]

where r is the can's radius, h is its height, i = 0, ..., 10, and j = 0, ..., 5 (see
Figure 20.2). A typical triangle might have vertices P_{11}, P_{12}, and P_{21}.
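A minimal sketch of building this vertex grid, directly from Equation (20.1); the function name is my own:

```python
import math

def can_vertices(r, h):
    """Vertex positions for the can's side, per Equation (20.1):
    P_ij = (r cos(2*pi*i/10), h*j/5, r sin(2*pi*i/10))."""
    return {
        (i, j): (r * math.cos(2 * math.pi * i / 10),
                 h * j / 5,
                 r * math.sin(2 * math.pi * i / 10))
        for i in range(11)   # i = 0, ..., 10
        for j in range(6)    # j = 0, ..., 5
    }

P = can_vertices(r=1.0, h=2.0)
# P[0, 0] and P[10, 0] lie at the same point in space (the can's
# "seam"), up to floating-point rounding.
```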
Now you also assign to each vertex so-called uv-coordinates: a u- and a v-value
at each vertex. In this example, the u-coordinate of vertex P_{ij} would be i/10 and
the v-coordinate would be j/5. Notice that the vertices P_{0,0} and P_{10,0} are in identical
locations (at the “seam” of the can), but they have different uv-“coordinates.”
Because coordinates should be unique to a point (at least in mathematics), it might
make more sense to call these uv-“values,” but the term “coordinates” is well
established. We will, however, refer henceforth to texture coordinates rather than
“uv-coordinates,” both because sometimes we use one or three rather than two
coordinates, and because the tying of concepts to particular letters of the alphabet
can be problematic, as in the case where a single mesh has two different sets of
texture coordinates assigned to it.
Figure 20.2: A wireframe rendering of the vertical surface of the soda can (Courtesy of Kefei Lei).
When it comes time to render a triangle, it gets rasterized, that is, broken
into tiny fragments, each of which will contribute to one pixel of the final result.
The coordinates of these fragments are determined by interpolating the vertex
coordinates, but at the same time, the renderer (or graphics card) interpolates the
texture coordinates. Each fragment of a triangle gets different texture coordinates.
During the rendering step in which a color is computed for the fragment, often
based on the incoming light, the direction to the eye, the surface normal, etc., as
in the Phong model of Chapter 6, the material color is one of the items needed in
the computation. Texture mapping is the process of using the texture coordinate
for the fragment to look up the material color in the image I rather than just using
a fixed color. Figure 20.3 shows the effect.
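The lookup step can be illustrated with a simple nearest-neighbor sketch; real renderers usually filter (bilinear, mipmapped) rather than snapping to a single texel, and the representation of the image here is an assumption for illustration:

```python
def texture_lookup(image, u, v):
    """Nearest-neighbor lookup of a texel in the image I for texture
    coordinates (u, v) in [0, 1]. `image` is a list of rows of colors;
    row 0 is taken to correspond to v = 0."""
    rows, cols = len(image), len(image[0])
    x = min(int(u * cols), cols - 1)   # clamp so u = 1.0 hits the last column
    y = min(int(v * rows), rows - 1)   # likewise for v = 1.0
    return image[y][x]

# A tiny 2x2 "image" whose texels are color names:
I = [["red", "green"],
     ["blue", "white"]]
texture_lookup(I, 0.9, 0.1)   # a fragment near the upper-right texel: "green"
```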
We've omitted many details from this brief description, including a step in
which fragments are further reduced to samples, but it conveys the essential idea,
which has been generalized in a great many ways.
A value (e.g., the color) associated to a fragment of a triangle is almost always
the result of a computation, one that has many parameters such as the incom-
ing light, the surface normal, the vector from the surface to the eye, the surface
color (or other descriptions of surface scattering like the bidirectional reflectance
distribution function or BRDF), etc. Ordinarily, many of these parameters either
are constants or are computed by interpolating values from the triangle's vertices.
If instead we barycentrically interpolate some texture coordinates from the tri-
angle vertices, these coordinates can be used as arguments to one or more func-
tions whose values are then used as the parameters. A typical function is “look
Figure 20.3: The sides of the soda can texture-mapped with the image (Courtesy of Kefei Lei).
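The barycentric interpolation of texture coordinates mentioned above can be sketched as follows. Note that this is the affine version only; perspective-correct interpolation, which rasterizers actually use for 3D triangles, also involves a division by depth:

```python
def interpolate_uv(bary, uv0, uv1, uv2):
    """Barycentrically interpolate per-vertex texture coordinates to a
    fragment. `bary` = (a, b, c), with a + b + c = 1, are the
    fragment's barycentric coordinates within the triangle; uv0, uv1,
    uv2 are the texture coordinates at the three vertices."""
    a, b, c = bary
    return (a * uv0[0] + b * uv1[0] + c * uv2[0],
            a * uv0[1] + b * uv1[1] + c * uv2[1])

# A fragment at the triangle's centroid gets (approximately) the
# average of the three vertex texture coordinates:
centroid_uv = interpolate_uv((1/3, 1/3, 1/3),
                             (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```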