Illumination and Shading (Basic Computer Graphics) Part 3

The Rendering Equation

Looking back over what has been covered with regard to illumination in this topic, we see lots of different formulas and approaches. Kajiya ([Kaji86]) attempted to unify the general illumination problem by expressing it in terms of finding a solution to a single equation that he called the rendering equation:

I(p,p′) = g(p,p′) [ e(p,p′) + ∫ ρ(p,p′,p″) I(p′,p″) dp″ ]        (9.12)

where

p and p′ are any two surface points, I(p,p′) is the intensity of light passing from p′ to p, g(p,p′) is a visibility term (which is 0 if p and p′ cannot see each other and inversely proportional to the square of the distance between the points otherwise), e(p,p′) is the intensity of light emitted from p′ to p, ρ(p,p′,p″) is related to the intensity of the light reflected toward p from the point p′ that arrived at p′ from the direction of p″, and the integration is over all surfaces in the scene.

Notice that (9.12) is a recursive equation, because the unknown function I appears on both sides. Also, each wavelength has its own equation (9.12). It can be shown ([WatW92]) that most of the illumination models discussed in this topic are approximations to the rendering equation. The rendering equation does not model everything, however; for example, it ignores diffraction and transparency.
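Because I appears on both sides, renderers in practice approximate the equation by recursive sampling. The following is a minimal Monte Carlo sketch of that recursion, not an implementation from the text; the scene object and its methods (g, e, rho, random_surface_point, total_area) are hypothetical stand-ins for the terms defined above.

```python
def radiance(p, p_prime, scene, depth=0, max_depth=4):
    """Monte Carlo sketch of equation (9.12):
    I(p,p') = g(p,p') [ e(p,p') + integral of rho(p,p',p'') I(p',p'') dp'' ]."""
    if depth >= max_depth:
        return 0.0                         # truncate the recursion
    g = scene.g(p, p_prime)                # visibility / distance term
    if g == 0.0:
        return 0.0                         # p and p' cannot see each other
    emitted = scene.e(p, p_prime)          # light emitted from p' to p
    # Estimate the integral by averaging over random surface points p''.
    n = 8
    acc = 0.0
    for _ in range(n):
        p2 = scene.random_surface_point()
        acc += scene.rho(p, p_prime, p2) * radiance(p_prime, p2, scene, depth + 1, max_depth)
    return g * (emitted + scene.total_area * acc / n)
```

Truncating the recursion at a fixed depth is the simplest termination rule; it corresponds to ignoring light that has bounced more than max_depth times.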



Figure 9.13. Synthetic textures.

Texture and Texture Mappings

Surfaces are usually not homogeneous with respect to any particular property, such as color or intensity, but they usually have a more or less uniform pattern that is called the visual texture. This pattern could be generated by physical texture, such as a rough wall surface, or by markings, as on wallpaper. Sometimes a collection of objects is viewed as one, as in the case of a brick wall, and then the pattern of each part determines the texture of the whole. Texture is a useful concept in understanding scenes. Without texture, pictures do not look right.

What exactly is meant by texture? The characteristics of synthetic textures are easiest to explain. Examples of these are shown in Figure 9.13. It is much harder in the case of natural phenomena such as sand, straw, and wood, but even there one finds some uniformity. One studies texture as a property of a pattern that is uniform in a statistical or structural sense. There is a nice discussion of texture in [Neva82]. We summarize a few of the main points.

Statistical Texture Measures. Such measures are motivated by the lack of a simple pattern. One looks for average properties that are invariant over a region. An example of this is the probability distribution of single-pixel attributes, such as the mean and variance of the intensity function. Another is the use of histograms of individual pixel attributes. Better yet, one can try to detect the presence of certain features, such as the density of edges, and then compute the mean, variance, and so on, of these to distinguish between “coarse” and “fine” textures. The Fourier transform has also been used to look for peaks, since textures are patterns. Using such measures one can generate symbolic descriptions like “bloblike,” “homogeneous,” “monodirectional,” or “bidirectional.” An example of a fancier measure is a function of the type

p(i, j, d, Θ) = the probability that a pair of pixels separated by a distance d in the direction Θ have intensities i and j.

Such measures have been used successfully for wood, grass, corn, water, etc.
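To make the last measure concrete, here is a small sketch (not from the text) that estimates p(i, j, d, Θ) for an image stored as a 2D list of integer intensities; representing the direction Θ as an integer pixel step (dx, dy) is an assumption of this sketch.

```python
def cooccurrence(img, d, dx, dy, levels=256):
    """Estimate p(i, j, d, theta): the probability that two pixels a distance
    d apart in the direction (dx, dy) have intensities i and j.
    img is a 2D list of integer intensities in [0, levels); it is assumed
    large enough that at least one pixel pair fits inside it."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            x2, y2 = x + d * dx, y + d * dy
            if 0 <= x2 < w and 0 <= y2 < h:    # pair lies inside the image
                counts[img[y][x]][img[y2][x2]] += 1
                total += 1
    # Normalize the pair counts to probabilities.
    return [[c / total for c in row] for row in counts]
```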


Figure 9.14. Texture mappings.

Structural Texture Measures. The idea here is to find primitives from which the totality is built. This clearly is hard to compute for natural textures. There may be a hierarchy: one pattern may repeat to form a larger pattern. One can think of structural texture in a syntactical way. The primitives are a little bit like sentences of a language with a specified grammar.

Texture is introduced into graphical images by means of texture mappings. This is a way to attach detail to surfaces without a geometric model for the detail, so that one can produce much more complex images without more complexity in the geometric descriptions. The idea was first used by Catmull and then extended by Blinn and Newell ([BliN76]). Heckbert ([Heck86]) presents a good survey of texture maps. See also [WeiD97]. In general, texture maps can be functions of one or more variables. We concentrate on the two-variable case here.

Assume that we are given a surface patch parameterized by a function p(u,v). In addition to each point p on the surface having (u,v)-coordinates, we now associate texture coordinates (s,t) and a predefined texture map T(s,t) defined on this texture coordinate space, which specifies the light intensity at the point p. If p projects to screen coordinates (x,y), then the value T(s,t) is copied to frame buffer location (x,y). Basically, we have a map Φ that sends (u,v) to (s,t). See the commutative diagram in Figure 9.14(a). Usually the map Φ is a linear map and T is represented by a two-dimensional array. Figure 9.14(b) shows how one can map a grid of lines onto a cylinder. The parameterization is assumed to be the map p given by

p(u,v) = (cos u, sin u, v)

with domain [0,2π] × [0,1]. The map Φ : [0,2π] × [0,1] → [0,1] × [0,1] is given by

Φ(u,v) = (u/2π, v).

If T is represented by an n × m two-dimensional array, then the intensity

T(⌊sn⌋, ⌊tm⌋)

would be associated to the pixel at (x,y). Another way to deal with repeated patterns like this is to predefine only a primitive part of the pattern and then get the repetition using the mod function, as in

Φ(u,v) = ((ku/2π) mod 1, (kv) mod 1).

For example, if k is 10, then we get a 10 x 10 grid on the patch.
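A small sketch of this pipeline, assuming the parameterization and maps reconstructed above; the array layout and the nearest-texel lookup are choices of the sketch, not prescriptions of the text.

```python
from math import pi

def phi(u, v, k=10):
    """Map cylinder parameters (u, v) in [0, 2*pi] x [0, 1] to texture
    coordinates (s, t), repeating the primitive pattern k times in each
    direction via the mod function."""
    return (k * u / (2 * pi)) % 1.0, (k * v) % 1.0

def texel(T, s, t):
    """Nearest-texel lookup T(floor(s*n), floor(t*m)) in an n x m array."""
    n, m = len(T), len(T[0])
    return T[min(int(s * n), n - 1)][min(int(t * m), m - 1)]
```

With k = 10 and T holding a single grid cell, this reproduces the 10 × 10 grid on the patch mentioned above.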

These examples show the essential idea behind texture mappings but assume a perfect mathematical model with all computations carried out to infinite precision. Implementing such an approach involves a lot of work, and care must be taken to avoid aliasing. If the rendering algorithm is a scan line algorithm, then for each screen coordinate (x,y) one needs to find the (u,v) so that p(u,v) projects to (x,y), which is time-consuming. Catmull ([Catm74]) subdivided the surface until each patch projected onto a single pixel. One could then map the center of each of the corresponding rectangles in (u,v)-space to texture space. Unfortunately, this straightforward approach leads to aliasing; in Figure 9.14 we might miss the grid lines. Aliasing is a serious problem for texture mappings. One solution is to sample at higher resolutions; another is to use filters before sampling. Catmull used the latter with a convolution filter. He also subdivided texture space at the same time, until each of its patches mapped onto a single pixel, and used the average of the texture values in that final texture patch. A better solution is the one found in [BliN76].
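The higher-resolution sampling option amounts to averaging several texture samples over the footprint a pixel covers in texture space. A minimal sketch follows; the footprint (ds, dt) and the sampler T_at are assumptions of the sketch.

```python
def supersampled_texel(T_at, s, t, ds, dt, n=4):
    """Box-filter a texture over the pixel footprint [s, s+ds] x [t, t+dt]
    by averaging an n x n grid of samples; T_at(s, t) returns the texture
    value at (s, t)."""
    acc = 0.0
    for i in range(n):
        for j in range(n):
            acc += T_at(s + (i + 0.5) * ds / n, t + (j + 0.5) * dt / n)
    return acc / (n * n)
```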

Another problem with the above is distortion of the texture if the parameterization is not chosen correctly. For example, if we parameterize the cylinder by a map that traverses the angle at a nonuniform rate, say

p(u,v) = (cos u², sin u², v),

then the grid pattern is no longer uniformly spaced on the cylinder. The problem is that the parameterization is not a similarity map. Few are. One simple approach that seems to be quite successful for spline surfaces is to use a chord-length approximation to the original parameterization. See [WooA98]. Bier and Sloan ([BieS86]) suggested another approach to alleviate the distortion problem. The idea is to define the texture for an intermediate surface I and then use a map μ from that surface to the target surface O. Four methods have been used to define the map μ : I → O. Let q = μ(p).

Method 1. This method computes μ⁻¹(q). If R is the ray starting at q that is the reflection of the ray from the eye to q, then p is the intersection of R with the intermediate surface. See Figure 9.15(a).

Method 2. This method also computes μ⁻¹(q). If R is the ray starting at q in the direction of the normal to the target surface at q, then p is the intersection of R with the intermediate surface. See Figure 9.15(b).

Method 3. This is yet another method that computes μ⁻¹(q). If R is the ray from the centroid of the target surface through q, then p is the intersection of R with the intermediate surface. See Figure 9.15(c).

Method 4. If R is the ray from p in the direction of the normal to the intermediate surface at p, then q is the intersection of R with the target surface. See Figure 9.15(d).


Figure 9.15. Texture mappings with intermediate surfaces.

Some intermediate surfaces that have been used are planes, the surfaces of boxes, spheres, and cylinders. Using intermediate surfaces that reflect the shape of the target surface, rather than always using a square or a sphere, is what avoids some of the distortion. Bier and Sloan refer to this approach as “shrink wrapping” a pre-distorted surface onto an object. One could of course eliminate all distortion by letting the intermediate surface be the target surface; however, the latter was presumably too complicated to have defined the texture on it directly. One has to walk a fine line between having relatively simple intermediate surfaces and minimizing distortion. Furthermore, the map μ or μ⁻¹ should not be too complicated.
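As an illustration, here is a sketch of Method 3 with a sphere as the intermediate surface; the tuple representation of points and the choice of the far intersection are assumptions of the sketch.

```python
def method3_preimage(q, centroid, center, radius):
    """Method 3: shoot a ray from the centroid of the target surface
    through the surface point q and intersect it with a sphere intermediate
    surface (center, radius). Returns the point p whose texture q receives.
    Points are (x, y, z) tuples."""
    d = tuple(qi - ci for qi, ci in zip(q, centroid))   # ray direction
    oc = tuple(ci - si for ci, si in zip(centroid, center))
    # Solve |centroid + t*d - center|^2 = radius^2 for t.
    a = sum(di * di for di in d)
    b = 2.0 * sum(di * oi for di, oi in zip(d, oc))
    c = sum(oi * oi for oi in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                          # ray misses the sphere
    t = (-b + disc ** 0.5) / (2.0 * a)       # take the far intersection
    return tuple(ci + t * di for ci, di in zip(centroid, d))
```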

One way to avoid the problems associated with texture maps that we mentioned above is to use three-dimensional texture maps. Specifically, we assign a texture T(x,y,z) to each world point (x,y,z). Then for each point p of an object in world coordinates we simply use the texture T(p). In this way textures can be mapped in a nice continuous way onto objects, and we have solved one of the main problems of two-dimensional texture maps. Of course, we need to be able to define such a map T(p). A table of values would now take up a prohibitive amount of space, so a procedural definition is called for, but such a definition is not always easy to find.
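A classic example of such a procedural definition is a “wood grain” function built from concentric rings; this particular formula is an illustration, not one given in the text.

```python
from math import pi, sin

def wood(x, y, z, rings=8.0):
    """Procedural 3D texture: concentric rings around the z-axis.
    Returns an intensity in [0, 1] for any world point (x, y, z)."""
    r = (x * x + y * y) ** 0.5          # distance from the z-axis
    return 0.5 + 0.5 * sin(2.0 * pi * rings * r)
```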

Aliasing can be a problem with textures. The most common solution is to use mip-maps. Mip-mapping was developed by Williams ([Will83]) specifically for textures. Instead of a single texture, one precomputes a sequence of textures, each at half the resolution of the previous one. One then selects the texture for a region of an object based on the region's distance from the viewer, so as to get the level of detail correct. For a more thorough description the reader can also see [WatW92] or [WatP98].
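A sketch of the precomputation, together with a crude level-selection rule; the distance thresholds used here are an assumption of the sketch (real renderers derive the level from the screen-space size of the texture footprint).

```python
def build_mipmaps(T):
    """Build a mip-map pyramid: each level halves the resolution of the
    previous one by averaging 2 x 2 blocks. T is a square 2D list of
    intensities whose side is a power of two."""
    levels = [T]
    while len(levels[-1]) > 1:
        prev, n = levels[-1], len(levels[-1]) // 2
        levels.append([[(prev[2*i][2*j] + prev[2*i][2*j+1] +
                         prev[2*i+1][2*j] + prev[2*i+1][2*j+1]) / 4.0
                        for j in range(n)] for i in range(n)])
    return levels

def select_level(levels, distance):
    """Pick a coarser level as the object recedes from the viewer."""
    level = 0
    while distance > 2.0 ** (level + 1) and level + 1 < len(levels):
        level += 1
    return levels[level]
```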

Environment Mappings

An environment mapping starts with a predefined picture on some closed surface that surrounds the entire world of objects and then maps this picture onto the objects. The difference between this and texture mappings is that the picture is mapped in a viewpoint-dependent way.


Figure 9.16. A spherical environment mapping.

As an example, consider Figure 9.16. The picture is assumed to be painted on a spherical environment surface E. We map it onto the object O as follows: to each visible point q on O we assign the point p on E hit by the reflection of the ray from the viewpoint to q. Nice effects can be achieved by either moving the object O or changing the viewpoint. The environment surface does not have to be a sphere. In fact, it turns out that a sphere is not a good choice, because trying to paint a picture on it can easily cause distortion. A more common choice is a cube. One could, for example, take six pictures of a room and map them to the six sides of the cube.
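A sketch of the spherical case; the longitude/latitude layout of the picture and the unit-vector inputs are assumptions of the sketch.

```python
from math import acos, atan2, pi

def env_lookup(view, normal, env_image):
    """Reflect the view direction about the surface normal at q and convert
    the reflected direction to (s, t) coordinates on a spherical environment
    picture env_image. view and normal are unit (x, y, z) tuples."""
    d = 2.0 * sum(v * n for v, n in zip(view, normal))
    r = tuple(v - d * n for v, n in zip(view, normal))   # reflected ray
    s = (atan2(r[1], r[0]) + pi) / (2.0 * pi)            # longitude -> [0,1]
    t = acos(max(-1.0, min(1.0, r[2]))) / pi             # latitude  -> [0,1]
    rows, cols = len(env_image), len(env_image[0])
    return env_image[min(int(t * rows), rows - 1)][min(int(s * cols), cols - 1)]
```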

Environment mappings were originally developed in [BliN76], where they were called reflection mappings. [Gree86] suggested using cubes. The whole idea of environment mappings is basically a cheap way to get the kind of reflection effects that one gets with ray tracing, and they have become popular. Large flat surfaces on objects cause problems, however, because the reflection angle changes too slowly.

Bump Mappings

A problem related to giving texture to objects is making them look rough or smooth. Simply painting a “rough” texture on a surface would not work: it would not make the surface look rough, but only like roughness painted on a smooth surface. The reason for this is that the predefined texture image presumes a light source direction that does not match the one in the actual scene. One needs to change the normals (from which shading is computed if one uses the Phong model) if one wants an effect on the shading. This was done by Blinn ([Blin78]), who coined the term “bump mapping.” Again, assume that we have a surface patch X parameterized by a function p(u,v). A normal vector n(u,v) at a point on the surface is obtained by taking the cross product of the partial derivatives of p(u,v) with respect to u and v, that is,

n(u,v) = p_u(u,v) × p_v(u,v),   where p_u = ∂p/∂u and p_v = ∂p/∂v.

If we perturb the surface slightly along its normals, we get a new surface Y with parameterization function P(u,v) of the form

P(u,v) = p(u,v) + b(u,v) n̂(u,v),   where n̂ = n/|n|,

 

 

 


Figure 9.17. Texturing with bump maps.

where b(u,v) is the bump map or perturbation. See Figure 9.17. The vectors

N(u,v) = P_u(u,v) × P_v(u,v)

are normal vectors to Y at P(u,v). But, suppressing references to the parameters u and v,

P_u = p_u + b_u n̂ + b n̂_u

and

P_v = p_v + b_v n̂ + b n̂_v.

If we assume a small perturbation b(u,v), then it is reasonable to neglect the last terms. Therefore, N is approximated by

N ≈ n + b_u (n̂ × p_v) − b_v (n̂ × p_u).

Note that in order to compute the approximate normals for Y we do not need to know the perturbation function b(u,v) itself, but only its partial derivatives. Any function can be used as a bump function. To speed up the computation one typically uses a lookup table and interpolation. Standard approximations to the partials are

b_u(u,v) ≈ [b(u + e, v) − b(u − e, v)] / 2e,

b_v(u,v) ≈ [b(u, v + e) − b(u, v − e)] / 2e,        (9.15)

for a suitably small value e. Thus it suffices to store a table b(i,j) and to compute b(u,v) at all other values via a simple linear interpolation. The values of the partials b_u and b_v are then computed with formulas (9.15).
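Putting the pieces together, here is a sketch of the perturbed-normal computation; the tuple representation of vectors and the callable bump function are conveniences of the sketch.

```python
def cross(a, b):
    """Cross product of two (x, y, z) tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def bumped_normal(pu, pv, bu, bv):
    """Approximate normal N ~ n + bu*(n_hat x pv) - bv*(n_hat x pu) of the
    perturbed surface, where n = pu x pv and n_hat = n/|n|."""
    n = cross(pu, pv)
    ln = sum(c * c for c in n) ** 0.5
    nh = tuple(c / ln for c in n)            # unit normal n/|n|
    t1, t2 = cross(nh, pv), cross(nh, pu)
    return tuple(n[i] + bu * t1[i] - bv * t2[i] for i in range(3))

def bump_partials(b, u, v, e=1e-3):
    """Central-difference estimates of bu and bv as in formulas (9.15);
    here b is any Python callable b(u, v)."""
    return ((b(u + e, v) - b(u - e, v)) / (2 * e),
            (b(u, v + e) - b(u, v - e)) / (2 * e))
```

Note that, exactly as the text observes, only the partials bu and bv enter the computation; the bump values themselves never do.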

To reduce aliasing, Blinn suggested that one sample intensities at a higher resolution and then average the values.
