Illumination and Shading (Basic Computer Graphics) Part 2

Simple Approaches to Shading

The last section described some simple illumination models and how one can use them to compute the illumination at each point of an object. In this section we show how this information is used to implement shading in a modeling program. The details depend on

(1)  the visible surface determination algorithm that is used,

(2)  the type of objects that are being modeled (linear or curved), and

(3)  the amount of work one is willing to do.

Constant Shading. The simplest way to shade is to draw all objects in a constant color. A more sophisticated approach draws each polygon in the scene with a constant shade determined by its normal and one of the illumination models described in the last section.

Constant shading based on facet normals works fine for objects that really are linear polyhedra, but if the polyhedra are approximations to curved objects, then the scene will look faceted rather than smooth. To achieve more realistic shading, we first consider the case of a world of linear polyhedra.

Gouraud Shading. Gouraud’s scan line algorithm ([Gour71]) computes the illumination at each vertex of an object using the desired illumination model and then computes the illumination at all other points by interpolation. As an example, consider Figure 9.6. Assuming that the illumination is known at the vertices A, B, C, and D, one computes it at the point X as follows: Let P and Q be the points where the edges AC and BD intersect the scan line containing X, respectively. Compute the illumination at P and Q by linearly interpolating the illumination values at A, C, and B, D, respectively. The value at X is then the linear interpolation of those two values.
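As a concrete sketch of this interpolation, the fragment below computes the value at a single point X, assuming the illumination has already been computed at the four vertices; the (vx, vy, intensity) vertex representation and the function names are illustrative only, not from Gouraud's paper. In an actual scan line implementation one would not redo this computation at every pixel but would step across the span incrementally, a point we return to below.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + t * (b - a)

def gouraud_at(A, B, C, D, x, y):
    """Illumination at the point X = (x, y) of Figure 9.6.

    Each vertex is a tuple (vx, vy, intensity), with the intensity
    already computed by the chosen illumination model.  Assumes the
    edges AC and BD are not horizontal.
    """
    def cross_scan_line(V, W):
        # Where edge VW crosses the scan line y, and the interpolated
        # illumination there.
        t = (y - V[1]) / (W[1] - V[1])
        return lerp(V[0], W[0], t), lerp(V[2], W[2], t)

    xp, ip = cross_scan_line(A, C)   # point P on edge AC
    xq, iq = cross_scan_line(B, D)   # point Q on edge BD
    s = (x - xp) / (xq - xp)
    return lerp(ip, iq, s)           # value at X between P and Q
```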


Computing the illumination values at the vertices requires vertex normals. These normals are typically computed by averaging the normals of the adjacent faces. One needs to realize, though, that doing this has the effect of smoothing out an object. This is what one wants if our objects are faceted representations of curved objects. On the other hand, sometimes one wants to show edges, as in the case of a cube, and then we need to avoid this smoothing operation. When we shade such a face, the normals at the vertices of that face should be set to the face normal itself.
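A minimal sketch of this averaging, assuming each face normal has unit length and the mesh is given by lists of vertex indices (the data layout is illustrative):

```python
import math

def vertex_normals(num_vertices, faces, face_normals):
    """Average the unit normals of the faces meeting at each vertex.

    faces:        list of tuples of vertex indices
    face_normals: one unit normal (nx, ny, nz) per face
    """
    sums = [[0.0, 0.0, 0.0] for _ in range(num_vertices)]
    for face, n in zip(faces, face_normals):
        for v in face:
            for k in range(3):
                sums[v][k] += n[k]
    result = []
    for s in sums:
        length = math.sqrt(s[0] ** 2 + s[1] ** 2 + s[2] ** 2)
        result.append(tuple(c / length for c in s))  # renormalize the sum
    return result
```

For a face whose edges should stay sharp, as with the cube, one would bypass this averaging and assign the face normal directly, duplicating the vertex for each face that meets there.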

Figure 9.6. Gouraud shading example.

Figure 9.7. Gouraud shading anomaly.

Figure 9.8. Phong shading anomaly.

Some anomalies can occur with Gouraud shading. Figure 9.7 shows a faceted surface which might have been an approximation to a wavy surface, but whose Gouraud shading would make it look flat because of the averaging of the normals. The simplest way to get around that is to subdivide the given facets more. In the case of Figure 9.7, we could divide each rectangular face into four subfaces.

There are other problems. If one uses Phong’s reflectance model, then small variations in the normal can produce large variations in the specular reflection. This causes Gouraud’s method to produce peculiar highlights.

Phong Shading. To remedy some of the problems with Gouraud shading, Phong ([BuiT75]) interpolated the surface normals (rather than the intensities) at vertices across faces and then computed the illumination at each point directly from the normals themselves. This clearly takes more work, however. In particular, generating unit-length vectors means taking square roots, which is costly in time. To lessen this cost, one can use a lookup table and linear interpolation ([Duff79]). Alternatively, one can use a Taylor expansion to get a good approximation ([BisW86]). The latter approach produces pictures indistinguishable from real Phong shading at a cost that is not much greater than that of Gouraud shading.
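The heart of the method is sketched below under the same scan line setup as Figure 9.6: the normal, not the intensity, is interpolated across the span, and the illumination model is evaluated at every pixel. The function names are illustrative.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def phong_pixel(n_p, n_q, s, illuminate):
    """Illumination at the pixel a fraction s of the way from P to Q.

    n_p, n_q:   normals already interpolated down the two edges
    illuminate: the chosen illumination model, evaluated per pixel
    """
    n = tuple(a + s * (b - a) for a, b in zip(n_p, n_q))
    # The per-pixel renormalization is where the square roots, and hence
    # most of the extra cost over Gouraud shading, come from.
    return illuminate(normalize(n))
```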

Phong shading produces better results than Gouraud shading, but it also has problems. Consider the concave polygon in Figure 9.8. The difference between the interpolated normal at the point P and the normal at the vertex V could cause a big change in the illumination value as one moves from P to V. This again shows the importance of sampling properly.

Gouraud and Phong shading, as described, assume a scan line visible surface algorithm. In that context, one can speed up the process by computing the shading equations incrementally.

Global Illumination Models

The problem with the simple models discussed in Section 9.2 is that they are only local illumination models in the sense that they totally ignore the contribution of other objects in the world. We have assumed that the light comes to a point directly from a single source (with no shadows) and have dealt only with reflections from a single surface when in reality light often reflects from several surfaces before reaching the eye. Transparency properties have also been ignored. This section looks at some global aspects of illumination and how to deal with them.

Shadows

In trying to produce realistic pictures, one will quickly discover that they do not look that great if shadows are omitted. A good discussion of shadow algorithms can be found in [WoPF90] and [WatW92]. The algorithms are classified into the following approaches:

(1)    as part of early scan line visible surface algorithms,

(2)    as part of the Weiler-Atherton visible surface algorithm ([AtWG78]),

(3)    using “shadow volumes” ([Crow77b]),

(4)    using “shadow z-buffers” ([Will78]),

(5)    as part of ray tracing, and

(6)    as part of radiosity methods.

We shall not go into the first two of these approaches. Approach (2) amounted to running the basic Weiler-Atherton algorithm twice and involved lots of clipping of polygons.

In the shadow volumes approach one extends each edge of an object that is part of its outline with respect to a light source. See Figure 9.9. The volume between the tetrahedron ABCD and its projection A′B′C′D′ from the light source onto some fixed distant plane is called the shadow volume generated by the object ABCD. The faces obtained in this way bound a volume in which light has been obscured. For several light sources we get several such volumes, each tagged by the light source that generated it. The real polygons, along with these shadow polygons, are passed to the visible surface determination algorithm. Potentially many shadow polygons, that is, faces of shadow volumes, will lie between the viewpoint and a surface. One uses a parity algorithm to determine if a point on a surface is in a shadow.

Figure 9.9. Shadow volumes.

Figure 9.10. Using parity with shadow polygons.

If an even number of shadow polygons with the same tag are encountered, then the point is not shadowed by that light source; otherwise it is. Figure 9.10 shows how the parity works for spans associated with a polygon P. In Figure 9.10(a) object P is not in the shadow of S because two shadow polygons, determined by A and B, lie between it and the eye. In Figure 9.10(b) object P is in the shadow of S because only a single shadow polygon, the one containing A, lies between it and the eye. Although we make only a single pass through the visibility algorithm when using shadow volumes, there are more objects and they are more complex.
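The following sketch expresses the parity rule; it assumes we have already collected the tags of the shadow polygons crossed by the ray from the eye to the visible point.

```python
from collections import Counter

def unshadowed_lights(shadow_polygons_crossed, light_tags):
    """Parity rule of Figure 9.10 for one visible surface point.

    shadow_polygons_crossed: tags of the shadow polygons lying between
                             the eye and the point
    light_tags:              tags of all light sources in the scene
    A light still reaches the point exactly when an even number of its
    shadow polygons intervene.
    """
    counts = Counter(shadow_polygons_crossed)
    return {tag for tag in light_tags if counts[tag] % 2 == 0}
```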

The shadow z-buffer approach to shadows first determines the visibility of objects with respect to the light source and then uses this information in a second pass that determines the visibility with respect to the viewer. This amounts to running the visible surface determination algorithm twice. The approach has the advantage of being easy to integrate into a standard z-buffer algorithm. One uses an extra buffer, called the shadow z-buffer, for each light source. (One can get away with only one buffer for multiple light sources by running the basic algorithm once for each light source.) In the case of a single light source, here are the two steps in this algorithm:

Step 1. One runs a z-buffer type algorithm from the point of view of the light source to "render" the scene into the shadow z-buffer. In this pass only the depths of points are recorded and no light computations are made.

Step 2. One runs the standard z-buffer algorithm from the viewer, but with the following change: Before storing any light information into the frame buffer for a visible point, the coordinates of that point are first transformed into the light source coordinate system and its distance from the light is compared with the value stored for that position in the shadow z-buffer. If its value is larger than the stored value, then the point is in a shadow and the frame buffer value is not updated.
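A sketch of the test in Step 2, assuming the transformation into the light source coordinate system is available as a function; the small bias term is a common practical addition to avoid surfaces shadowing themselves, not part of the algorithm as stated above.

```python
def in_shadow(p, to_light_coords, shadow_zbuffer, bias=1e-3):
    """Second-pass shadow test for one visible point p.

    to_light_coords: transforms p into the light source coordinate
                     system, returning buffer indices (i, j) and the
                     distance z from the light
    shadow_zbuffer:  depths recorded in the first pass (from the light)
    bias:            small tolerance (an assumption, see the text)
    """
    i, j, z = to_light_coords(p)
    return z > shadow_zbuffer[i][j] + bias
```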

Ray-tracing and radiosity methods will be described shortly and in the next topic. Computing shadows in the context of ray tracing is very easy. Basically, once we have determined a visible point, all we have to do is send out a ray toward the light source and see if it hits an object on the way. Radiosity methods handle shadows by and large automatically, except that some extra processing is required along the outlines of shadows.
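A sketch of this shadow test; the intersection routine is assumed, not part of any particular library.

```python
def lit_directly(point, light_pos, hits_anything):
    """Shadow test in a ray tracer: cast a ray from the visible point
    toward the light and see whether any object blocks it."""
    direction = tuple(l - p for l, p in zip(light_pos, point))
    # hits_anything(origin, direction) is assumed to report an
    # intersection for some ray parameter t in (0, 1), i.e., strictly
    # between the point and the light.
    return not hits_anything(point, direction)
```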

Finally, we need to point out that one needs to distinguish between hard and soft shadow algorithms. We have discussed the former, which simply determine in a “true” or “false” manner whether a point is in a shadow. One associates either 0 or the full, unshadowed light intensity value with the point. Because of this, all that is involved is a visibility determination of points in the umbra region of a shadow (the points that receive no light). A soft shadow algorithm also includes the penumbra region in the computation (the points that receive partial light). We refer the reader to the above-listed references to see how algorithms can be modified to accomplish this. The references also touch on the much more difficult problem of shadows when transparent objects are present.

Transparency

There are two ways to model transparency. The simple way is to ignore refraction and pretend that light will pass straight through an object without any bending. The other is to take refraction into account.

Using the simple model with no refraction, transparency can be implemented by letting surfaces only partially obscure the surfaces behind them. If a painter’s algorithm is used for visible surface determination, then one can overwrite (in back-to-front order) each face on the current background using a blending function of the type

$$I = (1 - k_t)\,I_f + k_t\,I_b \qquad (9.9)$$

where I_f and I_b are the illuminations associated with the face and the current background, respectively, and k_t is a transparency factor (k_t = 0 means the face is opaque, k_t = 1 that it is completely transparent). As usual, there is a separate equation (9.9) for each wavelength. One problem with this approach is that one loses any specular reflection. To avoid this, one can restrict the interpolation to the ambient and diffuse components of the light and then add the specular component from the front object. Adding transparency in this way greatly increases the cost of the rendering algorithm because every surface is painted whether visible or not.
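A sketch of this back-to-front compositing for a single pixel and a single wavelength, using one transparency factor k_t for all faces for simplicity (in general each face would carry its own), with each face's illumination split into ambient-plus-diffuse and specular parts as suggested above:

```python
def blend(i_face, i_background, k_t):
    """Equation (9.9) for one wavelength: k_t = 0 is opaque,
    k_t = 1 is completely transparent."""
    return (1 - k_t) * i_face + k_t * i_background

def composite_pixel(layers, k_t):
    """Back-to-front compositing of the faces covering one pixel.

    layers: (ambient_plus_diffuse, specular) pairs, back to front
    """
    i = layers[0][0] + layers[0][1]          # rearmost face as painted
    for ambient_diffuse, specular in layers[1:]:
        # Blend only the ambient and diffuse parts, then add the front
        # face's specular component so highlights are not lost.
        i = blend(ambient_diffuse, i, k_t) + specular
    return i
```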

Using refraction in the transparency model increases the cost even further because light no longer travels in a straight line and one has to compute how it is bent. Consider a light ray passing from one medium to another at a point p on their common boundary. See Figure 9.11. Let L and T be the unit vectors associated with the light paths in the first and second medium, as shown in the figure. Let θ1 (the angle of incidence) be the angle between L and the unit normal vector N to the boundary surface at p, and θ2 (the angle of refraction) the angle between T and N. Snell’s law states that

$$n_1 \sin \theta_1 = n_2 \sin \theta_2 \qquad (9.10)$$

where n_1 and n_2 are the indices of refraction of the first and second medium, respectively.

Figure 9.11. The basic refraction model.

A medium’s index of refraction is the ratio of the speed of light in a vacuum to the speed of light in the medium. The index of refraction varies with wavelength. For example, it is 1.0 if the medium is a vacuum and close to 1.0 in the case of air. For other materials it is larger than 1.0. Water has an index of refraction of 1.33; crown glass, 1.52; and dense flint glass, 1.66.

To compute the refracted ray T, let U be the unit vector obtained by normalizing the orthogonal projection of -L onto the plane orthogonal to N. It is easy to see that

$$U = \frac{(\cos \theta_1)\,N - L}{\sin \theta_1}.$$

Since

$$T = (\sin \theta_2)\,U - (\cos \theta_2)\,N,$$

substituting the formula for U into this expression and rearranging terms gives

$$T = (n \cos \theta_1 - \cos \theta_2)\,N - n\,L,$$

where

$$n = \frac{n_1}{n_2} = \frac{\sin \theta_2}{\sin \theta_1}.$$

But

$$\cos \theta_2 = \sqrt{1 - \sin^2 \theta_2} = \sqrt{1 - n^2 \sin^2 \theta_1} = \sqrt{1 - n^2 (1 - \cos^2 \theta_1)},$$

so that

$$T = \Big(n \cos \theta_1 - \sqrt{1 - n^2 (1 - \cos^2 \theta_1)}\Big)\,N - n\,L, \quad \text{where } \cos \theta_1 = N \cdot L. \qquad (9.11)$$

Note that formula (9.11) has a square root in it, and the quantity under it can be negative in certain circumstances. For example, if the index of refraction of the second medium is lower than that of the first medium, then θ2 is larger than θ1. It is therefore possible for θ2 to reach 90° while θ1 is still smaller. Past that point, the light is entirely reflected and no light enters the second medium. This phenomenon is referred to as total internal reflection, and the smallest angle θ1 at which it occurs is called the critical angle. Since sin θ2 has value 1 there, Snell’s law shows that the critical angle is sin^{-1}(n2/n1). Usually one is not interested in this angle itself and only wants to know whether total reflection has occurred, so one simply checks if the quantity under the square root sign in the formula for T is negative.
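Putting formula (9.11) and this check together gives the following sketch (vectors are tuples; the argument conventions follow Figure 9.11):

```python
import math

def refract(L, N, n1, n2):
    """The refracted direction T of formula (9.11), or None when total
    internal reflection occurs.

    L: unit vector from the surface point back along the incoming light
    N: unit surface normal, on the same side of the surface as L
    """
    n = n1 / n2
    cos1 = sum(li * ni for li, ni in zip(L, N))   # cos(theta1) = N . L
    radicand = 1 - n * n * (1 - cos1 * cos1)      # quantity under the root
    if radicand < 0:
        return None                               # total internal reflection
    cos2 = math.sqrt(radicand)
    return tuple((n * cos1 - cos2) * ni - n * li for ni, li in zip(N, L))
```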

Figure 9.12. Basic ray tracing.

Ray Tracing

Dealing with shadows and transparency handles the nonlocal properties of illumination in only a very narrow way. There is a lot more interaction between objects. For example, two glossy objects near one another reflect off each other. The models above do not deal with this. Whitted ([Whit80]) is usually credited with implementing the first comprehensive global illumination model. He suggested a recursive ray-tracing algorithm, whose rays can be represented as a tree. See Figure 9.12. One follows each component ray and finds its intersections with the surfaces in the scene. Whitted used a formula of the type

$$I = r_a I_a + r_d I_d + r_s I_s + r_t I_t,$$

where I_s and I_t are the intensities computed recursively for the reflected and refracted rays, and the new r_t I_t term models the transparency effects due to refraction. This worked quite well. The next topic will study ray tracing in detail and so we say little more about it here. Ray tracing is very expensive computationally. Whitted used bounding volumes (spheres) to quickly eliminate objects that a given ray cannot intersect. He also incorporated antialiasing.
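A skeleton of the recursion, with a depth cutoff to terminate the tree; the scene and hit-record interfaces are hypothetical stand-ins, not Whitted's actual code.

```python
def trace(ray, scene, depth=0, max_depth=5):
    """Skeleton of Whitted-style recursive ray tracing (Figure 9.12)."""
    if depth > max_depth:
        return scene.background
    hit = scene.closest_intersection(ray)   # bounding volumes prune this test
    if hit is None:
        return scene.background
    color = hit.local_illumination()        # ambient, diffuse, shadow rays
    if hit.r_s > 0:                         # follow the reflected ray
        color += hit.r_s * trace(hit.reflected_ray(), scene, depth + 1, max_depth)
    if hit.r_t > 0:                         # follow the refracted ray
        color += hit.r_t * trace(hit.refracted_ray(), scene, depth + 1, max_depth)
    return color
```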

A lot of work has been done on ray tracing. Its best results can be seen in pictures where there are lots of reflections between objects. The pictures are not totally realistic, however, because these reflections come out too sharp; in real life they are fuzzier. Ray tracing captures the specular interaction between objects very well but not the diffuse interactions. Radiosity methods deal with the latter.

Radiosity Methods

First implemented in 1984 at Cornell University ([GoTG84]), radiosity methods are view-independent solutions that are based on the conservation of light in a closed world. The term radiosity, derived from the literature on heat transfer, refers to the rate at which energy leaves a surface and is the sum of the rates at which energy is emitted, reflected, or transmitted from that surface to others. We shall give more details in the next topic. Suffice it to say that the method is more complex than ray tracing but it produces more realistic pictures even though it does not handle specular light correctly. It is possible to combine ray tracing and the radiosity approach.
