Illumination and Shading (Introduction to Computer Graphics Using Java 2D and 3D) Part 4

Shadows

An important aspect that has been neglected in shading so far is shadows. “Casting a shadow” is not an active matter, but simply the absence of light from a light source that does not reach the shadowed part of an object’s surface. The illumination equation including shadows becomes

I = I_a · k_a + Σ_{j=1}^{n} s_j · I_{L_j} · ( k_d · (l_j · N) + k_s · (r_j · V)^m )    (8.7)

This is the same illumination equation as (8.6) except for the additional factors

s_j ∈ {0, 1}    (j = 1, ..., n),

one for each light source.

When does the light of a light source reach a surface, and when is it blocked by another object, leading to a shadow? Determining shadows is the same problem as determining visibility, except that the light source takes the place of the viewer. When a surface is visible from a light source, then s_j = 1 and this light source casts no shadow on the surface. When the surface is not visible from the light source, then s_j = 0 and a shadow is cast on the object. Figure 8.14 shows a shadow on a cube caused by a tetrahedron which blocks the light from a light source above. Shadow does not mean that the surface will be black. Ambient light is still reflected. And if there is more than one light source in the scene, a surface might be blocked from one light source, but not from the others.
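The role of the shadow factors s_j can be sketched in a few lines of Java. This is an illustrative sketch, not code from the book: the method and parameter names are made up, and the diffuse and specular terms are assumed to be precomputed per light source.

```java
// Sketch of illumination equation (8.7): each light source contributes
// only when its shadow factor s[j] is 1.
public class ShadowedIllumination {
    /** Intensity at a point: ambient term plus the sum over all lights,
     *  each weighted by its shadow factor s[j] (0 = in shadow, 1 = lit). */
    static double intensity(double ia, double ka,
                            int[] s, double[] il,
                            double[] diffuse, double[] specular) {
        double i = ia * ka;                       // ambient term I_a * k_a
        for (int j = 0; j < s.length; j++) {
            i += s[j] * il[j] * (diffuse[j] + specular[j]);
        }
        return i;
    }

    public static void main(String[] args) {
        // Two lights: the first reaches the surface, the second is blocked.
        double i = intensity(0.2, 0.5,
                new int[]{1, 0}, new double[]{1.0, 1.0},
                new double[]{0.4, 0.4}, new double[]{0.1, 0.1});
        System.out.println(i); // ambient 0.1 plus 0.5 from the first light
    }
}
```

Note that the surface in shadow still receives the ambient contribution, as stated above.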


The connection between shadows and visibility determination is exploited by the two-pass z- or two-pass depth buffer algorithm. In the first pass of this algorithm, the standard z-buffer algorithm is carried out with the following modifications. The viewer is replaced by a light source.


Fig. 8.14 Shadow on an object

For a directional light source, a parallel projection opposite to the direction of the light is applied. For a point light source and a spotlight, a perspective projection is applied with its centre of projection at the position of the light source. In all cases, the projection is reduced to a parallel projection to the x/y-plane by a suitable transformation T_L. In this first pass of the two-pass z-buffer algorithm, only the values for the z-buffer Z_L are entered. The frame buffer and its calculations are not needed. The second pass of the algorithm is identical to the standard z-buffer algorithm for the viewer, with the following modification.

A transformation T_V turning the perspective projection with the viewer as the centre of projection into a parallel projection to the x/y-plane is needed as usual. The viewer’s z-buffer Z_V is also treated as usual in the second pass of the algorithm. But before a projected point is entered into the frame buffer F_V for the viewer, an illumination test is carried out to check whether the surface is illuminated by the considered light source. If the coordinates of a point on the surface to be projected are (x_V, y_V, z_V), the transformation

(x_L, y_L, z_L)^T = T_L · T_V^{-1} · (x_V, y_V, z_V)^T

yields the coordinates of the same point from the viewpoint of the light source. T_V^{-1} is the inverse transformation, i.e., the inverse matrix of T_V. The value z_L is compared to the entry in the z-buffer Z_L for the light source at the position (x_L, y_L). If a value smaller than z_L is stored in the z-buffer Z_L at this position, then there must be an object between the light source and the considered surface, so that the surface does not receive any light from this light source. The surface is in the shadow of this light source, and the corresponding factor s_j in (8.7) must be set to zero. When there is more than one light source in the scene, the first pass of the algorithm is carried out for each light source. In the second pass, it is determined for each light source whether a surface receives light from it, and the factors s_j are chosen correspondingly.
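The illumination test of the second pass can be sketched as follows. This is a hedged sketch, not the book’s implementation: the combined transformation T_L · T_V^{-1} is abstracted into a function argument, and the light’s z-buffer Z_L is a plain 2D array indexed by the projected light-space coordinates.

```java
import java.util.function.UnaryOperator;

// Sketch of the shadow test in the second pass of the two-pass z-buffer
// algorithm: transform the viewer-space point into light coordinates and
// compare its depth with the light's z-buffer entry.
public class ShadowTest {
    /** Returns the shadow factor s_j: 0 if some surface is closer to the
     *  light at (x_L, y_L) than the tested point, 1 otherwise. */
    static int shadowFactor(double[] viewerPoint,
                            UnaryOperator<double[]> toLightSpace,
                            double[][] zBufferLight, double eps) {
        double[] p = toLightSpace.apply(viewerPoint); // T_L * T_V^{-1}
        int xL = (int) Math.round(p[0]);
        int yL = (int) Math.round(p[1]);
        double zL = p[2];
        // an entry smaller than zL means a blocker between light and point
        return zBufferLight[yL][xL] < zL - eps ? 0 : 1;
    }

    public static void main(String[] args) {
        double[][] zL = { {0.3, 1.0}, {1.0, 1.0} }; // light's depth buffer
        UnaryOperator<double[]> id = q -> q;        // trivial transformation
        System.out.println(shadowFactor(new double[]{0, 0, 0.8}, id, zL, 1e-6)); // blocked
        System.out.println(shadowFactor(new double[]{1, 1, 0.8}, id, zL, 1e-6)); // lit
    }
}
```

The small epsilon guards against a surface shadowing itself due to rounding errors in the two projections.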

Transparency

Transparent surfaces reflect part of the light, but objects behind them can also be seen. A typical transparent object is a coloured glass pane. Transparency means that only a fraction of the light from the objects behind the transparent surface can pass through it, but without the distortion that occurs with frosted glass. Objects like milk glass are called translucent. Translucent surfaces will not be considered here. Refraction will also not be taken into account.

In order to explain how transparency is modelled, a surface F_2 is considered that is positioned behind a transparent surface F_1. For interpolated or filtered transparency, a transmission coefficient k_transp ∈ [0, 1] is needed. k_transp specifies the fraction of light that can pass through the transparent surface F_1. The surface is completely transparent, i.e., invisible, for k_transp = 1. For k_transp = 0, the surface is not transparent at all and can be handled in the same way as surfaces have been treated so far. The colour intensity I_P of a point P on the transparent surface F_1 is determined by

I_P = (1 - k_transp) · I_1 + k_transp · I_2    (8.8)

where I_1 is the intensity of the point that would result if the surface F_1 were treated like a normal nontransparent surface. I_2 is the intensity of the corresponding point on the surface F_2 if the surface F_1 were completely invisible or removed from the scene. The values I_1 for red, green and blue result from the colour assigned to the transparent surface. In this way it is also possible to model coloured glass panes, although this model is not correct from the theoretical point of view. A colour filter would be the correct model.
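Formula (8.8) amounts to a linear interpolation that is applied per colour channel. A minimal sketch, with illustrative names not taken from the book:

```java
// Interpolated (filtered) transparency: blend the transparent surface's
// own intensity i1 with the intensity i2 of the object behind it.
public class InterpolatedTransparency {
    static double blend(double kTransp, double i1, double i2) {
        return (1 - kTransp) * i1 + kTransp * i2;
    }

    public static void main(String[] args) {
        System.out.println(blend(0.0, 0.8, 0.2)); // opaque: surface colour
        System.out.println(blend(1.0, 0.8, 0.2)); // invisible: background
        System.out.println(blend(0.5, 0.8, 0.2)); // halfway in between
    }
}
```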

Transparent surfaces complicate the visible surface determination. Especially when the z-buffer algorithm is used, the following problems can occur.

•    Which z-value should be stored in the z-buffer when a transparent surface is projected? If the z-value of an object O behind the transparent surface is kept, an object between O and the transparent surface could later completely overwrite the entry in the frame buffer, although it is located behind the transparent surface. If instead the z-value of the transparent surface is used, then the object O would not be entered into the frame buffer at all, although it should be visible behind the transparent surface.

•    Which value should be entered in the frame buffer? If interpolated transparency is computed according to (8.8), the information about the value I_1 is lost for other objects that might be located directly behind the transparent surface. Even storing the value I_1 alone would not be sufficient. It is possible to apply alpha-blending. Since the coding of RGB-values requires three bytes, and blocks of four bytes are handled more efficiently by the computer, it is common to use the fourth byte for an alpha-value.


Fig. 8.15 50% (left) and 25% (right) screen-door transparency

This alpha-value corresponds to the transmission coefficient k_transp for transparency. But even with this alpha-value it is not clear to which object behind the transparent surface alpha-blending should be applied, i.e., how to apply (8.8), since the choice of the object for alpha-blending depends on the z-value.

For the z-buffer algorithm, opaque objects should be entered first and the transparent surfaces afterwards. When the transparent surfaces are entered, alpha-blending should be applied for the frame buffer. Problems still occur when transparent surfaces hide other transparent surfaces. In this case, the order in which they are entered must be correct, i.e., from back to front. For this purpose, it is common to sort the transparent surfaces with respect to their z-coordinates.
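The rendering order just described can be sketched as follows. This is an illustrative sketch, not the book’s code: each surface is reduced to a name, a representative z-coordinate (larger meaning farther from the viewer here) and a transparency flag.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the rendering order for the z-buffer algorithm with
// transparency: opaque surfaces first, then transparent surfaces
// sorted back to front by their z-coordinate.
public class TransparencySort {
    record Surface(String name, double z, boolean transparent) {}

    static List<Surface> renderOrder(List<Surface> scene) {
        List<Surface> order = new ArrayList<>();
        scene.stream().filter(s -> !s.transparent()).forEach(order::add);
        scene.stream().filter(Surface::transparent)
             .sorted(Comparator.comparingDouble(Surface::z).reversed()) // far first
             .forEach(order::add);
        return order;
    }

    public static void main(String[] args) {
        List<Surface> scene = List.of(
            new Surface("glassNear", 1.0, true),
            new Surface("wall", 5.0, false),
            new Surface("glassFar", 3.0, true));
        renderOrder(scene).forEach(s -> System.out.println(s.name()));
    }
}
```

Sorting by a single representative z-value per surface is only a heuristic; overlapping or intersecting transparent polygons can still be blended in the wrong order.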

Screen-door transparency is an alternative solution based on a principle similar to the halftone techniques from Sect. 4.5. The mixing or interpolation of the colours of a transparent surface and an object behind it as defined in (8.8) is not carried out per pixel but per pixel group. A transmission coefficient of k_transp = 0.25 means that every fourth pixel obtains its colour from the object behind the transparent surface and the other pixels obtain the colour of the transparent surface. Figure 8.15 illustrates this principle for magnified pixels. The darker colour comes from the transparent surface, the lighter colour from an object behind it. For the left-hand side of the figure k_transp = 0.5 was used, for the right-hand side k_transp = 0.25.

Screen-door transparency is well suited for the z-buffer algorithm. The z-values are chosen according to the surface they come from, either the transparent one or a surface behind it. For k_transp = 0.25, 75% of the pixels would have the z-value of the transparent surface and the other 25% the z-value of the object behind it. An object that is projected later in the z-buffer algorithm will be treated correctly. If it is in front of the transparent surface, it will overwrite everything. If it is behind another object to which screen-door transparency has already been applied, it will not be entered at all. If the object is behind the transparent surface and closer than all other objects that were entered there before, the corresponding fraction of the pixels will automatically obtain the colour from this object.

Although screen-door transparency works well together with the z-buffer algorithm, the same problems as for halftone techniques occur. The results are only acceptable when the resolution is high enough. For a transmission coefficient of about 50% the results for screen-door and interpolated transparency are almost indistinguishable. But for transmission coefficients close to one or zero, screen-door transparency tends to show dot patterns instead of a realistic transparency effect.
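A per-pixel decision rule for screen-door transparency can be sketched as follows. The pixel pattern below is one possible choice made up for illustration; a real renderer may use a different (e.g. dithered) pattern, and k_transp is assumed to lie in (0, 1].

```java
// Screen-door transparency sketch: instead of blending colours, a fixed
// pixel pattern decides per pixel whether the transparent surface or the
// object behind it is drawn (colour and z-value alike).
public class ScreenDoor {
    /** true: this pixel takes the colour (and z-value) of the object
     *  behind the transparent surface. Assumes 1/kTransp is an integer. */
    static boolean showBackground(int x, int y, double kTransp) {
        int period = (int) Math.round(1.0 / kTransp); // e.g. 4 for 0.25
        return (y * 2 + x) % period == 0;             // simple fixed pattern
    }

    public static void main(String[] args) {
        int background = 0;
        for (int y = 0; y < 8; y++)
            for (int x = 0; x < 8; x++)
                if (showBackground(x, y, 0.25)) background++;
        // for k_transp = 0.25 exactly a quarter of the pixels show the background
        System.out.println(background + " of 64 pixels show the background");
    }
}
```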

Transparency in Java 3D

Java 3D provides the class TransparencyAttributes to model transparency. The method setTransparencyMode defines the chosen type of transparency, i.e., interpolated or screen-door transparency. The transmission coefficient is specified with the method setTransparency as a float-value between zero and one. The instance of the class TransparencyAttributes must then be assigned to an Appearance app by the method setTransparencyAttributes.

TransparencyAttributes ta = new TransparencyAttributes();
ta.setTransparencyMode(TransparencyAttributes.BLENDED);
ta.setTransparency(0.5f);
app.setTransparencyAttributes(ta);

The second line chooses interpolated transparency by specifying BLENDED. For screen-door transparency, BLENDED has to be replaced by SCREEN_DOOR. The program TransparencyExample.java demonstrates the use of these two types of transparency.

Textures

Textures are images on the surfaces of objects. A simple texture might use a colour gradient or a pattern instead of the same colour everywhere on the surface. Modelling a wallpaper with a pattern on it requires a texture to be assigned to the walls. In this case, multiple copies of the same texture are attached to the surface. A picture hanging on a wall could also be modelled by a texture, which would then be applied only once.

Textures are also used to model fine structures like ingrain wallpaper, wood grain, roughcast or even brick patterns. In contrast to a normal smooth wallpaper, an ingrain wallpaper has a fine three-dimensional structure that can be felt and seen. The same applies to the bark of a tree, a wall of bricks or pleats on clothes. The correct way to model such small three-dimensional structures would be an approximation by extremely small polygons. However, the effort for modelling as well as the computational effort for rendering would be unacceptable.

A texture is an image that is mapped to a surface as sketched in Fig. 8.16. A texture map T is defined that maps the surface or its vertices to the pixel raster of the image for the texture. When a pixel of the screen or projection plane is interpreted as a small square, this square corresponds to a small area on the surface. This small area is mapped by the texture map to the image for the texture. In this way, the corresponding texels (the pixels of the texture image) can be determined to calculate the colour for the pixel. This colour value has to be combined with the information from illumination, taking into account whether the surface with the texture is shiny or not.
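The core of a texture map can be sketched as a lookup from texture coordinates to texel indices. This is an illustrative sketch under simple assumptions: coordinates (s, t) in [0, 1] x [0, 1] are attached to the surface, and repeating the texture (as for a wallpaper) is done by keeping only the fractional part.

```java
// Sketch of a texture lookup: map texture coordinates (s, t) to a texel
// of a width x height image, optionally wrapping for repeated textures.
public class TextureLookup {
    static int[] texel(double s, double t, int width, int height, boolean repeat) {
        if (repeat) {                        // keep only the fractional part
            s = s - Math.floor(s);
            t = t - Math.floor(t);
        }
        int i = Math.min((int) (s * width), width - 1);   // column index
        int j = Math.min((int) (t * height), height - 1); // row index
        return new int[]{i, j};
    }

    public static void main(String[] args) {
        int[] a = texel(0.5, 0.5, 256, 256, false);  // centre of the image
        int[] b = texel(1.5, 2.25, 256, 256, true);  // wrapped wallpaper tiling
        System.out.println(a[0] + "," + a[1] + " " + b[0] + "," + b[1]);
    }
}
```

In practice the texel colour would additionally be filtered (e.g. averaged over neighbouring texels) before being combined with the illumination.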


Fig. 8.16 Using a texture


Fig. 8.17 Modelling a mirror by a reflection mapping

Textures are useful for a variety of problems in computer graphics. A background texture like a clouded sky can be defined. This texture is not assigned to any surface but simply used as a constant background. More complex illumination techniques like the radiosity model introduced in Sect. 8.10 lead to more realistic images but are too slow for interactive real-time graphics. Under certain conditions, textures can be used to calculate diffuse reflection with these techniques in advance and apply the results as textures to the surfaces in the form of so-called light maps so that only specular reflection is needed for the real-time graphics.

Environment or reflection mapping is a technique to model mirrors or reflecting surfaces like the surface of calm water. For this purpose, the viewer is first reflected at the corresponding surface. Then the image is computed which the reflected viewer would see. This image is then used as a texture for the reflecting surface when the image for the original position of the viewer is computed. Figure 8.17 illustrates this idea.

When textures are used to model small three-dimensional patterns like reliefs, viewing them from a short distance might give the impression of a flat image, especially when there is a strong light source. No information about the three-dimensional structure is contained in the image for the texture itself. In order to provide a more realistic view without representing the three-dimensional structure by extremely small polygons, bump mappings [1] were introduced. The surface to which the texture is applied still remains flat. But in addition to the colour information coming from the image of the texture, a bump map is used to modify the normal vectors of the surface.


Fig. 8.18 Bump mapping

A bump map assigns to each texture point a perturbation value B(i, j) specifying how far the point on the surface should be moved along the normal vector for the relief. If the surface is given in parametric form and the point to be modified is P = P(x(s, t), y(s, t), z(s, t)), then the nonnormalised normal vector at P is obtained from the cross product of the partial derivatives with respect to s and t:

N = P_s × P_t,    where P_s = ∂P/∂s and P_t = ∂P/∂t.

If B(T(P)) = B(i, j) is the corresponding bump value, one obtains

P' = P + B(i, j) · N / ‖N‖

as the lifted or perturbed point on the surface with the relief structure. A good approximation for the new normal vector in this point is then given by

Ñ ≈ N + ( B_s · (N × P_t) − B_t · (N × P_s) ) / ‖N‖,

where B_s and B_t denote the partial derivatives of the bump map.

In this way, bump mapping can induce varying normal vectors on a flat plane. Figure 8.18 shows how normal vectors modelling a small dent can be applied to a flat surface.
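The perturbed-normal computation can be sketched directly from the cross products above. This is a hedged sketch of one common form of Blinn’s formula, with illustrative names; the bump map’s partial derivatives B_s, B_t are assumed to be given.

```java
// Sketch of bump mapping: the unperturbed normal is the cross product of
// the partial derivatives Ps and Pt, and the bump map's partial
// derivatives bs, bt tilt it to fake a relief on a flat surface.
public class BumpNormal {
    static double[] cross(double[] a, double[] b) {
        return new double[]{ a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0] };
    }
    static double norm(double[] a) {
        return Math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
    }
    /** Perturbed (unnormalised) normal N + (bs (N x Pt) - bt (N x Ps)) / |N|. */
    static double[] bumpedNormal(double[] ps, double[] pt, double bs, double bt) {
        double[] n = cross(ps, pt);      // unperturbed normal N = Ps x Pt
        double[] nxpt = cross(n, pt);
        double[] nxps = cross(n, ps);
        double len = norm(n);
        double[] r = new double[3];
        for (int k = 0; k < 3; k++)
            r[k] = n[k] + (bs * nxpt[k] - bt * nxps[k]) / len;
        return r;
    }

    public static void main(String[] args) {
        // flat plane z = 0: Ps = (1,0,0), Pt = (0,1,0), N = (0,0,1);
        // a nonzero bump gradient tilts the normal away from the z-axis
        double[] n = bumpedNormal(new double[]{1,0,0}, new double[]{0,1,0}, 0.2, 0.1);
        System.out.println(n[0] + " " + n[1] + " " + n[2]);
    }
}
```

The resulting vector is normalised before it is used in the illumination equation.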
