Illumination and Shading (Introduction to Computer Graphics Using Java 2D and 3D) Part 5

Textures in Java 3D

Java 3D provides a variety of methods to apply textures to surfaces. It is possible to specify in detail how the texture should be attached to the vertices of a surface that is modelled by polygons. The details of these methods will not be included in this introductory topic. It should be sufficient here to explain how a texture can be applied to a surface without worrying about how to position it exactly.

First of all, an image is needed for the texture. The image can be loaded from a file with a TextureLoader and converted into an instance of the class ImageComponent2D with the method getScaledImage, which scales it to a specified width w and height h. The values of w and h must be powers of two, i.e., they must be chosen from the set {1, 2, 4, 8, 16, 32, 64, …}.


Then an instance of the class Texture2D is generated, which is then assigned to an instance of the class Appearance.



The texture is then assigned to the Appearance textureApp.

The Appearance textureApp can then be assigned to an elementary geometric object or a Shape3D as usual. Depending on the size of the surface, the texture may be repeated to cover the surface completely. The program TextureExample.java loads a texture from an image file and applies it to a sphere.
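Putting these steps together, the texture handling can be sketched as follows. This is only a minimal sketch, not the code of TextureExample.java: the file name texture.jpg, the size 256×256 and the helper class are assumptions, and Java 3D must be installed.

```java
import java.awt.Component;
import javax.media.j3d.Appearance;
import javax.media.j3d.ImageComponent2D;
import javax.media.j3d.Texture;
import javax.media.j3d.Texture2D;
import com.sun.j3d.utils.image.TextureLoader;

public class TextureSketch {
    // observer: an AWT component that the TextureLoader uses as image observer
    public static Appearance textureAppearance(Component observer) {
        // load the image and scale it to a power-of-two size
        TextureLoader loader = new TextureLoader("texture.jpg", observer);
        ImageComponent2D image = loader.getScaledImage(256, 256);
        // generate the texture from the scaled image
        Texture2D texture = new Texture2D(Texture.BASE_LEVEL, Texture.RGB, 256, 256);
        texture.setImage(0, image);
        texture.setEnable(true);
        // assign the texture to an Appearance
        Appearance textureApp = new Appearance();
        textureApp.setTexture(texture);
        return textureApp;
    }
}
```

The returned Appearance can then be passed to an elementary geometric object as usual.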

In Java 3D, textures can also easily be used as background images. The colour of the background has already been changed in some of the example programs for better illustration purposes, for example in the program StaticSceneExample.java. Changing the colour of the background requires an instance of the class Background. The desired colour is passed to the Background as an argument of the constructor. When a Background has been created in this way, it also needs a bounding region where it is valid. The bounding region is specified with the method setApplicationBounds, as usual in the form of a BoundingSphere bounds or a bounding box. Then the background can be added to the scene with the method addChild.


If an image from a file image.jpg should be used as a background instead of a fixed colour, the image must be loaded first using a TextureLoader. Then the constructor of Background is called with the image instead of the colour. The image is obtained from the TextureLoader with the method getImage.

The definition of a bounding region and the assignment of the background to the scene are required in the same way as for a fixed background colour. In the program BackgroundExample.java, an image is loaded from a file and then used as the background of a scene.
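Both variants can be sketched as follows. This is a minimal sketch, not the code of BackgroundExample.java: the colour value, the file name image.jpg and the helper class are assumptions, and Java 3D must be installed.

```java
import java.awt.Component;
import javax.media.j3d.Background;
import javax.media.j3d.BoundingSphere;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.ImageComponent2D;
import javax.vecmath.Color3f;
import com.sun.j3d.utils.image.TextureLoader;

public class BackgroundSketch {
    // theScene: the BranchGroup of the scene; bounds: the bounding region
    public static void addColourBackground(BranchGroup theScene, BoundingSphere bounds) {
        Background bg = new Background(new Color3f(0.7f, 0.8f, 1.0f)); // a light blue
        bg.setApplicationBounds(bounds);
        theScene.addChild(bg);
    }

    public static void addImageBackground(BranchGroup theScene, BoundingSphere bounds,
                                          Component observer) {
        // load the image and pass it to the Background constructor
        TextureLoader loader = new TextureLoader("image.jpg", observer);
        ImageComponent2D image = loader.getImage();
        Background bg = new Background(image);
        bg.setApplicationBounds(bounds);
        theScene.addChild(bg);
    }
}
```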

The Radiosity Model

It was already mentioned for the illumination model introduced in Sect. 8.3 that an object emitting light is not considered as a light source in the scene and does not contribute to the illumination of other objects, unless a light source is added to the scene where the object is located. But, in principle, all objects in the scene emit light, namely the light they reflect. Ambient light is introduced to model this effect in a very simplified way. Unrealistic sharp edges and contrasts are the consequence of this simplification. For example, if a dark and a bright wall meet in a corner of a room, the bright wall will reflect some light onto the darker wall. This effect is especially visible where the bright wall meets the dark wall, i.e., at the edge between the two walls. The illumination model from Sect. 8.1 and the reflection and shading models in Sect. 8.3 ignore this interaction of light between the objects. This results in a representation of the two walls as can be seen in Fig. 8.19 on the left-hand side: a very sharp edge is visible between the two walls. In the right part of the figure, the light reflected from the bright wall onto the dark wall was taken into account. Therefore, the dark wall becomes slightly less dark close to the corner and the edge is less sharp.

Environment mappings from Sect. 8.8 are a simple approach to take this effect into account. Here, environment mappings are not used as textures but for modelling the light that is cast by other objects onto a surface. For this purpose, shading is first computed as described in Sect. 8.3, neglecting the interaction of light between objects. Then an environment map is determined for each object, considering the other objects as light sources. The shading resulting from the environment maps is added to the intensities that were computed before for the object's surface. In the ideal case, this process would be repeated until the intensities more or less stop changing, but this is not acceptable from the computational point of view.


Fig. 8.19 Illumination among objects

The radiosity model [4, 6] avoids these recursive computations. The radiosity Bi is the rate of energy emitted by a surface Oi in the form of light. This rate of emitted energy is a superposition of the following components when only diffuse reflection is considered.

• Ei is the rate at which light is emitted from surface Oi as an active light source. Ei will be zero for all objects in the scene except for the light sources.

• The light coming from light sources and other objects that is reflected by surface Oi. If the surface Oi is the part of the dark wall in Fig. 8.19 close to the corner and Oj is the corresponding part of the bright wall, the light reflected by Oi that comes from Oj is computed as follows:

$$\varrho_i \cdot F_{ji} \cdot B_j$$

ϱi is the reflection coefficient of the surface Oi, and Bj is the rate of energy coming from Oj, which still has to be determined. Fji is a dimensionless form or configuration factor specifying how much of the energy coming from Oj reaches Oi. In Fji, the shape, the size and the relative orientation of the surfaces Oi and Oj are taken into account. For example, when the surfaces are perpendicular to each other, less light is exchanged between them than when they face each other directly. The calculation of the form factors will be explained later on.

• For transparent surfaces, the light coming from behind the surface must also be taken into account. Since this will make the matter more complicated, transparent surfaces will not be considered here for the radiosity model.

The total rate of energy coming from the surface Oi is the sum of these individual rates. For n surfaces including the light sources, this leads to the equations

$$B_i = E_i + \varrho_i \sum_{j=1}^{n} F_{ji} B_j, \qquad i = 1, \ldots, n. \qquad (8.9)$$

Taking these equations for the surfaces together leads to a system of linear equations with unknown variables Bi.

$$\begin{pmatrix}
1 - \varrho_1 F_{11} & -\varrho_1 F_{21} & \cdots & -\varrho_1 F_{n1} \\
-\varrho_2 F_{12} & 1 - \varrho_2 F_{22} & \cdots & -\varrho_2 F_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
-\varrho_n F_{1n} & -\varrho_n F_{2n} & \cdots & 1 - \varrho_n F_{nn}
\end{pmatrix}
\begin{pmatrix} B_1 \\ B_2 \\ \vdots \\ B_n \end{pmatrix}
=
\begin{pmatrix} E_1 \\ E_2 \\ \vdots \\ E_n \end{pmatrix} \qquad (8.10)$$

 


Fig. 8.20 Determination of the form factors

This system of linear equations must be solved for each of the primary colours red, green and blue. The number of equations is equal to the number of surfaces or surface patches (usually triangles) plus the number of light sources, where the latter number will be very small compared to the number of triangles. The system can have hundreds or thousands of equations and variables. In most cases it will be a sparse system: most pairs of surfaces do not exchange any light, so most of the form factors, and hence most of the matrix entries, are zero.

For the computation of the form factor from the surface Ai to the surface Aj, both surfaces are partitioned into small area patches dAi and dAj. The influence between each pair of patches is computed and the contributions are added. Since the patches should be arbitrarily small, the sum becomes an integral. If the patch dAj is visible from dAi, then the differential form factor, with the notation from Fig. 8.20, is

$$dF_{d_i,d_j} = \frac{\cos\theta_i \cos\theta_j}{\pi r^2}\, dA_j$$

The differential form factor dFdi,dj decreases quadratically with increasing distance, in accordance with attenuation. The angle at which the patches face each other is also important. If the patches face each other directly, the form factor has the largest value. In this case, the normal vectors of the patches are parallel and point in opposite directions. The form factor decreases with increasing angle and becomes zero at 90°. For angles larger than 90°, when the patches face in opposite directions, the cosine would lead to negative values. For this reason, the factor

$$H_{ij} = \begin{cases} 1 & \text{if } dA_j \text{ is visible from } dA_i \\ 0 & \text{otherwise} \end{cases}$$

is introduced so that the differential form factor becomes

$$dF_{d_i,d_j} = H_{ij} \cdot \frac{\cos\theta_i \cos\theta_j}{\pi r^2}\, dA_j$$

 


Fig. 8.21 Determination of the form factors according to Nusselt

Integrating over the surface Aj yields the form factor from the patch dAi to the surface Aj.

$$F_{d_i,A_j} = \int_{A_j} H_{ij} \cdot \frac{\cos\theta_i \cos\theta_j}{\pi r^2}\, dA_j$$

Another integration finally yields the form factor from the surface Ai to the surface Aj.

$$F_{A_i,A_j} = \frac{1}{A_i} \int_{A_i} \int_{A_j} H_{ij} \cdot \frac{\cos\theta_i \cos\theta_j}{\pi r^2}\, dA_j\, dA_i$$

For small surfaces, i.e., for small polygons, an approximate value for the form factor can be calculated in the following way. A hemisphere is constructed over the surface Ai with the centre of gravity of Ai as the midpoint of the hemisphere. The surface Aj is projected onto the hemisphere, and this projection is then projected down onto the circle that the hemisphere defines as its base. The proportion of the circle covered by this projection determines the form factor. This principle is illustrated in Fig. 8.21: the quotient of the dark grey area and the area of the circle is the form factor. A simpler, but less accurate approximation for the form factor was proposed by Cohen and Greenberg [3]. In this approach, the hemisphere is replaced by a half-cube, the so-called hemicube.
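The differential form factor above can also be evaluated numerically for two small, nearly point-like patches. A minimal sketch, where the patch positions, normals and the area are invented for illustration:

```java
// Differential form factor between two small patches, following
// cos θ_i cos θ_j / (π r²) with the visibility/orientation factor H_ij
// approximated by clamping negative cosines to zero.
public class FormFactor {
    // pi, pj: patch centres; ni, nj: unit normal vectors; aj: area of patch j
    public static double dFormFactor(double[] pi, double[] ni,
                                     double[] pj, double[] nj, double aj) {
        double[] d = {pj[0] - pi[0], pj[1] - pi[1], pj[2] - pi[2]};
        double r2 = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
        double r = Math.sqrt(r2);
        // angles between the connecting line and the patch normals
        double cosI = (ni[0]*d[0] + ni[1]*d[1] + ni[2]*d[2]) / r;
        double cosJ = -(nj[0]*d[0] + nj[1]*d[1] + nj[2]*d[2]) / r;
        if (cosI <= 0 || cosJ <= 0) return 0; // H_ij: patches face away from each other
        return cosI * cosJ / (Math.PI * r2) * aj;
    }

    public static void main(String[] args) {
        // two patches at unit distance facing each other directly:
        // both cosines are 1, so the result is aj / π
        double f = dFormFactor(new double[]{0, 0, 0}, new double[]{0, 0, 1},
                               new double[]{0, 0, 1}, new double[]{0, 0, -1}, 0.01);
        System.out.println(f);
    }
}
```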

There are also algorithms that find approximate solutions of the system of linear equations (8.10) to estimate the radiosity values Bi. The progressive refinement approach [2] determines the values Bi in (8.9) by stepwise improved approximate solutions of the system of linear equations (8.10). In the first step, only the light sources are taken into account and Bi = Ei is defined for all objects i, so that initially all objects except the light sources remain completely dark. In addition to estimates for the values Bi, the algorithm also maintains values ΔBi, which are updated in each step. ΔBi specifies the change of the radiosity of object i since it was last considered as a light source illuminating the other objects. In the beginning, ΔBi = Ei is defined. Then the light source or object Oi0 with the greatest change, i.e., with the greatest value ΔBi0, is chosen. In the first step, this is the light source with the highest intensity. Then all Bi-values are updated by

$$B_i := B_i + \varrho_i \cdot F_{i_0 i} \cdot \Delta B_{i_0}. \qquad (8.11)$$

The changes ΔBi are updated by

$$\Delta B_i := \Delta B_i + \varrho_i \cdot F_{i_0 i} \cdot \Delta B_{i_0}. \qquad (8.12)$$

The light emitted by the object Oi0 so far is distributed over the other objects in this way, and ΔBi0 is afterwards reset to zero. Then the next object with the largest value ΔBi0 is chosen and the update schemes (8.11) and (8.12) are applied again. This procedure is repeated until convergence is reached, i.e., until all ΔBi are almost zero. The procedure can also be stopped earlier, yielding a reasonable approximation, which is useful when the computation time is strictly limited as in the case of interactive graphics.
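The progressive refinement scheme can be sketched in a few lines of plain Java. This is an illustrative sketch, not the algorithm of [2] verbatim: the three-patch scene (one light source, two reflecting patches), its reflection coefficients and its form factors are invented for illustration.

```java
// Progressive refinement radiosity: repeatedly "shoot" the largest
// unshot radiosity ΔB_i0 to all other patches until it is negligible.
public class ProgressiveRadiosity {
    // E: emission rates; rho: reflection coefficients;
    // F[j][i]: form factor, the share of O_j's energy reaching O_i
    public static double[] solve(double[] E, double[] rho, double[][] F, double eps) {
        int n = E.length;
        double[] B = E.clone();   // B_i = E_i initially
        double[] dB = E.clone();  // ΔB_i = E_i initially (unshot radiosity)
        while (true) {
            int i0 = 0;           // choose the patch with the greatest change ΔB_i0
            for (int i = 1; i < n; i++) if (dB[i] > dB[i0]) i0 = i;
            if (dB[i0] < eps) break;
            for (int i = 0; i < n; i++) {
                if (i == i0) continue;
                double gain = rho[i] * F[i0][i] * dB[i0];
                B[i] += gain;     // update scheme (8.11)
                dB[i] += gain;    // update scheme (8.12)
            }
            dB[i0] = 0;           // O_i0's light has now been distributed
        }
        return B;
    }

    public static void main(String[] args) {
        double[] E = {1.0, 0.0, 0.0};    // patch 0 is the only light source
        double[] rho = {0.0, 0.5, 0.3};  // reflection coefficients
        double[][] F = {
            {0.0, 0.4, 0.4},
            {0.4, 0.0, 0.2},
            {0.4, 0.2, 0.0}
        };
        double[] B = solve(E, rho, F, 1e-9);
        System.out.printf("B = (%.4f, %.4f, %.4f)%n", B[0], B[1], B[2]);
    }
}
```

The result agrees with the exact solution of the linear system (8.10) for this small scene, since each shooting step only redistributes the remaining unshot energy.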

Radiosity models produce more realistic images than the simplified illumination models that neglect the reflection of light between objects. But the required computations are too complex for real-time interactive graphics. A radiosity model can nevertheless be applied to static scenes and to animations where the computations can be carried out in advance, as in the case of an animated movie. Fast and ever-improving graphics cards might allow the application of approximation techniques like progressive refinement even for interactive real-time graphics in the near future. Radiosity models can also be used to calculate diffuse reflection in advance when the light sources are fixed and there are not too many moving objects in a scene. The results from the radiosity model are stored as light maps and applied as textures to the objects in the real-time graphics, so that the real-time computations only have to take care of specular reflection.

Ray Tracing

The ray casting technique presented in Sect. 7.3.3 in the context of visibility considerations is a simple ray tracing model whose principle can also be used for shading. Rays are cast from the viewer through the pixels on the projection plane, or, in the case of parallel projection, parallel to the projection direction. Each ray is traced until it hits the first object. There the usual calculations for shading are carried out. In Fig. 8.22, the ray hits the pyramid first. At this point, the light from the light sources is taken into account for diffuse reflection. The ray is then traced further as in the case of specular reflection, only in the opposite direction: not starting from a light source but from the viewer. When the ray hits another object, this backwards specular reflection is traced again, until a maximum recursion depth is reached, no object is hit, or the ray meets a light source. At the end of this procedure, the shading of the first object is corrected by the possible contribution of light from specular reflection at other objects along the reflected ray.


Fig. 8.22 Recursive ray tracing

This technique is called ray tracing or recursive ray tracing. The computational demands are quite high for this technique.
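The recursive structure can be sketched with a deliberately tiny scene. This is a minimal sketch, not a full ray tracer: the scene is reduced to two parallel mirrors, the local diffuse shading is replaced by a constant, and the reflectivity value is invented. A real ray tracer would intersect each ray with the actual scene geometry.

```java
// Skeleton of recursive ray tracing: at each hit, a local shading term
// is combined with the attenuated result of tracing the reflected ray,
// up to a maximum recursion depth.
public class RecursiveRayTracer {
    static final int MAX_DEPTH = 5;
    static final double REFLECTIVITY = 0.8;

    // reflect direction d at a surface with unit normal n: d - 2(d·n)n
    public static double[] reflect(double[] d, double[] n) {
        double dot = d[0]*n[0] + d[1]*n[1] + d[2]*n[2];
        return new double[]{d[0] - 2*dot*n[0], d[1] - 2*dot*n[1], d[2] - 2*dot*n[2]};
    }

    // toy scene: two horizontal mirrors; a ray travelling upwards hits the
    // top mirror (normal pointing down), a ray travelling downwards hits
    // the bottom mirror (normal pointing up)
    public static double trace(double[] dir, int depth) {
        double local = 0.1;                  // stand-in for the diffuse shading term
        if (depth >= MAX_DEPTH) return local;
        double[] n = dir[1] > 0 ? new double[]{0, -1, 0} : new double[]{0, 1, 0};
        double[] r = reflect(dir, n);
        // correct the local shading by the light arriving via specular reflection
        return local + REFLECTIVITY * trace(r, depth + 1);
    }

    public static void main(String[] args) {
        System.out.println(trace(new double[]{0, 1, 0}, 0));
    }
}
```

The depth limit is what makes the recursion terminate even between two perfect mirrors; without it, the ray would bounce forever.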
