one can try to rigorously simulate the illumination process; on the other, one might
be satisfied with achieving the illusion of realism. The first approach is an ideal that
inevitably takes a lot of CPU cycles. The second allows one to take shortcuts that
produce results more quickly while, one hopes, still producing good images. Actual illumination
models used in practice differ from theoretical derivations. Hall ([Hall89]) classifies
them into three types: empirical, transitional, and analytical. The corresponding
shading techniques that evolved from these three models are classified as being incre-
mental, using ray tracing, or using radiosity methods, respectively. There are now also
hybrid approaches using the last two techniques.
The first illumination models were empirical in nature. The illumination values
were evaluated after the geometry was transformed to screen space and standard scan
line incremental approaches were used. Transitional models incorporated more optics
and worked with more object-space geometry, so that reflections, refractions, and
shadows came out geometrically correct. Ray tracing began in this period. Gradually,
analytical models were developed. Hall describes the shift in approaches as “a shift in research from the
developed. Hall describes the shift in approaches as “a shift in research from the
hidden surface problem, to creating realistic appearance, to simulating the behavior
that creates the appearance.” He points out that, when one looks at how illumination
and shading are dealt with, the two main approaches can be explained in
terms of two questions. One approach starts at the eye, considers the visible surfaces,
and asks for each visible pixel:
“What information is required to calculate the color for this surface point?”
Getting this information implies that other surfaces must be considered and so the
same question is asked recursively. Ray tracing is an example of this approach. The
second approach starts at the light sources, traces the light energy, and asks:
“How is this light reflected or transmitted by this surface?”
From this point of view, every illuminated surface becomes an emitter. This is how
radiosity works.
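The eye-first question can be made concrete with a toy sketch. The names below (`Hit`, `shade`, the `emitted` and `reflectance` fields) are illustrative stand-ins, not any particular renderer's API; the point is only that the color at a point is answered by asking the same question recursively of the surface seen along the reflected ray.

```python
# Toy sketch of the eye-first (ray-tracing) question: the color at a surface
# point recursively depends on the color fetched along a secondary ray.
# All names here are hypothetical stand-ins for a real scene representation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hit:
    emitted: float            # light emitted at the hit point
    reflectance: float        # fraction of incoming light reflected
    bounce: Optional["Hit"]   # surface seen along the reflected ray, if any

def shade(hit: Optional[Hit], depth: int = 0, max_depth: int = 3) -> float:
    """Answer 'what color is this point?' by asking the same question
    recursively of the next surface along the reflected ray."""
    if hit is None or depth > max_depth:
        return 0.0   # ray escaped the scene, or recursion cut off
    # color here = own emission + reflected contribution from the next surface
    return hit.emitted + hit.reflectance * shade(hit.bounce, depth + 1, max_depth)

# A mirror (reflectance 0.5) seeing a light source of brightness 2.0:
mirror = Hit(emitted=0.0, reflectance=0.5, bounce=Hit(2.0, 0.0, None))
print(shade(mirror))  # 0.0 + 0.5 * 2.0 = 1.0
```

Because every answer depends on where the eye ray went first, the map of light this produces is tied to the viewpoint, which is exactly the view dependence the next paragraph describes.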
The first approach where we start at the eye generates a view-dependent map of
light as it moves from surfaces to the eye. Every new viewpoint calls for a new map.
Initially, rendering programs used only the ambient and diffuse component of light
([Bouk70]), and then the specular component was added ([BuiT75]). This worked
fairly well for isolated objects. Eventually, reflections were also dealt with, culminat-
ing in ray tracing. The ray-tracing program described by Kajiya ([Kaji86]) started at
the eye and sent out 40 rays per pixel.
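Why send many rays through one pixel? Each ray is a noisy sample of the pixel's true brightness, and averaging independent samples drives the error down. The sketch below is not Kajiya's algorithm; it just models one traced ray as a noisy measurement (a made-up `radiance_sample`) and shows that a 40-ray budget yields a smaller average error than a single ray.

```python
# Hypothetical illustration of per-pixel ray budgets: averaging N noisy
# samples of the true pixel value reduces the error roughly as 1/sqrt(N).
import random
import statistics

def radiance_sample(true_value: float, noise: float, rng: random.Random) -> float:
    """Stand-in for tracing one ray: the true value plus random noise."""
    return true_value + rng.uniform(-noise, noise)

def pixel_value(n_rays: int, rng: random.Random) -> float:
    """Average n_rays samples, as a per-pixel ray budget does."""
    return sum(radiance_sample(0.5, 0.3, rng) for _ in range(n_rays)) / n_rays

rng = random.Random(0)  # seeded for reproducibility
err_1 = statistics.mean(abs(pixel_value(1, rng) - 0.5) for _ in range(1000))
err_40 = statistics.mean(abs(pixel_value(40, rng) - 0.5) for _ in range(1000))
print(err_1 > err_40)  # True: more rays per pixel, less noise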
The second approach to rendering where we start from the light generates a view-
independent map. This approach may seem extremely wasteful at first glance because
one makes computations for surfaces that are not seen. However, starting at the eye
gets very complicated if one wants to render diffuse surfaces correctly. Single
reflections between n surfaces require n² computations. This number goes up very
rapidly as one attempts to follow subsequent reflections. Furthermore, having made
the computations once, one can then make as many images as one wants without any
additional work.
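The n² figure can be checked by simple counting: with a single bounce, light leaving any one of the n diffuse surfaces can land on any of the others, so one pass must consider every ordered (source, receiver) pair. The helper below is purely illustrative.

```python
# A quick check of the n^2 claim: one bounce among n diffuse surfaces
# involves every ordered (source, receiver) pair of distinct surfaces.
def single_reflection_pairs(n: int) -> int:
    """Count ordered (source, receiver) pairs among n surfaces."""
    return sum(1 for src in range(n) for dst in range(n) if src != dst)

print(single_reflection_pairs(10))  # 90 = 10 * 9, i.e. on the order of n^2
```

Exactly n(n - 1) pairs, which grows as n²; following second and later bounces multiplies this again, which is why the view-independent precomputation is expensive but then reusable for every image.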
Two important phenomena one has to watch out for when modeling light are: