complicated, to say the least. Not only is the physics[1] of the light bouncing
around very complicated, but so are the physiology of the sensing equipment
in our eyes[2] and the interpreting mechanisms in our minds. Thus, ignoring
a great number of details and variations (as any introductory book must
do), the basic question that any rendering system must answer for each
pixel is “What color of light is approaching the camera from the direction
corresponding to this pixel?”
There are basically two cases to consider. Either we are looking directly
at a light source and light traveled directly from the light source to our
eye, or (more commonly) light departed from a light source in some other
direction, bounced one or more times, and then entered our eye. We can
decompose the key question asked previously into two tasks. This book
calls these two tasks the rendering algorithm, although these two highly
abstracted procedures obviously conceal a great deal of complexity about
the actual algorithms used in practice to implement them.
The rendering algorithm

1. Visible surface determination. Find the surface that is closest to the
eye, in the direction corresponding to the current pixel.
2. Lighting. Determine what light is emitted and/or reflected off this
surface in the direction of the eye.
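
Taken together, the two tasks amount to a loop over the pixels of the image. The sketch below is a minimal structural outline in C++; Scene, Surface, Ray, and every function in it are placeholder names invented for illustration, not parts of any real renderer.

#include <vector>

// A minimal structural sketch of the rendering algorithm. Every type
// and function here is a placeholder, not a real API.
struct Color { float r, g, b; };
struct Ray   { /* origin and direction through one pixel */ };

struct Surface {
    // Task 2 (lighting): light emitted and/or reflected toward the eye.
    Color shade(const Ray &) const { return {1.0f, 1.0f, 1.0f}; }
};

struct Scene {
    // Task 1 (visible surface determination): the surface closest to
    // the eye in the direction of this ray, or nullptr on a miss.
    const Surface *findNearestSurface(const Ray &) const { return nullptr; }
    Color background() const { return {0.0f, 0.0f, 0.0f}; }
};

// The direction corresponding to pixel (x, y).
Ray rayThroughPixel(int x, int y) { return {}; }

std::vector<Color> render(const Scene &scene, int width, int height) {
    std::vector<Color> image(width * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Ray ray = rayThroughPixel(x, y);
            const Surface *surf = scene.findNearestSurface(ray);
            image[y * width + x] =
                surf ? surf->shade(ray) : scene.background();
        }
    }
    return image;
}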
At this point it appears that we have made some gross simplifications,
and many of you no doubt are raising your metaphorical hands to ask
“What about translucency?” “What about reflections?” “What about
refraction?” “What about atmospheric effects?” Please hold all questions
until the end of the presentation.
The first step in the rendering algorithm is known as visible surface
determination. There are two common solutions to this problem. The first
is known as raytracing. Rather than following light rays in the direction
that they travel from the emissive surfaces, we trace the rays backward, so
that we can deal only with the light rays that matter: the ones that enter
our eye from the given direction. We send a ray out from the eye through
the center of each pixel[3] to see the first object in the scene this ray
strikes. Then we compute the color that is being emitted or reflected from
that surface back along the ray toward the eye.
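
To make the "first object this ray strikes" step concrete, here is a small self-contained C++ example that intersects one eye ray with a single sphere. The Vec3 type and intersectSphere function are inventions for this sketch; a real raytracer would test the ray against every object in the scene and keep the smallest positive hit distance.

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the smallest positive t such that origin + t*dir lies on the
// sphere, or a negative value if the ray misses it entirely.
double intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
    Vec3 oc = sub(origin, center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;          // discriminant of the quadratic
    if (disc < 0.0) return -1.0;                // ray misses the sphere
    return (-b - std::sqrt(disc)) / (2.0 * a);  // nearer of the two roots
}

int main() {
    // Eye at the origin, ray through a pixel pointing down the -z axis,
    // unit sphere centered 5 units in front of the eye.
    Vec3 eye = {0, 0, 0}, dir = {0, 0, -1}, center = {0, 0, -5};
    double t = intersectSphere(eye, dir, center, 1.0);
    if (t > 0.0)
        std::printf("hit at distance t = %f\n", t);  // prints t = 4.000000
    else
        std::printf("miss\n");
    return 0;
}

Note that taking the nearer root implements "closest to the eye" for a single object; with many objects in the scene, the loop simply keeps whichever object yields the smallest such t.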
[1] Actually, almost everybody approximates the true physics of light by using
simpler geometric optics.
[2] Speaking of equipment, there are also many phenomena that occur in a camera
but not the eye, or as a result of the storage of an image on film. These
effects, too, are often simulated to make it look as if the animation were filmed.
[3] Actually, it's probably not a good idea to think of pixels as having a
“center,” as they are not really rectangular blobs of color, but rather are
best interpreted as infinitely small point samples in a continuous signal. The
question of which mental model is best is incredibly important [33, 66], and is
intimately related to the process by which the pixels are combined to
reconstruct an image. On CRTs, pixels were definitely not little rectangles,
but on modern display devices such as LCD monitors, “rectangular blob of
color” is not such a bad description after all.