1.13.3 Light Capture
The sensor in a camera and the human eye both respond to light in a similar way: they accumulate light energy for some period of time and then report that accumulated energy. In the case of the sensor, the time period is determined by the shutter opening; in the case of a cell in the eye, the cell sends a signal when the accumulated light reaches some level, so the frequency of signals is proportional to the arriving light intensity. Simulating such a sensor (or cell) therefore involves computing an integral of the incoming light over the area of the sensor.
Writing down an analytic solution to this integral for any but the simplest scenes is impractical. For more interesting scenes, we have to perform numerical integration. That necessarily introduces some error, but it also opens up a vast array of computational options for trading quality against time and space. Numerical integration involves evaluating the integrand at several places (this is called sampling) and then combining these samples to estimate the overall value. The simplest version of this approach is to evaluate the incoming light at the sensor center only, and to multiply this sample by the area of the sensor to estimate the overall incoming light integral. If the incoming light intensity changes slowly as a function of position, this works quite well; if it changes rapidly, the single-sample approximation introduces many kinds of error.
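The single-sample estimate can be contrasted with a denser sampling of the same integrand. The sketch below assumes a made-up, rapidly varying intensity function over a 1×1 sensor, and uses the midpoint rule on a 64×64 grid as one concrete way of "combining samples"; all the names here are illustrative, not from the text.

```python
import math

def intensity(x, y):
    # Hypothetical incoming light intensity at position (x, y);
    # it varies rapidly across the sensor, which makes the
    # single-sample error visible.
    return 1.0 + math.sin(40.0 * x) * math.cos(40.0 * y)

def single_sample_estimate(width, height):
    # Evaluate the integrand at the sensor center only, then
    # multiply that one sample by the sensor area.
    return intensity(width / 2, height / 2) * width * height

def multi_sample_estimate(width, height, n=64):
    # Midpoint rule: average the integrand over an n-by-n grid of
    # sample points, then multiply the average by the sensor area.
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) * width / n
            y = (j + 0.5) * height / n
            total += intensity(x, y)
    return total / (n * n) * width * height

# For this integrand the true integral over the unit square is very
# close to 1.0; the center sample alone overestimates it noticeably.
print(single_sample_estimate(1.0, 1.0))
print(multi_sample_estimate(1.0, 1.0))
```

If the `sin`/`cos` factors are removed so the intensity is constant, the two estimates agree exactly, which is the "changes slowly" case described above.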
1.13.4 Image Display
Modern displays typically are divided into small squares called pixels; each small square¹³ is individually addressable and can be told to send out a mix of red, green, and blue (RGB) light by specifying a triple of numbers (r, g, b), each between 0 and 255. The amount of light emitted from the square is not directly proportional to the numbers; instead, it follows a relation so that equal differences in numbers correspond approximately to equal differences in perceived brightness. You've probably encountered the use of RGB triples in some photo-editing program; typically the RGB values each occupy a single byte, and hence are represented by numbers between 0 and 255, and sometimes are written as two-digit hexadecimals. Thus, a color expressed as 0xFF00CC can be read as "Red is FF, which is 255, there's no green at all, and there's CC worth of blue, which is 204 decimal; that means it's a somewhat reddish purple." It isn't obvious what any given color triple or set of hexadecimal values will yield as a hue; color specification is discussed in Chapter 28.
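As an illustration of reading such a triple, the sketch below decodes 0xFF00CC into its byte components and applies a simple power law to show that emitted light is not proportional to the stored number. The exponent 2.2 is an assumed, display-dependent value, and the function names are hypothetical.

```python
def parse_hex_color(code):
    # Split a packed color like 0xFF00CC into its red, green,
    # and blue bytes by shifting and masking.
    r = (code >> 16) & 0xFF
    g = (code >> 8) & 0xFF
    b = code & 0xFF
    return r, g, b

def displayed_intensity(value, gamma=2.2):
    # The light a display emits is roughly (value/255)**gamma rather
    # than value/255 itself; gamma = 2.2 is a common assumed exponent.
    return (value / 255) ** gamma

print(parse_hex_color(0xFF00CC))  # -> (255, 0, 204)
```

Note that under this power law a stored value of 128 emits well under half the light of a stored value of 255, even though 128 is about half of 255; that is the nonproportional relation described above.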
1.13.5 The Human Visual System
Our eyes respond to light that arrives at the cornea, passes through the pupil and lens, and reaches the retina. While direct sunlight is almost 10^10 times as bright as the faint light in a dark bedroom, our eyes can detect and process both, but not at the same time. In fact, our eyes adapt to the general illumination around us, and once adapted we can distinguish light intensities that range over a factor of about 1000: the faintest thing we can distinguish from black presents light to our eyes that's about 1/1000 the intensity of the thing we perceive as being "as bright
13. The term “pixel” is also used to denote one of the values stored in an image, or a small
physical portion of a sensor. There are subtle differences in these denotations, and you
should not say “a pixel is a little square” [Smi95].