broadcast first, followed by the top frame of the right-center column, followed by
the second-to-top row of the left-center column, etc.
The interlaced frames are chosen by repeating even frames from the original
film twice and odd frames from the film three times; that is, odd and even frames
appear in the ratio 3:2. Notice how in the center columns the even source frames
A and C appear twice and the odd source frames B and D appear three times.
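The 2:3 repetition pattern described above can be sketched in a few lines of code (a minimal illustration; the function name and the frame labels A–D are hypothetical):

```python
def pulldown_3_2(frames):
    """Repeat even-indexed film frames twice and odd-indexed frames
    three times, producing the 3:2 pulldown field sequence."""
    out = []
    for i, frame in enumerate(frames):
        out.extend([frame] * (2 if i % 2 == 0 else 3))
    return out

# Four film frames yield ten fields, matching the 24 frame/s film
# to 60 field/s video rate conversion (24 * 10/4 = 60).
print(pulldown_3_2(["A", "B", "C", "D"]))
# → ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```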
35.3.4 Temporal Aliasing and Motion Blur
Rendering a frame from a single instant in time is convenient because all geometry
can be considered static for the duration of the frame. Each pixel value in an image
represents an integral over a small amount of the image plane in space and a small
amount of time called the shutter time or exposure time. Film cameras contained
a physical shutter that flipped or irised open for the exposure time. Digital cameras
typically have an electronic shutter. For a static scene, the measured energy will
be proportional to the exposure time. A virtual camera with zero exposure can be
thought of as computing the limit of the image as the exposure time approaches
zero.
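The space-time integral described above can be approximated numerically. The sketch below, using a hypothetical time-varying radiance function at a single pixel, averages several temporal samples spread across the shutter interval; shrinking the exposure toward zero recovers the instantaneous value:

```python
def radiance(t):
    # Hypothetical radiance at one pixel: an object edge sweeps past
    # the pixel at t = 0.004 s during the exposure.
    return 1.0 if t < 0.004 else 0.2

def pixel_value(shutter_time, n_samples):
    """Approximate the pixel's temporal integral by averaging the
    radiance at n_samples instants spread over [0, shutter_time]."""
    total = sum(radiance(shutter_time * (i + 0.5) / n_samples)
                for i in range(n_samples))
    return total / n_samples

pixel_value(0.001, 8)  # short exposure: edge never moves, value is 1.0
pixel_value(0.01, 8)   # long exposure: the moving edge blends in, value 0.5
```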
There are reasons to favor both long and short exposure times in real cameras.
In a real camera, short exposure times lead to noise. For moderately short
exposure times (say, 1/100 s) under indoor lighting, background noise on the
sensor may become significant compared to the measured signal. For extremely
short exposure times (say, 1/10,000 s), there also may not be enough photons
incident on each pixel to smooth out the result. Nature itself uses discrete sampling
because photons are quantized. In computer graphics we typically consider the
“steady state” of a system under large numbers of photons, but this model breaks
down for very short measurement intervals. A long exposure avoids these noise
problems but leads to blur. For a dynamic scene or camera, the incident radiance
function is not constant on the image plane during the exposure time. The resultant
image integrates the varying radiance values, which manifest as objects blurring
proportional to their image space velocity. Small camera rotations due to a shaky
hand-held camera result in an entirely blurry image, which is undesirable.
Likewise, if the screen space velocity of the subject is nonzero, the subject will appear
blurry. This motion blur can be a desirable effect, however. It conveys speed. A
very short exposure of a moving car is identical to that of a still car, so the observer
cannot judge the car's velocity. For a long exposure, the blur of the car indicates
its velocity. If the car is the subject of the image, the photographer might choose
to rotate the camera to limit the car to zero screen-space velocity. This blurs the
background but keeps the car sharp, thus maintaining both a sharp subject and the
velocity cue.
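The velocity cue can be quantified with back-of-the-envelope arithmetic: the length of a motion streak in the image is roughly the subject's image-space speed multiplied by the exposure time. The numbers below are illustrative, not from the text:

```python
def blur_extent_px(speed_px_per_s, exposure_s):
    """Approximate streak length, in pixels, of a subject moving at
    speed_px_per_s across the image during an exposure of exposure_s."""
    return speed_px_per_s * exposure_s

# A car crossing the frame at 2000 px/s (assumed speed):
blur_extent_px(2000, 1 / 1000)  # 1/1000 s exposure: 2 px, essentially sharp
blur_extent_px(2000, 1 / 30)    # 1/30 s exposure: a ~67 px streak
```

Panning the camera with the car drives the car's image-space speed toward zero, so its streak length goes to zero while the background's grows, which is exactly the sharp-subject, blurred-background photograph described above.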
For rendering, our photons are virtual and there is no background noise, so a
short exposure does not produce the same problems as with a real camera.
However, just as taking only one spatial sample per pixel results in aliasing, so does
taking only one temporal sample. The top row of Figure 35.12 shows two images
of a very thin row of bars, as on a cage. If we take only one spatial sample per
pixel, say, at the pixel center, then for some subpixel camera offsets the bars are
visible and for others they are invisible. Note that as the spatial sampling
density increases, the bars can be resolved at any position. The bottom row shows
the result of the equivalent experiment performed for temporal samples. A fast-
moving car is driving past the camera in the scene depicted. For a single temporal