Figure 35.13: Undersampling in time with regular (a.k.a. uniform, statistically dependent) samples within each pixel (left) produces ghosting. Undersampling with stochastic (a.k.a. independent, random) samples produces noise (right) [AMMH07]. (Courtesy of Jacob Munkberg and Tomas Akenine-Möller)
tracking. However, all live-action film faces exactly this problem, and viewers rarely experience disorientation at the fact that the images have been preintegrated over time for their eyes. If the director does a good job of directing the viewer's attention, the camera will be tracking the object of primary interest in the same way that the eye would. Presumably, a poorly directed film fails at this and creates some disorientation, although we are not aware of a specific scientific study of this effect. Similar problems arise with defocus due to limited depth of field and with stereoscopic 3D. For interactive 3D rendering, all three effects present a larger challenge because it is hard to control or predict attention in an interactive world. Yet in recent years, games in particular have begun to experiment with these effects and have achieved some success even in the absence of eye tracking.
Antialiasing, motion blur, and defocus are all cases of integrating over a larger sampling area than a single point to produce synthetic images that more closely resemble those captured by a real camera. Many rendering algorithms combine these into a “5D” renderer, where the five dimensions of integration are subpixel x, y, time, and lens u, v. Cook et al.'s original distribution and stochastic ray tracing schemes [CPC84, Coo86] can be extended to statistically dependent temporal samples per pixel by simply rendering multiple frames and then averaging the results. Because all samples in each frame are taken at the same time, for large motions this produces discrete “ghosts” of fast-moving objects instead of noisy ghosts, as shown in Figure 35.13. Neither result is ideal: the image has been undersampled in time, and each artifact is a form of aliasing. The advantage of averaging multiple single-time frames is that any renderer, including a rasterization renderer, can be trivially extended to simulate motion blur in this way.
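
To make the contrast in Figure 35.13 concrete, the short C++ sketch below implements both strategies against a stand-in scene: averaging several single-time frames (statistically dependent times) versus giving every sample its own random time (independent times). The renderSample() stub, the moving-square test scene, the image dimensions, and the function names are hypothetical placeholders rather than part of any particular renderer; only the way the temporal samples are distributed is the point.

    // A minimal sketch of the two ways of distributing temporal samples that are
    // contrasted in Figure 35.13. The renderSample() stub, the moving-square test
    // scene, and the function names below are illustrative placeholders only.
    #include <cstdio>
    #include <cstdint>
    #include <random>
    #include <vector>

    struct Color { float r = 0, g = 0, b = 0; };

    // Stand-in scene: a white square sweeping horizontally across the frame during
    // the shutter interval t in [0, 1). A real renderer would shade the scene here.
    Color renderSample(int px, int py, float t) {
        float cx = 16.0f + 96.0f * t;   // square center moves left to right
        bool inside = px > cx - 8 && px < cx + 8 && py > 28 && py < 44;
        return inside ? Color{1, 1, 1} : Color{};
    }

    // (a) Statistically dependent times: average N full frames, each rendered at a
    //     single shared time. Fast-moving objects leave N discrete "ghosts".
    std::vector<Color> motionBlurByFrameAveraging(int w, int h, int numFrames) {
        std::vector<Color> image(w * h);
        for (int f = 0; f < numFrames; ++f) {
            float t = (f + 0.5f) / numFrames;   // same time for every sample in this frame
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    Color c = renderSample(x, y, t);
                    Color& acc = image[y * w + x];
                    acc.r += c.r / numFrames;
                    acc.g += c.g / numFrames;
                    acc.b += c.b / numFrames;
                }
        }
        return image;
    }

    // (b) Independent (stochastic) times: every sample draws its own random time, so
    //     the same temporal undersampling appears as noise rather than ghosting.
    std::vector<Color> motionBlurByStochasticTimes(int w, int h, int samplesPerPixel,
                                                   uint32_t seed = 1) {
        std::vector<Color> image(w * h);
        std::mt19937 rng(seed);
        std::uniform_real_distribution<float> uniformTime(0.0f, 1.0f);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                Color& acc = image[y * w + x];
                for (int s = 0; s < samplesPerPixel; ++s) {
                    Color c = renderSample(x, y, uniformTime(rng));   // per-sample time
                    acc.r += c.r / samplesPerPixel;
                    acc.g += c.g / samplesPerPixel;
                    acc.b += c.b / samplesPerPixel;
                }
            }
        return image;
    }

    int main() {
        const int w = 128, h = 72;
        std::vector<Color> ghosted = motionBlurByFrameAveraging(w, h, 4);
        std::vector<Color> noisy   = motionBlurByStochasticTimes(w, h, 4);
        // Both images estimate the same temporal integral, so their mean luminance
        // should agree closely; only the structure of the error differs.
        auto mean = [](const std::vector<Color>& img) {
            double s = 0;
            for (const Color& c : img) s += (c.r + c.g + c.b) / 3.0;
            return s / img.size();
        };
        std::printf("frame-averaged mean = %.4f, stochastic mean = %.4f\n",
                    mean(ghosted), mean(noisy));
        return 0;
    }

With four times per pixel, the first estimator draws four distinct copies of the square at one-quarter brightness, while the second smears the same average brightness into per-pixel noise, mirroring the left and right halves of Figure 35.13.
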
The Reyes micropolygon rendering algorithm [CCC87], which has been heavily used for film rendering, is a kind of stochastic rasterizer. It takes multiple temporal samples during rasterization to produce effects like motion blur, avoiding the problem of dependent time samples. Akenine-Möller et al. [AMMH07] introduced explicit temporal stochastic rasterization for triangles, and Fatahalian