Coo86, Cro77, Mit87, Mit96, KCODL06, Bri07, dGBOD12]. This work is closely
tied to research on blue noise random number generation [dGBOD12].
One drawback to using screen-space patterns is the screen-door effect
(Chapter 34). A static pseudorandom pattern in screen space is not perceptible in
a single image, but when the pattern is held fixed in screen space and the view or
scene is dynamic, that pattern becomes perceptible. It looks as if the scene were
being viewed through a screen door. This is easy to see in the real world. Hold
your head still and look through a window (or slightly dirty eyeglasses). The glass
is largely invisible, but when you move your head, the imperfections in the glass
and the dirt on it are accentuated. This is because your visual system is trying to enforce temporal
are accentuated. This is because your visual system is trying to enforce temporal
coherence on the objects seen through the glass. Their appearance is changing in
time because of the imperfections in the glass in front of them, so you are able to
perceive those imperfections.
Fortunately, when the sampling pattern resolution falls below the resolution of
visual acuity, the perception of the screen-door effect is minimal. This is exploited
by supersampling and alpha-to-coverage transparency, which operate below the
pixel scale and are therefore inherently close to the smallest discernible feature
size. Dithering works well when the image is static or the pixels are so small as
to be invisible, but it produces a screen-door effect for animations rendered to a
display with large pixels.
It is challenging to stamp patterns in continuous spaces; for example, a ray
tracer or photon mapper's global illumination scattering samples. Here, replacing
pseudorandom sampling with sampling based on a hash of the sample location
is more appropriate. By their very nature, hash functions tend to map nearby
inputs to disparate outputs, so this only maintains coherence for static scenes
with dynamic cameras. For dynamic scenes, a spatial noise function is prefer-
able [Per85] because it is itself spatially coherent, yet pseudorandom.
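The hash-based idea can be sketched concretely: derive each sample's pseudorandom value from a hash of its world-space location, so a static surface point receives the same value in every frame no matter how the camera moves. The function below is a minimal illustration of that principle, not code from any of the cited systems; the quantization grid size is an arbitrary choice for the sketch.

```python
import hashlib

def hash01(x, y, z, seed=0):
    """Map a 3D sample location to a deterministic value in [0, 1).

    Because the value depends only on the (quantized) position, a static
    scene point gets the same "random" value every frame, regardless of
    camera motion -- unlike a pattern stamped in screen space.
    """
    # Quantize so nearby floating-point representations of the same
    # surface point hash identically (the grid size 1024 is an assumption).
    q = (round(x * 1024), round(y * 1024), round(z * 1024), seed)
    h = hashlib.blake2b(repr(q).encode(), digest_size=8).digest()
    return int.from_bytes(h, "little") / 2**64
```

Note how this also illustrates the text's caveat: two nearby inputs hash to disparate outputs, so once the geometry itself moves, coherence is lost and a spatially coherent noise function becomes preferable.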
Figure 35.14: The Dynamic Canvas algorithm [CTP+03] produces background-paper detail at multiple scales that transform evocatively under 3D camera motion. (Courtesy of Joëlle Thollot, "Dynamic Canvas for Non-Photorealistic Walkthroughs," by Matthieu Cunzi, Joëlle Thollot, Sylvain Paris, Gilles Debunne, Jean-Dominique Gascuel, and Frédo Durand, Proceedings of Graphics Interface 2003.)
Another approach to increasing temporal coherence of sample points is to
begin with an arbitrary sample set and then move the samples forward in time,
adding and removing samples as necessary. This approach is employed frequently
for nonphotorealistic rendering. The Dynamic Canvas [CTP+03] algorithm
(Figure 35.14) renders the background paper texture for 3D animations rendered
in the style of natural media. A still frame under this algorithm appears to be,
for example, a hand-drawn 3D sketch on drawing paper. As the viewer moves for-
ward, the paper texture scales away from the center to avoid the screen-door effect
and give a sense of 3D motion. As the viewer rotates, the paper texture trans-
lates. The algorithm overlays multiple frequencies of the same texture to allow for
infinite zoom and solves an optimization problem for the best 2D transformation
to mimic arbitrary 3D motion. The initial 2D transformation is arbitrary, and at
any point in the animation the transformation is determined by the history of the
viewer's motion, not the absolute position of the viewer in the scene.
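The infinite-zoom component can be sketched as follows. This is my reconstruction of the general idea, assuming a simple two-octave blend; the class name, the fold-back into [1, 2), and the linear weighting are all illustrative choices, not the paper's actual formulation (which solves an optimization over 2D transformations).

```python
import math

class InfiniteZoomPaper:
    """Sketch of infinite zoom over a self-similar paper texture.

    Two octaves of the same texture are overlaid; as the accumulated zoom
    grows, weight shifts from the coarse octave to the fine one. Once the
    zoom reaches 2x, the fine octave looks exactly like the coarse octave
    did at 1x, so the state folds back and zooming can continue forever.
    """

    def __init__(self):
        self.zoom = 1.0  # accumulated 2D zoom from the motion history

    def advance(self, frame_zoom):
        """Accumulate this frame's zoom and fold the state into [1, 2)."""
        self.zoom *= frame_zoom
        while self.zoom >= 2.0:
            self.zoom /= 2.0
        while self.zoom < 1.0:
            self.zoom *= 2.0

    def octave_weights(self):
        """Blend weights (coarse, fine); they always sum to 1."""
        t = math.log2(self.zoom)  # in [0, 1)
        return (1.0 - t, t)
```

Because the state depends only on the product of per-frame zooms, it embodies the point made above: the texture's transformation is determined by the history of the viewer's motion, not the viewer's absolute position.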
Another example of moving samples is brushstroke coherence, of the style
originally introduced for graftals [MMK+00] (see Figure 35.15). Graftals are
scene-graph elements corresponding to strokes or collections of strokes for small-
detail objects, such as tree leaves or brick outlines. A scene is initially rendered
with some random sampling of graftals; for example, leaves at the silhouettes of
trees. Subsequent frames reuse the same graftal set. When a graftal has moved
too far from the desired distribution due to viewer or object motion, it is replaced
with a newly sampled graftal. For example, as the viewer orbits a tree, graftals
moving toward the center of the tree in the image are replaced with new graftals near the silhouette.
Figure 35.15: The view-dependent tufts on the trees, grass, and bushes are rendered with graftals that move coherently between adjacent frames of camera animation. (Courtesy of the Brown Graphics Group, ©2000 ACM, Inc. Reprinted by permission.)
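The reuse-and-replace policy for graftals described above can be sketched as a per-frame update. This is a hypothetical helper, not the [MMK+00] implementation: it uses 1D positions for brevity, and the drift threshold and nearest-position test are illustrative simplifications of "too far from the desired distribution."

```python
import random

def update_graftals(graftals, desired_positions, max_drift, rng=random):
    """Keep each graftal from the previous frame for temporal coherence,
    unless it has drifted too far from the desired distribution, in which
    case replace it with a freshly sampled one.

    graftals and desired_positions are 1D coordinates (a simplification).
    """
    updated = []
    for g in graftals:
        # Distance from this graftal to the nearest desired position.
        d = min(abs(g - p) for p in desired_positions)
        if d <= max_drift:
            updated.append(g)  # reuse: temporally coherent
        else:
            updated.append(rng.choice(desired_positions))  # resample
    return updated
```

The design point is the asymmetry: reuse is the default, and resampling happens only when coherence would visibly violate the desired distribution, which is what keeps popping artifacts infrequent.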