renderer is in effect always rendering the first frame, and most film rendering is
limited by the cost of fetching assets across a network and computing intermediate
results that likely vary little from those computed on adjacent nodes.
35.3.7 The Burden of Temporal Coherence
When rendering an animation where the frames should exhibit temporal coher-
ence, an algorithm has the burden of maintaining that coherence. This burden is
unique to animation and arises from human perception.
The human visual system is very sensitive to change. This applies not only to
spatial changes such as edges, but also to temporal changes such as flicker or motion.
Artifacts that contribute little perceptual error to a single image can create large
perceptual error in an animation if they create a perception of false motion.
Four examples are “popping” at level-of-detail changes for geometry and texture,
“swimming jaggies” at polygon edges, dynamic or screen-door high-frequency
noise, and distracting motion of brushstrokes in nonphotorealistic rendering.
Popping occurs when a surface transitions between detail levels. Because
immediately before and immediately after a level-of-detail change either level
would produce a reasonable image, the still frames can look good individually but
may break temporal coherence when viewed sequentially. For geometry, blend-
ing between the detail levels by screen-space compositing, subdivision surface
methods (see Chapter 23), or vertex animation can help to conceal the transition.
Blending the final image ensures that the result is actually coherent, whereas
even smoothly blending geometry can still cause lighting and shadows to change
too rapidly. However, blending geometry guarantees the existence of a true surface
at every frame, while image compositing yields an ambiguous depth buffer or surface
for global illumination purposes. For materials, trilinear interpolation (see Chap-
ter 20) is the standard approach. This generates continuous transitions and allows
tuning for either aliasing (blurring) or noise. A drawback of trilinear interpola-
tion is that it is not appropriate for many expressions, for example, unit surface
normals.
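One way to blend between geometric detail levels by vertex animation is geomorphing: linearly interpolating corresponding vertex positions over the transition interval. The sketch below is a minimal illustration of that idea; the function name and data layout are illustrative, not from any particular engine's API, and it assumes the coarse level's vertices have already been put in correspondence with the fine level's.

```python
# Hypothetical geomorphing sketch: blend vertex positions between two
# levels of detail over a transition interval to hide "popping".

def lerp_vertices(coarse, fine, t):
    """Linearly blend corresponding vertices; t = 0 gives the coarse
    level and t = 1 the fine level. The two lists must align
    vertex-for-vertex (e.g., coarse vertices mapped onto the fine mesh)."""
    return [tuple(c + t * (f - c) for c, f in zip(vc, vf))
            for vc, vf in zip(coarse, fine)]

coarse = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
fine   = [(0.0, 0.1, 0.0), (1.0, 0.2, 0.0)]
print(lerp_vertices(coarse, fine, 0.5))  # halfway through the transition
```

Ramping `t` from 0 to 1 over several frames spreads the geometric change across the transition instead of concentrating it in a single frame, which is what produces the pop.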
Sampling a single ray per pixel produces staircase “jaggies” along the edges of
polygons. These are unattractive in a still image, but they are worse in animation
where they lead to a false perception of motion along the edge. The solution here
is simple: antialiasing, either by taking multiple samples per pixel or through an
analytic measure of pixel coverage.
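The multiple-samples-per-pixel approach can be sketched in a few lines. This is a minimal, self-contained illustration, not any renderer's actual interface: `shade` stands in for whatever returns radiance at a continuous image coordinate, and the jitter distribution is plain uniform random for simplicity.

```python
import random

def render_pixel(shade, x, y, n=16, seed=0):
    """Average n jittered samples inside pixel (x, y); shade(u, v)
    returns the scene radiance at continuous image coordinates."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += shade(x + rng.random(), y + rng.random())
    return total / n

# A hard vertical edge at u = 0.5: a single centered sample per pixel
# snaps to 0 or 1, while supersampling recovers fractional coverage.
edge = lambda u, v: 1.0 if u > 0.5 else 0.0
print(render_pixel(edge, 0, 0, n=256))  # close to 0.5
```

The averaged value approximates the fraction of the pixel covered by the bright side of the edge, which is exactly the quantity an analytic coverage measure would compute directly.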
High-frequency, low-intensity noise is rarely objectionable in still images.
This property underlies the success of half-toning and dithering approaches to
increasing the precision of a fixed color gamut. However, if a static scene is ren-
dered with noise patterns that change in each frame, the noise appears as static
swimming over the surfaces in the scene and is highly objectionable.
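Ordered dithering shows why a fixed screen-space pattern avoids this: the threshold at each pixel depends only on its position, so a static scene dithers to an identical image every frame. The sketch below uses an illustrative normalized 2x2 Bayer-style matrix; the exact threshold values are not important to the point.

```python
# Ordered dithering with a fixed 2x2 threshold matrix: because the
# threshold depends only on pixel position, the pattern is identical
# in every frame, so a static scene produces a static dithered image.
BAYER2 = [[0.25, 0.75],
          [1.00, 0.50]]

def dither(value, x, y):
    """Quantize a gray value in [0, 1] to 0 or 1 using a screen-space
    threshold that never changes between frames."""
    return 1 if value >= BAYER2[y % 2][x % 2] else 0

row = [dither(0.5, x, 0) for x in range(4)]
print(row)  # [1, 0, 1, 0] -- the same result on every frame
```

Replacing the fixed matrix with fresh random thresholds each frame would produce exactly the swimming noise described above, even though each individual frame would look equally acceptable.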
The problem of dynamic noise patterns arises from any stochastic sampling
algorithm. In addition to dithering, other common algorithms that are susceptible
to problems here include jittered primary rays in a ray tracer and photons in a
photon mapper. Three ways to avoid this kind of artifact are making the sampling
pattern static, using a hash function, and slowly adjusting the previous frame's
samples.
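The hash-function approach can be sketched as follows: derive each sample's jitter from a hash of the pixel coordinates and sample index, so the pattern is deterministic per pixel and stable across frames. This is an illustrative sketch only; SHA-1 stands in for whatever fast hash a real renderer would use.

```python
import hashlib

def jitter(x, y, i):
    """Derive a deterministic sample offset in [0, 1)^2 from the pixel
    coordinates and sample index. The same pixel always gets the same
    jitter, so a static scene renders identically every frame."""
    h = hashlib.sha1(f"{x},{y},{i}".encode()).digest()
    u = int.from_bytes(h[:4], "big") / 2**32
    v = int.from_bytes(h[4:8], "big") / 2**32
    return u, v

# Stable across frames, yet decorrelated between neighboring pixels:
assert jitter(10, 20, 0) == jitter(10, 20, 0)
print(jitter(10, 20, 0))
```

This gives the statistical benefits of jittered sampling without any frame-to-frame randomness, at the cost of a fixed (and therefore potentially visible) screen-space pattern.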
Supersampling techniques for antialiasing often rely on the static pattern
approach. This can be accomplished by stamping a specific pattern in screen
space. There has been significant research into which patterns to use [GS89,