Why is stability so difficult to achieve in dynamics? As we've seen in this chapter, dynamics is deeply related to ordinary differential equations and numerical integration. A numerical integrator's precision and accuracy are governed by three elements: its representation precision (e.g., floating-point quaternions), its approximation method for derivatives, and its advancement (stepping and integration) method. Each of these is a complex source of error. When integrating forward in time, the feed-forward nature of the system creates positive feedback that tends to amplify errors. This is what often creates additional energy and is a major cause of instability. Solving via a noncausal process (i.e., looking both forward and backward in time) can counter the positive feedback loop with corresponding negative feedback, which may increase stability. Unfortunately, solving a system for all time generally precludes interactivity, in which future inputs are unknown.
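This error amplification can be seen in a minimal sketch (our illustration, not from the text): forward Euler applied to an undamped harmonic oscillator, x' = v, v' = -x. The exact solution conserves the energy E = (x² + v²)/2, but each forward-Euler step multiplies E by exactly (1 + h²), so the integrator steadily injects energy.

```python
# Forward (explicit) Euler on an undamped harmonic oscillator:
#   x' = v,  v' = -x.
# The exact dynamics conserve E = (x^2 + v^2)/2, but one forward-Euler
# step multiplies E by (1 + h^2), so the simulated system gains energy
# without bound -- a concrete instance of error-amplifying feedback.

def euler_step(x, v, h):
    # Both updates use the values from the start of the step.
    return x + h * v, v - h * x

def energy(x, v):
    return 0.5 * (x * x + v * v)

x, v, h = 1.0, 0.0, 0.01   # initial state and step size (illustrative choices)
e0 = energy(x, v)
for _ in range(10_000):
    x, v = euler_step(x, v, h)

# Energy has grown by (1 + h^2)^10000, roughly a factor of e here.
print(energy(x, v) / e0)
```

The growth factor is independent of the scene: it is a property of the integrator itself, which is why the text identifies forward integration as a structural source of instability.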
In contrast to dynamics, physical simulation of light transport by various algorithms typically converges to a stable result. Energy in light differs from that in matter in three ways: Photons do not interact with one another (at macroscopic scales, at least), energy strictly decreases at interactions along a transport path through time and space (in everyday scenarios, at least), and the energy of light is independent of its position between interactions. The last point of comparison is the subtlest: Kinetic energy is explicit in a dynamics solver, but potential energy is largely hidden from the integrator and therefore a place in which error easily accumulates.
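The second property above has a simple quantitative consequence, sketched below under an illustrative assumption (a constant reflectance of 0.7, which is our choice, not a value from the text): because each interaction scales the carried energy by a factor in [0, 1), path throughput decays geometrically, so truncating or perturbing the tail of a path changes the result by a bounded and shrinking amount, the opposite of the amplification seen in dynamics.

```python
# Along a light transport path, each surface interaction multiplies the
# carried energy by a reflectance in [0, 1). Throughput therefore decays
# geometrically, and the summed contribution over all bounce depths
# converges -- errors in late bounces die out rather than feed back.
# (Reflectance 0.7 is an arbitrary illustrative value.)

def path_throughput(reflectance, bounces):
    t = 1.0
    for _ in range(bounces):
        t *= reflectance   # energy strictly decreases at each interaction
    return t

# Sum of contributions by bounce depth: a geometric series that
# converges toward 1 / (1 - reflectance).
total = sum(path_throughput(0.7, k) for k in range(100))
print(total)
```

This is why radiosity- and photon-mapping-style solvers can iterate purely forward yet still converge: the physics itself damps the feedback.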
Thus, although algorithms like radiosity and photon mapping simulate light exclusively forward (or backward) in time, the feedback in the light transport integrator is not amplified by the underlying physics. In contrast, in dynamics, the laws of mechanics and the hidden potential energy repository conspire to amplify error and instability. Even worse, this error is often not proportional to precision, so one's first efforts at addressing it by increasing precision are often insufficient. For example, turning all 32-bit floating-point values into 64-bit ones or halving the integration time step often doubles simulation cost without "doubling" stability. As a result, many practical dynamics simulators are rife with scene-specific constants controlling bias, energy restitution, and constraint weights. Manual tuning of these constants is tricky and often unsatisfying, since a loss of accuracy is frequently the price of stability.
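The claim that halving the time step need not buy stability is easy to demonstrate on a stiff test problem (our illustration; the stiffness k = 1000 and step sizes are arbitrary choices). Forward Euler on y' = -k·y is stable only when h < 2/k; until the step crosses that threshold, halving it doubles the work while the solution still explodes.

```python
# Forward Euler on the stiff test equation y' = -k*y is stable only for
# h < 2/k. Stability is therefore all-or-nothing: halving h doubles the
# cost, but until h crosses the threshold the solution still diverges.
# (k = 1000 is an arbitrary stiffness chosen for the demonstration; each
# run covers the same total simulated time of 2 seconds.)

def forward_euler_decay(k, h, steps):
    y = 1.0
    for _ in range(steps):
        y += h * (-k * y)   # y' = -k*y; multiplier per step is (1 - h*k)
    return abs(y)

k = 1000.0
print(forward_euler_decay(k, 0.01, 200))     # h*k = 10: diverges
print(forward_euler_decay(k, 0.005, 400))    # half the step, double the cost: still diverges
print(forward_euler_decay(k, 0.0005, 4000))  # below 2/k: finally decays toward 0
```

Real dynamics scenes mix many effective stiffnesses (stiff contacts, soft springs), so no single global step size is "safe," which is one reason simulators accumulate the scene-specific tuning constants the text describes.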
Thus, instability is an inherent problem in dynamics for interactive systems.
Stability is a primary criterion for evaluating dynamics systems and algorithms,
and much good work has been done on it in both industry and academia. Looking
toward the future, we offer two speculations on stability.
The first speculation is that the structure of the integrator may be at least as important as the schemes it uses for derivatives and steps, and that this may be a fruitful area in which to seek improvements. This statement is motivated by Guendelman, Bridson, and Fedkiw's work on stability for stacks of rigid bodies [GBF03]. They showed that simply reordering steps in the inner loop of the integrator can dramatically increase stability for scenarios that have been traditionally challenging to simulate, and then demonstrated some additional sorting methods for enhancing stability further. This inspired others to experiment with the integration loop, and has led to various minor changes that produced significant increases in the stability of popular dynamics simulators. This area merits further study.
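A small, well-known instance of how much inner-loop ordering can matter (our illustrative example, not the [GBF03] algorithm itself) is semi-implicit, or symplectic, Euler: it updates velocity first and then advances position with the new velocity. The per-step cost is identical to forward Euler; only the order of the two updates changes, yet for the oscillator x'' = -x the energy stays bounded instead of growing.

```python
# Reordering the inner loop: forward Euler vs. semi-implicit (symplectic)
# Euler on the oscillator x'' = -x. The two integrators do the same
# arithmetic per step; semi-implicit Euler merely updates velocity first
# and advances position with the *new* velocity. That ordering change
# keeps the energy bounded rather than growing without limit.

def forward_euler(x, v, h):
    return x + h * v, v - h * x          # position uses the old velocity

def semi_implicit_euler(x, v, h):
    v = v - h * x                        # velocity first...
    return x + h * v, v                  # ...then position uses the new velocity

def max_energy(step, n=10_000, h=0.01):
    # Track the peak energy over the run; the true value is constant at 0.5.
    x, v, peak = 1.0, 0.0, 0.0
    for _ in range(n):
        x, v = step(x, v, h)
        peak = max(peak, 0.5 * (x * x + v * v))
    return peak

print(max_energy(forward_euler))        # grows steadily past the true 0.5
print(max_energy(semi_implicit_euler))  # oscillates slightly, stays near 0.5
```

The Guendelman et al. result is in the same spirit at a larger scale: the reordering there involves collision processing and contact resolution rather than a single state update, but the lesson is the same, that the loop's structure can dominate its formal order of accuracy.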
The second speculation is that the conventional wisdom, "fourth-order Runge-Kutta with fixed time steps is good enough" for dynamics in computer graphics, may itself deserve reexamination.