Chapter 11
Numerical Robustness
An issue that has had only sporadic coverage so far is that of algorithm robustness.
Formal definitions of robustness are possible. As briefly discussed in Chapter 2, in this
book robustness is used simply to refer to a program's capacity to deal with numerical
computations and geometrical configurations that are in some way difficult to handle.
A robust program does not crash or get into infinite loops because of numerical errors.
A robust algorithm returns consistent results for equivalent inputs and deals with
degenerate situations in an expected way. A robust algorithm can still fail, but it fails
gracefully, returning results consistent with the global state of the algorithm.
Collision detection code is particularly sensitive to robustness problems due to
the numerical and geometrical nature of the algorithms involved. Specifically, solid
objects simply must not pass through each other or randomly fall through the ground.
Even so, there are still many reasons seemingly random fall-through may occur in
practice. This chapter discusses the numerical origins of these problems and means of
minimizing or even fully preventing them. Some geometrical sources of robustness
problems are also discussed.
11.1 Robustness Problem Types
For the type of computations used in collision detection applications, there are
two main categories of robustness problems: those due to precision inaccuracies and
those due to degeneracies. Precision problems arise because calculations on a
computer are not performed with exact real arithmetic but with limited-precision
floating-point or integer arithmetic. Degeneracies, in short, are special-case values
or conditions that occur during execution and that the algorithm in question was not
designed to handle.
Often algorithms are designed under the assumptions that all computations
are performed in exact real arithmetic and that no degeneracies exist in the