Implementation complexity has skyrocketed over the past 50 years despite
(and sometimes because of) graphics middleware libraries and standardization
of certain algorithms. Progress has been very slow outside of photorealism, perhaps
because the quality of nonphotorealistic renderings is evaluated subjectively.
Computer graphics does not today empower the typical user with the expressive
and communicative ability of an artist using natural media.
14.2.2 Legacy Models
Beware that in this chapter we describe both the representations that are preferred
for current practice and some that are less frequently recommended today. Some
of the older techniques make tradeoffs that one might not select intentionally if
designing a system from a blank slate today. That can be because they were
developed early in the history of computer graphics, before certain aspects were well
understood. It can also be because they were developed for systems that lacked
the resources to support a more sophisticated model.
We include techniques that we don't recommend using for two reasons. First,
this chapter describes what you need to know, not what you should do. Classic
graphics papers contain great key ideas surrounded by modeling artifacts of their
publication date. You need to understand the modeling artifacts to separate them
from the key ideas. Graphics systems contain models needed to support legacy
applications, such as Gouraud interpolation of per-vertex lighting in OpenGL. You
will encounter such systems, will likely have to help maintain them, and in
practice cannot abandon the past.
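
To make that legacy model concrete, the following C++ sketch shows what Gouraud shading computes; the vector type and the Lambertian lighting setup here are illustrative scaffolding of ours, not OpenGL's actual fixed-function API. Lighting is evaluated once per vertex, and only the resulting colors are interpolated across the triangle.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// Evaluate simple Lambertian (diffuse) lighting once, at a vertex.
Vec3 shadeVertex(Vec3 normal, Vec3 toLight, Vec3 albedo) {
    float lambert = std::max(0.0f, dot(normalize(normal), normalize(toLight)));
    return albedo * lambert;
}

// Gouraud interpolation: blend the three precomputed vertex colors with
// barycentric weights (b0 + b1 + b2 == 1) at each covered pixel. Lighting
// detail smaller than a triangle, such as a specular highlight, is lost;
// that loss is the characteristic artifact of the model.
Vec3 gouraudPixel(Vec3 c0, Vec3 c1, Vec3 c2, float b0, float b1, float b2) {
    return c0 * b0 + c1 * b1 + c2 * b2;
}

Per-pixel (Phong-style) shading inverts this ordering: it interpolates the normal instead and evaluates the equivalent of shadeVertex at every pixel, which costs more but preserves sub-triangle lighting detail.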
Second, out-of-fashion ideas have a habit of returning in future systems. As we
discussed earlier in this section, the best model for an application is rarely the
most accurate one; there are many factors to weigh. The relative costs of
addressing those factors are highly dynamic. One source of change in cost is algorithmic
discoveries. For example, the introduction of the fast Fourier transform, the rise
of randomized algorithms, and the invention of shading languages changed the
efficiency and implementation complexity of major graphics algorithms. Another
source of change is hardware. Progress in computer graphics is intimately tied to
the “constant factors” prescribed by the computers of the day, such as the ratio
of memory size to clock speed or the power draw of a transistor relative to battery
capacity. When technological or economic factors change these constants, the
preferred models for software change with them. When real-time 3D computer
graphics entered the consumer realm, it adopted models that the film industry had
abandoned a decade earlier as too primitive. A film industry server farm could
bring thousands of times more processing and memory to bear on a single frame
than a consumer desktop or game console, so that industry faced a very different
quality-to-performance tradeoff. More recently, the introduction of 3D graphics in
mobile form factors again resurrected some of the lower-quality approximations.
14.3 Real Numbers
An implicit assumption in most computer science is that we can represent real
numbers with sufficient accuracy for our application in digital form. In graphics
we often find ourselves dangerously close to the limit of available precision,
and many errors are attributable to violations of that assumption. So, it is worth
examining how real numbers are actually represented on computers.
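
As a small, self-contained illustration of that limit (the specific numbers below are ours, chosen for the example), consider how the spacing of 32-bit floats grows with magnitude:

#include <cstdio>

int main() {
    // A 32-bit float carries a 24-bit significand: roughly seven
    // significant decimal digits, wherever the value happens to lie.
    float nearOrigin = 1.0f;
    float farAway    = 10000000.0f;  // 1e7 world units from the origin

    // Near the origin, a quarter-unit step is represented exactly...
    std::printf("%.2f\n", nearOrigin + 0.25f);  // prints 1.25

    // ...but between 2^23 and 2^24, adjacent floats are 1.0 apart, so
    // the same step rounds away entirely: positions effectively snap
    // to a one-unit grid.
    std::printf("%.2f\n", farAway + 0.25f);     // prints 10000000.00
    return 0;
}

A mesh modeled a few units across but placed ten million units from the world origin therefore quantizes visibly, which is one common way the sufficient-accuracy assumption fails in practice.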