explicitly considering how we approximate real numbers before we build more
interesting data structures that use them.
Fixed point, normalized fixed point, and floating point are the most perva-
sive approximations of real numbers employed in computer graphics programs.
Each has finite precision, and error tends to increase as more operations are per-
formed. When the precision is too low for a task, surprising errors can arise. These
are often hard to debug because the algorithm may be correct (for real numbers),
so mathematical tests will yield seemingly inconsistent results. For example, con-
sider a physical simulation in which a ball approaches the ground. The simulator
might compute that the ball must fall d meters to exactly contact the ground. It
advances the ball d − 0.0001 meters, on the assumption that this will represent
the state of the system immediately before the contact. However, after that trans-
formation, a subsequent test reveals that the ball is in fact partly underneath the
ground. This occurs because mathematically true statements, such as d = d − a + a
(and especially, a = (a/b) · b), may not always hold for a particular approximation
of real numbers. This is compounded by optimizing compilers. For example,
a = b + c ; e = a + d may yield a different result than e = b + c + d due to dif-
fering intermediate precision, and even if you write the former, your optimizing
compiler may rewrite it as the latter. Perhaps the most commonly observed preci-
sion artifact today is self-shadowing “acne” caused by insufficient precision when
computing the position of a point in the scene independently relative to the camera
and to the light. When these give different results with an error in one direction,
the point casts a shadow on itself. This manifests as dark parallel bands and dots
across surfaces.
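The rounding of intermediate results is easy to observe directly. The short Python snippet below (a minimal illustration; the constants are chosen here purely for demonstration) shows that floating-point addition is not associative, which is the underlying reason that regrouping or re-timing a sum, as an optimizing compiler may do, can change the answer:

```python
# Floating-point addition is not associative: each intermediate sum is
# rounded to the nearest double, so grouping changes the result.
b, c, d = 0.1, 0.2, 0.3

a = b + c          # intermediate result, rounded once
e1 = a + d         # the "a = b + c; e = a + d" form
e2 = b + (c + d)   # the same sum, grouped differently

print(e1)  # 0.6000000000000001
print(e2)  # 0.6
assert e1 != e2
```

A test of the form `e1 == e2` is mathematically true but fails here, which is exactly the kind of seemingly inconsistent result described above.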
More exotic, and potentially more accurate, representations of real numbers
are available than fixed and floating point. For example, rational numbers can be
accurately encoded as the ratio of two bignums (i.e., dynamic bit-length integers).
These rational numbers can be arbitrarily close approximations of real numbers,
provided that we're willing to spend the space and time to operate on them. Of
course, we are seldom willing to pay that cost.
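As a concrete illustration, Python's `fractions.Fraction` is one readily available bignum-ratio representation of the kind described above: arithmetic on it is exact, at the cost of numerators and denominators that can grow with every operation.

```python
from fractions import Fraction

# Exact rational arithmetic: each value is a ratio of two arbitrary-
# precision integers, so no rounding ever occurs.
a = Fraction(1, 10) + Fraction(2, 10) + Fraction(3, 10)
assert a == Fraction(3, 5)         # exactly 3/5

# The same sum in double precision is not exact.
assert 0.1 + 0.2 + 0.3 != 0.6

# The cost: operand size grows as operations accumulate.
x = Fraction(1, 3) ** 200
print(x.denominator.bit_length())  # several hundred bits
```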
14.3.1 Fixed Point
Fixed-point representations specify a fixed number of binary digits and the loca-
tion of a decimal point among those digits. They guarantee equal precision inde-
pendent of magnitude. Thus, we can always bound the maximum error in the rep-
resentation of a real number that lies within the representable range. Fixed point
leads to fairly simple (i.e., low-cost) hardware implementation because the imple-
mentation of fixed-point operations is nearly identical to that of integer operations.
The most basic form is exact integer representation, which almost always uses the
two's complement scheme for efficiently encoding negative values.
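For instance (a small Python sketch, with an 8-bit width chosen purely for illustration), two's complement encodes a negative value −n in b bits as 2^b − n, which is why the same integer adder handles signed and unsigned values unchanged:

```python
BITS = 8
MASK = (1 << BITS) - 1   # 0xFF for 8 bits

def encode(v):
    """Two's complement encoding of v into BITS bits."""
    return v & MASK

def decode(u):
    """Interpret a BITS-bit pattern as a signed value."""
    return u - (1 << BITS) if u >= (1 << (BITS - 1)) else u

assert encode(-5) == 0b11111011                      # 2**8 - 5 = 251
assert decode(encode(-5)) == -5
# Signed addition is just unsigned addition modulo 2**BITS:
assert decode((encode(-5) + encode(7)) & MASK) == 2
```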
Fixed-point representations have four parameters: signed or unsigned, normal-
ized or not, number of integer bits, and number of fractional bits. The latter two
are often denoted using a decimal point. For example, "24.8 fixed-point format"
denotes a fixed-point representation that has 32 bits total, 24 of which are devoted
to the integer portion and eight to the fractional portion.
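The conversions implied by a format such as 24.8 can be sketched in a few lines of Python (the function names here are ours, not a standard API): a real number is scaled by 2^8 and rounded to an integer, addition is ordinary integer addition, and multiplication needs one extra shift to discard the doubled fractional bits.

```python
FRAC_BITS = 8            # "24.8" format: 24 integer bits, 8 fractional bits
ONE = 1 << FRAC_BITS     # the encoding of the real value 1.0

def to_fixed(r):
    """Encode real r as a 24.8 fixed-point integer (round to nearest)."""
    return int(round(r * ONE))

def to_real(f):
    """Decode a 24.8 fixed-point integer back to a real number."""
    return f / ONE

def fixed_mul(a, b):
    # The raw product carries 16 fractional bits; shift back down to 8.
    return (a * b) >> FRAC_BITS

assert to_fixed(1.5) == 0x180                                  # 1.5 * 256
assert to_real(to_fixed(1.5) + to_fixed(0.25)) == 1.75         # plain int add
assert to_real(fixed_mul(to_fixed(1.5), to_fixed(2.0))) == 3.0
```

Note that addition required no special handling at all, which is the sense in which fixed-point operations are nearly identical to integer operations.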
An unsigned normalized b-bit fixed-point value corresponding to the integer
x is interpreted as the real number x / (2^b − 1), that is, on the range
[0, 1]. A signed normalized fixed-point value has a range of [−1, 1]. Since direct
mapping of the range [0, 2^b − 1] to [−1, 1] would preclude an exact representation