Figure 11.2 Floating-point numbers are not evenly spaced on the number line. They are denser around zero (except for a normalization gap immediately surrounding zero) and become more and more sparse the farther from zero they are. The spacing between successive representable numbers doubles for each increase in the exponent.
The number range corresponding to the exponent of k + 1 has a spacing twice that of the number range corresponding to the exponent of k. Consequently, the density of
floating-point numbers is highest around zero. (The exception is a region immedi-
ately surrounding zero, where there is a gap. This gap is caused by the normalization
fixing the leading bit to 1, in conjunction with the fixed range for the exponent.) As
an example, for the IEEE single-precision format described in the next section there
are as many numbers between 1.0 and 2.0 as there are between 256.0 and 512.0.
There are also many more numbers between 10.0 and 11.0 than between 10000.0 and
10001.0 (1048575 and 1023, respectively).
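These spacings and counts can be checked directly. The following is a minimal sketch, not from the text, assuming a platform where float is an IEEE-754 single-precision number; it uses the fact that, for positive finite floats, consecutive bit patterns correspond to consecutive representable values.

```cpp
#include <cstdio>
#include <cmath>
#include <cstdint>
#include <cstring>

// Reinterpret a float's bits as an unsigned integer (assumes IEEE-754 single precision)
static uint32_t FloatBits(float f)
{
    uint32_t u;
    std::memcpy(&u, &f, sizeof(u));
    return u;
}

int main()
{
    // The spacing (ulp) doubles each time the exponent increases by one
    printf("spacing near 1.0:   %g\n", std::nextafterf(1.0f, 2.0f) - 1.0f);       // 2^-23
    printf("spacing near 2.0:   %g\n", std::nextafterf(2.0f, 4.0f) - 2.0f);       // 2^-22
    printf("spacing near 256.0: %g\n", std::nextafterf(256.0f, 512.0f) - 256.0f); // 2^-15

    // Counting representable floats strictly between two positive values
    printf("between 1.0 and 2.0:         %u\n", (unsigned)(FloatBits(2.0f) - FloatBits(1.0f) - 1));
    printf("between 256.0 and 512.0:     %u\n", (unsigned)(FloatBits(512.0f) - FloatBits(256.0f) - 1));
    printf("between 10.0 and 11.0:       %u\n", (unsigned)(FloatBits(11.0f) - FloatBits(10.0f) - 1));
    printf("between 10000.0 and 10001.0: %u\n", (unsigned)(FloatBits(10001.0f) - FloatBits(10000.0f) - 1));
    return 0;
}
```

The last two counts print 1048575 and 1023, matching the figures quoted above.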
Compared to fixed-point numbers, floating-point numbers allow a much larger range of values to be represented while good precision is maintained for small, near-zero numbers. The larger range makes floating-point numbers more convenient to work with than fixed-point numbers. It is still possible for numbers to become larger than can be expressed with a fixed-size exponent. When a floating-point number becomes too large it is said to have overflowed. Similarly, when the number becomes smaller than can be represented with a fixed-size exponent it is said to have underflowed. Because a fixed number of bits is always reserved for the exponent, floating-point numbers may be less precise than fixed-point numbers for certain number ranges. Today, with the exception of some handheld game consoles, all home computers and game consoles have hardware-supported floating-point arithmetic, with speeds matching and often exceeding those of integer and fixed-point arithmetic.
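A minimal sketch of both behaviors (not from the text, assuming IEEE-754 single precision with the default round-to-nearest mode): pushing the largest representable float past the exponent range produces infinity, while scaling the smallest normalized float far enough down underflows to zero.

```cpp
#include <cstdio>
#include <limits>

int main()
{
    float big  = std::numeric_limits<float>::max(); // largest finite float, ~3.4e38
    float tiny = std::numeric_limits<float>::min(); // smallest normalized float, ~1.2e-38

    float overflowed  = big * 2.0f;    // exceeds the exponent range -> +infinity
    float underflowed = tiny * 1e-8f;  // below even the smallest denormalized value -> 0

    printf("max * 2    = %g\n", overflowed);
    printf("min * 1e-8 = %g\n", underflowed);
    return 0;
}
```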
As will be made clear in the next couple of sections, floating-point arithmetic does
not have the same properties as arithmetic on real numbers, which can be quite
surprising to the uninitiated. To fully understand the issues associated with using
floating-point arithmetic, it is important to be familiar with the representation and
its properties. Today, virtually all platforms adhere (or nearly adhere) to the IEEE-754
floating-point standard, discussed next.
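One small, self-contained example of such a surprise (a sketch, not from the text): floating-point addition is not associative, because every intermediate result is rounded to the nearest representable value.

```cpp
#include <cstdio>

int main()
{
    float a = 1.0e20f, b = -1.0e20f, c = 1.0f;
    printf("(a + b) + c = %g\n", (a + b) + c); // prints 1: a + b is exactly 0
    printf("a + (b + c) = %g\n", a + (b + c)); // prints 0: c is lost when rounded into b + c
    return 0;
}
```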
11.2.1 The IEEE-754 Floating-point Formats
The IEEE-754 standard, introduced in 1985, is today the de facto floating-point
standard. It specifies two basic binary floating-point formats: single-precision and
double-precision floating-point numbers.
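As a brief preview (a minimal sketch, not from the text, assuming the standard single-precision layout), a single-precision number packs its value into 32 bits as a 1-bit sign, an 8-bit biased exponent, and a 23-bit fraction, and the fields can be pulled apart directly from the bit pattern:

```cpp
#include <cstdio>
#include <cstdint>
#include <cstring>

int main()
{
    float f = -6.25f; // = -1.5625 * 2^2
    uint32_t u;
    std::memcpy(&u, &f, sizeof(u));

    uint32_t sign     = u >> 31;          // 1 bit
    uint32_t exponent = (u >> 23) & 0xFF; // 8 bits, biased by 127
    uint32_t fraction = u & 0x7FFFFF;     // 23 bits of the mantissa (implicit leading 1)

    printf("sign = %u, exponent = %u (unbiased %d), fraction = 0x%06X\n",
           (unsigned)sign, (unsigned)exponent, (int)exponent - 127, (unsigned)fraction);
    return 0;
}
```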