Java Reference
In-Depth Information
decimal digits is approximate because the mantissa is binary, not decimal, and there's not an exact mapping
between binary and decimal digits.
Suppose you have a large number such as 2,134,311,179. How does this look as a floating-point number?
Well, as a decimal representation of a number of type float it looks like:
0.2134311E10
It's not quite the same. You have lost three low-order digits, so you have approximated the original value
as 2,134,311,000. This is a small price to pay for being able to handle such a vast range of numbers, typically
from 10^-38 to 10^+38, either positive or negative, as well as having an extended representation that goes from
a minute 10^-308 to a mighty 10^+308. As you can see, they are called floating-point numbers for the fairly
obvious reason that the decimal point “floats,” depending on the exponent value.
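You can see this loss of low-order digits directly in Java. A minimal sketch (the class name is just for illustration); note that the decimal picture above (2,134,311,000) is an approximation, since the stored value is actually binary, so the value you get back is close to, but not exactly, the rounded decimal:

```java
// Demonstrates the precision loss described above: a float cannot hold
// all ten digits of 2,134,311,179.
public class FloatPrecision {
    public static void main(String[] args) {
        float big = 2_134_311_179f;  // more digits than a float's mantissa can hold

        // Cast back to long to see the stored value with its low-order digits lost
        System.out.println((long) big);

        // The stored value is no longer equal to the original
        System.out.println((long) big == 2_134_311_179L);  // false
    }
}
```

Note that comparing `big == 2_134_311_179L` directly would print `true`, because the `long` is promoted to `float` before the comparison and suffers exactly the same rounding; casting the `float` back to `long` first exposes the difference.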
Aside from the fixed precision limitation in terms of accuracy, there is another aspect you may need to be
conscious of. You need to take great care when adding or subtracting numbers of significantly different mag-
nitudes. A simple example demonstrates the kind of problem that can arise. You can first consider adding
.365E−3 to .365E+7. You can write this as a decimal sum:
.000365 + 3,650,000
This produces the result:
3,650,000.000365
which when converted back to floating-point with seven-digit accuracy becomes
.3650000E+7
So you might as well not have bothered. The problem lies directly with the fact that you carry only seven-
digit precision. The seven digits of the larger number are not affected by any of the digits of the smaller
number because they are all farther to the right. Oddly enough, you must also take care when the numbers
are very nearly equal. If you compute the difference between such numbers you may end up with a result
that has only one or two digits' precision. It is quite easy in such circumstances to end up computing with
numbers that are total garbage.
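Both pitfalls are easy to reproduce in Java with `float` values. A short sketch (class and variable names are just for illustration):

```java
// Demonstrates the two pitfalls described above: adding values of very
// different magnitudes, and subtracting nearly equal values.
public class FloatPitfalls {
    public static void main(String[] args) {
        // Adding numbers of very different magnitudes: the smaller value
        // falls entirely to the right of the larger one's seven digits
        float large = 0.365E+7f;   // 3,650,000
        float small = 0.365E-3f;   // 0.000365
        System.out.println(large + small == large);  // true: the sum is unchanged

        // Subtracting nearly equal numbers: the leading digits cancel,
        // leaving a result with only a digit or so of genuine precision
        float a = 1.2345678f;
        float b = 1.2345671f;
        System.out.println(a - b);  // roughly 7E-7, but few of its digits are trustworthy
    }
}
```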
One final point about using floating-point values — many values that have an exact representation as a
decimal value cannot be represented exactly in binary floating-point form. For example, 0.2 as a decimal
value cannot be represented exactly as a binary floating-point value. This means that when you are working
with such values, you have tiny errors in your values right from the start. One effect of this is that accumu-
lating the sum of 100 values that are all 0.2 will not produce 20 as the result. If you try this out in Java, the
result is 20.000004, slightly more than you bargained for. (Unfortunately the banks do not use floating-point
values for your deposits — it could have been a sure-fire way to make money without working.)
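You can verify the accumulation error described above with a few lines of Java (the class name is just for illustration):

```java
// Accumulates 0.2 one hundred times; because 0.2 has no exact binary
// representation, the result drifts away from 20.
public class AccumulationError {
    public static void main(String[] args) {
        float sum = 0.0f;
        for (int i = 0; i < 100; ++i) {
            sum += 0.2f;  // each term carries a tiny representation error
        }
        System.out.println(sum);          // 20.000004, not 20.0
        System.out.println(sum == 20.0f); // false
    }
}
```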
You can conclude from this that although floating-point numbers are a powerful way of representing a
very wide range of values in your programs, you must always keep their limitations in mind. If you are
conscious of the range of values that you are likely to be working with, you can usually adopt an approach
to the calculations that avoids the sorts of problems I have described. In other words,
if you keep the pitfalls in mind when working with floating-point values, you have a reasonable chance of
stepping around or over them.