Hardware Reference
5. To add two floating-point numbers, you must adjust the exponents (by shifting the fraction) to make them the same. Then you can add the fractions and normalize the result, if need be. Add the single-precision IEEE numbers 3EE00000H and 3D800000H and express the normalized result in hexadecimal.
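A hand computation for this exercise is easier to check if the sign, exponent, and fraction fields of each operand are unpacked first. The C sketch below does only that unpacking (the helper name unpack is illustrative and not part of the exercise); it prints the fields so the exponent difference is visible before the fractions are added.

#include <stdio.h>
#include <stdint.h>

/* Split an IEEE single-precision word into its three fields so the
   exponents of the two operands can be compared before adding.
   A sketch for checking the hand computation, not a full adder. */
static void unpack(uint32_t w)
{
    unsigned sign     = (unsigned)(w >> 31);
    unsigned exponent = (unsigned)((w >> 23) & 0xFF);  /* stored, excess-127 */
    unsigned fraction = (unsigned)(w & 0x7FFFFF);      /* 23 explicit bits */
    printf("%08X: sign=%u exp=%u (unbiased %d) frac=0x%06X\n",
           (unsigned)w, sign, exponent, (int)exponent - 127, fraction);
}

int main(void)
{
    unpack(0x3EE00000);   /* the two operands from the exercise */
    unpack(0x3D800000);
    return 0;
}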
6. The Tightwad Computer Company has decided to come out with a machine having 16-bit floating-point numbers. The Model 0.001 has a floating-point format with a sign bit, a 7-bit excess-64 exponent, and an 8-bit fraction. The Model 0.002 has a sign bit, a 5-bit excess-16 exponent, and a 10-bit fraction. Both use radix 2 exponentiation. What are the smallest and largest positive normalized numbers on each model? About how many decimal digits of precision does each have? Would you buy either one?
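The range and precision follow directly from the field widths. The sketch below assumes a normalized fraction in the interval [1/2, 1) with no implied bit and a smallest stored exponent of 0; the exercise does not fix those conventions, so treat them (and the helper name describe) as assumptions for illustration only.

#include <stdio.h>
#include <math.h>

/* Approximate range and decimal precision of a simple floating-point
   format, assuming the smallest stored exponent is 0 and the fraction
   of a normalized number lies in [1/2, 1). */
static void describe(const char *name, int exp_bits, int bias, int frac_bits)
{
    int emin = 0 - bias;                    /* stored exponent 0 */
    int emax = (1 << exp_bits) - 1 - bias;  /* largest stored exponent */
    double smallest = 0.5 * pow(2.0, emin);
    double largest  = (1.0 - pow(2.0, -frac_bits)) * pow(2.0, emax);
    printf("%s: smallest ~ %.3g, largest ~ %.3g, ~%.1f decimal digits\n",
           name, smallest, largest, frac_bits * log10(2.0));
}

int main(void)
{
    describe("Model 0.001", 7, 64, 8);
    describe("Model 0.002", 5, 16, 10);
    return 0;
}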
7. There is one situation in which an operation on two floating-point numbers can cause a
drastic reduction in the number of significant bits in the result. What is it?
8. Some floating-point chips have a square root instruction built in. A possible algorithm is an iterative one (e.g., Newton-Raphson). Iterative algorithms need an initial approximation and then steadily improve it. How can one obtain a fast approximate square root of a floating-point number?
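As one concrete illustration of this idea, the C sketch below builds a starting guess by halving the binary exponent of the argument (using the standard frexp and ldexp routines) and then lets Newton-Raphson refine it. This is one possible scheme, not the only answer the exercise admits.

#include <stdio.h>
#include <math.h>

/* Newton-Raphson square root with a cheap initial guess obtained by
   halving the binary exponent of x.  Assumes x > 0. */
static double newton_sqrt(double x)
{
    int e;
    frexp(x, &e);                      /* x = f * 2^e with 0.5 <= f < 1 */
    double guess = ldexp(1.0, e / 2);  /* crude start: 2^(e/2), within a small factor of sqrt(x) */
    for (int i = 0; i < 6; i++)        /* each iteration roughly doubles the number of correct bits */
        guess = 0.5 * (guess + x / guess);
    return guess;
}

int main(void)
{
    printf("sqrt(2)   ~ %.15f\n", newton_sqrt(2.0));
    printf("sqrt(1e6) ~ %.15f\n", newton_sqrt(1e6));
    return 0;
}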
9. Write a procedure to add two IEEE single-precision floating-point numbers. Each
number is represented by a 32-element Boolean array.
10. Write a procedure to add two single-precision floating-point numbers that use radix 16
for the exponent and radix 2 for the fraction but do not have an implied 1 bit to the left
of the binary point. A normalized number has 0001, 0010, ..., 1111 as the leftmost 4
bits of the fraction, but not 0000. A number is normalized by shifting the fraction left
4 bits and subtracting 1 from the exponent.
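The normalization rule in exercise 10 boils down to a short loop. The sketch below keeps the fraction in a 24-bit field (a width chosen only for illustration, since the exercise leaves it open) and repeats the shift-by-4/decrement step until the leftmost hex digit of the fraction is nonzero.

#include <stdint.h>

/* Normalization for a radix-16 exponent: while the leftmost hex digit
   of the fraction is 0000, shift the fraction left 4 bits and subtract
   1 from the exponent, as the exercise describes.  The 24-bit fraction
   width is assumed for illustration. */
static void normalize16(uint32_t *fraction, int *exponent)
{
    if (*fraction == 0)
        return;                            /* zero has no normalized form */
    while ((*fraction & 0xF00000) == 0) {  /* top 4 of the 24 fraction bits */
        *fraction <<= 4;
        *exponent -= 1;
    }
}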