Exponent
The exponent takes 8 bits. The exponent can be positive or negative. The range of exponent values that can be
stored in 8 bits is -127 to 128. There must be a mechanism to represent the sign of the exponent. Note that the 1-bit
sign field in the layout shown in Table 3-7 stores the sign of the floating-point number, not the sign of the exponent.
To store the sign of the exponent, you could use the sign-magnitude method, where 1 bit stores the sign and
the remaining 7 bits store the magnitude of the exponent. You could also use the 2's complement method to store
negative exponents, as is done for integers. However, IEEE uses neither of these two methods. Instead, IEEE uses a
biased representation to store the exponent value.
What is a bias and what is a biased exponent? A bias is a constant value, which is 127 for the IEEE 32-bit single-precision
format. The bias value is added to the exponent before it is stored in memory. The new exponent, after the bias has been
added, is called a biased exponent. The biased exponent is computed as follows:
Biased Exponent = Exponent + Bias
For example, 19.25 can be written in normalized binary floating-point format as 1.001101 x 2^4. Here, the
exponent value is 4. However, the exponent value stored in memory will be a biased exponent, which will be
computed as follows:
Biased Exponent = Exponent + Bias
= 4 + 127 (Single-precision format)
= 131
For 1.001101 x 2^4, 131 will be stored as the exponent. When reading back the exponent of a binary floating-point
number, you must subtract the bias value for that format to get the actual exponent value.
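To see these numbers in a running program, the following sketch (the class name BiasedExponentDemo is only illustrative) pulls the 8 exponent bits out of the raw layout returned by Float.floatToIntBits() and subtracts the bias of 127:

public class BiasedExponentDemo {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(19.25f);    // raw IEEE 754 single-precision layout
        int biasedExponent = (bits >>> 23) & 0xFF;  // isolate the 8 exponent bits
        int exponent = biasedExponent - 127;        // subtract the bias
        System.out.println(biasedExponent);         // prints 131
        System.out.println(exponent);               // prints 4
    }
}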
Why does IEEE use a biased exponent? The advantage of using a biased exponent is that the bit patterns of positive
floating-point numbers can be compared as integers for ordering purposes.
Suppose E is the number of bits used to store the exponent value in a given floating-point format. The value of the
bias for that format can be computed as
Bias = 2^(E - 1) - 1
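As a quick check of this formula (a minimal sketch; the class and variable names are only illustrative), E = 8 yields the single-precision bias of 127 and E = 11 yields the double-precision bias of 1023:

public class BiasDemo {
    public static void main(String[] args) {
        int singleBias = (1 << (8 - 1)) - 1;   // 2^(8-1) - 1 = 127 for float
        int doubleBias = (1 << (11 - 1)) - 1;  // 2^(11-1) - 1 = 1023 for double
        System.out.println(singleBias + " " + doubleBias);  // prints 127 1023
    }
}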
The exponent ranges from -127 to 128 for the single-precision format. Therefore, the biased exponent ranges
from 0 to 255. Two extreme exponent values (-127 and 128 for unbiased, and 0 and 255 for biased) are used to
represent special floating-point numbers, such as zero, infinities, NaNs and denormalized numbers. The exponent
range of -126 to 127 (biased 1 to 254) is used to represent normalized binary floating-point numbers.
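This normalized exponent range is also exposed by the JDK. The following sketch (the class name ExponentRangeDemo is only illustrative) prints the constants defined in java.lang.Float and uses Math.getExponent() to read back the unbiased exponent of 19.25f:

public class ExponentRangeDemo {
    public static void main(String[] args) {
        System.out.println(Float.MIN_EXPONENT);        // prints -126
        System.out.println(Float.MAX_EXPONENT);        // prints 127
        System.out.println(Math.getExponent(19.25f));  // prints 4 (the unbiased exponent)
    }
}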
Significand
IEEE single-precision floating-point format uses 23 bits to store the significand. The number of bits used to store the
significand is called the precision of that floating-point format. Therefore, you might guess that the precision of
floating-point numbers stored in the single-precision format is 23. However, this is not true. Before concluding what
the precision of this format really is, I need to discuss the format in which the significand is stored.
The significand of a floating-point number is normalized before it is stored in memory. The normalized
significand is always of the form 1.fffffffffffffffffffffff. Here, an f denotes a 0 or 1 bit in the fractional part of the
significand. Because the leading 1 bit is always present in the normalized form of the significand, there is no need to
store it. Therefore, when storing the normalized significand, all 23 bits can be used to store the fractional part of
the significand. In fact, not storing the leading 1 bit of a normalized significand gives you one extra bit of precision.
This way, you represent 24 bits (1 leading bit + 23 fraction bits) in just 23 bits. Thus, for a normalized significand, the
precision of a floating-point number in IEEE single-precision format is 24.
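The hidden leading bit is easy to demonstrate in code. The following sketch (the class name SignificandDemo is only illustrative) extracts the 23 stored fraction bits of 19.25f, restores the implicit leading 1 to form the 24-bit significand, and reconstructs the original value from it:

public class SignificandDemo {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(19.25f);
        int fraction = bits & 0x7FFFFF;               // the 23 stored fraction bits
        int significand = (1 << 23) | fraction;       // restore the hidden leading 1 bit
        int exponent = ((bits >>> 23) & 0xFF) - 127;  // unbiased exponent (4 here)
        // value = significand x 2^(exponent - 23) = 10092544 x 2^-19
        System.out.println(significand * Math.pow(2, exponent - 23));  // prints 19.25
    }
}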