Figure 2.43: Small numbers in a long wordlength system have inefficient leading zeros (a). Floating-point coding
(b) is more efficient, but can lead to inaccuracy.
In order to convert a binary number of arbitrary value with an arbitrarily located radix point into floating-point
notation, the position of the most significant or leading one and the position of the radix point are noted. The
number is then multiplied or divided by powers of two until the radix point is immediately to the right of the leading
one. This results in a value known as the mantissa (plural: mantissae), which always has the form 1.XXX . . . where
X is 1 or 0 (known in logic as 'don't care').
The exponent is a two's complement code which determines whether the mantissa is to be multiplied by positive
powers of two, shifting it left and making it bigger, or by negative powers of two, shifting it right and making it
smaller.
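The normalization amounts to a simple shift-and-count loop. The following Python sketch expresses the process just described; the function name and the restriction to positive values are assumptions of the example, not part of any standard:

    def normalize(value):
        # Shift 'value' until the radix point sits immediately to the
        # right of the leading one, i.e. express it as mantissa * 2**exponent
        # with 1.0 <= mantissa < 2.0 (the 1.XXX form described above).
        if value <= 0:
            raise ValueError("this sketch handles positive values only")
        mantissa, exponent = value, 0
        while mantissa >= 2.0:   # too big: divide by two, exponent goes up
            mantissa /= 2.0
            exponent += 1
        while mantissa < 1.0:    # too small: multiply by two, exponent goes down
            mantissa *= 2.0
            exponent -= 1
        return mantissa, exponent

    print(normalize(0.15625))   # (1.25, -3): 0.15625 = 1.01 binary * 2**-3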
In floating-point notation, the range of the numbers and the precision are independent. The range is determined by
the wordlength of the exponent. For example, a six-bit exponent having 64 values allows a range from 1.XX × 2^31
down to 1.XX × 2^-32. The precision is determined by the length of the mantissa. As the mantissa is always in the
format 1.XXX, it is not necessary to store the leading one, so the actual stored value is in the form .XXX. Thus a ten-bit
mantissa has eleven-bit precision. It is possible to pack a ten-bit mantissa and a six-bit exponent in one sixteen-bit
word.
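The packing is plain shifting and masking. The sketch below assumes one possible layout, with the six exponent bits above the ten stored mantissa bits; actual formats differ in bit ordering, and the leading one is restored on unpacking:

    def pack16(mantissa, exponent):
        # Six-bit two's complement exponent in the top bits, ten stored
        # mantissa bits below; the leading one of 1.XXX is implied.
        assert 1.0 <= mantissa < 2.0 and -32 <= exponent <= 31
        frac = round((mantissa - 1.0) * 1024) & 0x3FF  # drop the hidden one
        return ((exponent & 0x3F) << 10) | frac

    def unpack16(word):
        frac = word & 0x3FF
        exp6 = (word >> 10) & 0x3F
        exponent = exp6 - 64 if exp6 >= 32 else exp6   # sign-extend six bits
        return 1.0 + frac / 1024, exponent

    w = pack16(1.25, -3)
    print(hex(w), unpack16(w))   # the value round-trips: (1.25, -3)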
Although floating-point operation extends the number range of a computer, the user must constantly be aware that
floating point has limited precision. Floating point is the computer's equivalent of lossy compression. In trying to get
more for less, there is always a penalty.
In some signal-processing applications, floating-point coding is simply not accurate enough. For example, in an
audio filter, if the stopband needs to be, say, 100 dB down, this can only be achieved if the entire filtering arithmetic
has adequate precision. 100 dB is one part in 100 000 and needs more than sixteen bits of resolution. The poor
quality of a good deal of digital audio equipment is due to the unwise adoption of floating-point processing of
inadequate precision.
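The arithmetic behind the sixteen-bit figure can be checked directly, since 100 dB corresponds to an amplitude ratio of 10^(100/20):

    import math

    ratio = 10 ** (100 / 20)    # 100 dB as an amplitude ratio: 100 000
    bits = math.log2(ratio)     # resolution needed to span that ratio
    print(ratio, bits)          # 100000.0, about 16.6 -> more than 16 bits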
Computers of finite wordlength can operate on larger numbers without the inaccuracy of floating-point coding by
using techniques such as double precision. For example, thirty-two-bit data words can be stored in two
adjacent memory locations in a sixteen-bit machine, and the processor can manipulate them by operating on the
two halves at different times. This takes longer, or needs a faster processor.
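As a sketch of the idea, here is how a thirty-two-bit addition decomposes into two sixteen-bit operations with a carry passed between them; Python is used for illustration, where a real sixteen-bit processor would do the same with its carry flag:

    MASK16 = 0xFFFF

    def add32(a_hi, a_lo, b_hi, b_lo):
        # Add two 32-bit numbers held as (high, low) 16-bit halves:
        # low halves first, then high halves plus the carry out.
        lo = a_lo + b_lo
        carry = lo >> 16                     # carry out of the low word
        hi = (a_hi + b_hi + carry) & MASK16  # high halves plus carry
        return hi, lo & MASK16

    # 0x00018000 + 0x00009000 = 0x00021000
    print(add32(0x0001, 0x8000, 0x0000, 0x9000))   # -> (2, 4096), i.e. 0x0002, 0x1000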
2.20 Multiplexing principles
Multiplexing is used where several signals are to be transmitted down the same channel. The channel bit rate must
be the same as or greater than the sum of the source bit rates. Figure 2.44 shows that when multiplexing is used,
the data from each source has to be time compressed. This is done by buffering source data in a memory at the
multiplexer. It is written into the memory in real time as it arrives, but is read out with a clock of much higher
rate, so the readout occupies a smaller timespan. If, for example, the clock
frequency is raised by a factor of ten, the data for a given signal will be transmitted in a tenth of the normal time,
leaving time in the multiplex for nine more such signals.
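A toy model of the interleaving is sketched below; block sizes, framing and the factor-of-ten time compression are abstracted away, and each channel slot simply carries the next buffered block from one source in turn:

    from itertools import chain

    def multiplex(*sources):
        # Round-robin time-division multiplex: slot n of each frame
        # carries the next block from source n. The channel must run
        # at least as fast as the sum of the source rates.
        return list(chain.from_iterable(zip(*sources)))

    src_a = ["a0", "a1", "a2"]   # blocks arriving in real time from source A
    src_b = ["b0", "b1", "b2"]   # blocks from source B
    print(multiplex(src_a, src_b))
    # ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']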
 