Appendix B: Number Systems
B.1 Introduction
Computers were initially designed to deal with numerical data. Because electrical circuits have a natural on-and-off character, numbers have been represented in binary from the beginning of the electronic computer age.
However, the binary number system is not natural to us, because human beings have used the decimal number system for thousands of years. A binary number therefore usually needs to be converted to decimal before a person can interpret it quickly.
For identification, a subscript 2, 8, or 16 is added to a number to indicate its base. For example, 101₂ is a binary number, 234₈ is an octal number, and 2479₁₆ is a hexadecimal number. No subscript is used for a decimal number.
B.2 Converting from Binary to Decimal
A binary number is represented by two symbols: 0 and 1. Here, we refer to 0 and 1 as binary digits. A binary digit is also referred to as a bit. In the computer, 8 bits are referred to as a byte. Depending on the computer, either 16 bits or 32 bits are referred to as a word. The values of 0 and 1 in the binary number system are identical to their counterparts in the decimal number system. To convert a binary number to decimal, we compute a weighted sum of every binary digit contained in the binary number. If a specific bit is k places to the left of the binary point (the rightmost integer bit being at place 0), its weight is 2ᵏ; bits to the right of the binary point have weights 2⁻¹, 2⁻², and so on. For example,
10100100₂ = 2⁷ + 2⁵ + 2² = 128 + 32 + 4 = 164
11011001₂ = 2⁷ + 2⁶ + 2⁴ + 2³ + 2⁰ = 128 + 64 + 16 + 8 + 1 = 217
10010010.101₂ = 2⁷ + 2⁴ + 2¹ + 2⁻¹ + 2⁻³ = 146.625
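The weighted-sum rule can be sketched in a few lines of Python (the function name and string-based interface are illustrative choices, not part of the text):

```python
def binary_to_decimal(bits: str) -> float:
    """Convert a binary string, optionally with a fractional part,
    to decimal by summing 2**k for every 1 bit, where k is the bit's
    place relative to the binary point."""
    whole, _, frac = bits.partition(".")
    value = 0.0
    # Integer part: the rightmost bit has weight 2**0.
    for k, bit in enumerate(reversed(whole)):
        if bit == "1":
            value += 2 ** k
    # Fractional part: the first bit after the point has weight 2**-1.
    for k, bit in enumerate(frac, start=1):
        if bit == "1":
            value += 2 ** -k
    return value
```

Running it on the three examples above reproduces 164, 217, and 146.625.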
B.3 Converting from Decimal to Binary
A decimal integer can be converted to binary by repeated division by 2 until the quotient becomes 0. The remainder from the first division is the least significant binary digit, whereas the remainder from the last division is the most significant digit.
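The repeated-division procedure can be sketched in Python as follows (the function name is mine; the text describes only the manual procedure):

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative decimal integer to a binary string
    by repeated division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, remainder = divmod(n, 2)
        # The first remainder collected is the least significant bit.
        bits.append(str(remainder))
    # Reverse so the last remainder (most significant bit) comes first.
    return "".join(reversed(bits))
```

For example, 164 divides down with remainders 0, 0, 1, 0, 0, 1, 0, 1, which read in reverse give 10100100₂, matching the earlier example.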
 