that the hardware can "understand" and execute. These core operations are called the "instruction set." In terms of implementation, the simplest instructions are hard-wired in the processor's circuitry, and more complex ones are constructed from the core instructions, as we describe later in this chapter. For a program to work, the instructions must be arranged in the right order and expressed unambiguously.
A computer's need for such exact directions contrasts with our everyday
experience when we give instructions to people. When we ask a person to do
something, we usually rely on some unspoken context, such as our previous
knowledge of the person or the individual's familiarity with how things work,
so that we do not need to specify exactly what we want to happen. We depend
on the other person to fill in the gaps and to understand from experience how
to deal with any ambiguities in our request. Computers have no such understanding or knowledge of any particular person, of how things work, or of how to figure out the solution to a problem. Although some computer scientists
believe that computers may one day develop such “intelligence,” the fact is
that today we still need to specify exactly what we want the computer to do in
mind-numbing detail.
When we write a program to do some specific task, we can assume that our
hardware “understands” how to perform the elementary arithmetic and logic
operations. Writing the precise sequence of operations required to complete
a complicated task in terms of basic machine instructions at the level of logic
gates would make for a very long program. For example, for numerical calculations we know we have an "add" operation in our basic set of instructions. However, if we have not told the machine how to carry out a multiply operation, it will not be able to do a simple calculation such as "multiply 42 by 3." Even though for us it is obvious that we can multiply 42 by 3 by using repeated additions (42 + 42 + 42 = 126), the computer cannot make this leap of imagination. In this sense, the computer is "stupid": it is unable to figure out such things for itself. In compensation, however, our stupid computer can add far faster than we can!
If we are trying to solve a problem that involves a numerical calculation requiring several multiplications, it will obviously simplify our program if we introduce a separate "multiply" instruction for the computer. This instruction is simply a small program: a block of repeated additions. Now when we give the computer the instruction "multiply 42 by 3," the machine can recognize the word multiply and start running this program. This ability to construct compound operations from simple ones, and in this way introduce a higher level of abstraction, is one of the fundamental principles of computer science. Moving up this ladder of abstraction saves us from having to write our programs using only the most elementary operations.
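To make the idea concrete, here is a minimal sketch in Python (the book gives no code of its own; the function name and the loop are our illustration) of a compound multiply operation built from nothing but repeated addition:

    def multiply(a, b):
        # Build "multiply" from the primitive "add": start from zero and
        # add `a` to the running total, `b` times.
        # Assumes b is a non-negative integer.
        result = 0
        for _ in range(b):
            result = result + a   # the only arithmetic used is addition
        return result

    print(multiply(42, 3))   # prints 126, that is, 42 + 42 + 42

Once such a block exists, a programmer can write multiply(42, 3) and never again think about the underlying additions: exactly the step up the ladder of abstraction described above.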
In the early days of computing, writing programs was not considered difficult. Even von Neumann thought that programming was a relatively simple task. We now know that writing correct programs is hard and error-prone. Why did early computer scientists so badly underestimate the difficulty of this task? There are at least two possible explanations. The pioneers of computing were mostly male engineers or mathematicians who rarely spent time on the nitty-gritty details of actually writing programs. Like many men of the time, the early hardware pioneers thought that the actual coding of programs using binary