correct answers, but also obtain those results in a timely manner using
resources that are appropriate and available for the job.
Coding Algorithms: Once an algorithm is chosen, it must be written
in a form that a computer can interpret. Computers work by
following instructions that they are given, and any algorithm must be
translated into a form that uses only the special instructions that the
computer can follow. This translation process is called coding or
programming, and the end result of this coding is a computer program. (In
the movie 2001, the computer Hal responds to instructions given in
English and to visual commands. Such understanding is commonly
seen in movies. Realistically, however, computer understanding of
natural language is still far from being realized, and the goal of
much research in computer science is to allow the automatic
translation of speech or vision into a form that can be used by
computers.) At a basic level, all information and instructions inside
computers must be in a very primitive form, and considerable work
is usually necessary to translate algorithms from English or another
form understandable to people into a form a computer can
interpret. We will discuss algorithms in more detail in Chapter 7.
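To make this translation step concrete, consider the English-language algorithm "scan a list of numbers, remembering the largest value seen so far." The text does not commit to any particular programming language, so the following is just a sketch in Python, one of many possible codings:

```python
def largest(numbers):
    """Return the largest value in a non-empty list of numbers."""
    biggest = numbers[0]           # start by remembering the first value
    for value in numbers[1:]:      # scan the remaining values in turn
        if value > biggest:        # a larger value replaces our memory
            biggest = value
    return biggest
```

Notice how each English phrase of the algorithm ("scan," "remembering the largest") becomes a precise instruction the machine can follow; nothing is left to interpretation.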
Testing and Running Programs: After an algorithm has been
translated into a form a computer can use, one would hope that the
program could be used to solve the specific problem it was designed
to solve. If data from the specified problem were entered into the
computer, the computer should produce the desired results. This
step comprises the running of a program.
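As a small illustration of this running step (again sketched in Python, with a made-up averaging problem as the "specific problem"), data is entered into the program and the computer produces the desired result:

```python
def average(scores):
    """Compute the average of a non-empty list of exam scores."""
    return sum(scores) / len(scores)

# Running the program: supply data from the specific problem
# and observe the result the computer produces.
exam_scores = [80, 90, 100]
print(average(exam_scores))   # prints 90.0
```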
Although the initial running of a program has been known to
produce helpful and correct results, it is usually the case that errors will
occur somewhere in the problem-solving process. Specifications may
be incomplete or inaccurate, algorithms may contain flaws, or the
coding process may be incorrect. Edsger Dijkstra, a very distinguished
computer scientist, has observed that in most disciplines such
difficulties are called errors or mistakes, but that in computing this
terminology is usually softened, and flaws are called bugs.* (It seems that
people are often more willing to tolerate errors in computer programs
than in other products, but more on this in later chapters.)
*Edsger Dijkstra, "On the Cruelty of Really Teaching Computing Science,"
Communications of the ACM, Volume 32, Number 12, December 1989, p. 1402.