A RISC machine executes, on average, one instruction per clock cycle. RISC started as a notion in the mid-1970s and eventually led to the development of the first RISC machine, the IBM 801 minicomputer.
The launch of the RISC concept marked the start of a new paradigm in computer architecture design, one that promotes simplicity. In particular, it calls for going back to basics rather than providing extra hardware support for high-level languages. This paradigm shift relates to what is known as the semantic gap, a measure of the difference between the operations provided in high-level languages (HLLs) and those provided in computer architectures.
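As an illustration of the semantic gap, consider the single C statement below. The instruction sequences shown in the comments are a hedged sketch of how a hypothetical load/store (RISC-style) machine and a hypothetical memory-to-memory (CISC-style) machine might realize the same statement; they do not correspond to any particular real instruction set.

    #include <stdio.h>

    int main(void) {
        int a = 2, b = 3, c;

        /* One HLL statement. A RISC-style (load/store) machine might
           realize it with several simple instructions:
               LOAD  R1, a       ; fetch operand a into a register
               LOAD  R2, b       ; fetch operand b into a register
               ADD   R3, R1, R2  ; register-to-register addition
               STORE R3, c       ; write the result back to memory
           whereas a CISC-style (memory-to-memory) machine might provide
           one complex instruction that narrows the semantic gap:
               ADD   c, a, b     ; operands and result all in memory    */
        c = a + b;

        printf("c = %d\n", c);
        return 0;
    }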
It is recognized that the wider the semantic gap, the more serious the undesirable consequences. These include (a) execution inefficiency, (b) excessive machine program size, and (c) increased compiler complexity. Faced with these consequences, the conventional response of computer architects was to add layers of complexity to newer architectures, increasing the number and complexity of instructions together with the number of addressing modes. The architectures resulting from this "add more complexity" philosophy are now known as Complex Instruction Set Computers (CISCs). However, it soon became apparent that a complex instruction set has a number of disadvantages, including a complex instruction decoding scheme, an increased control unit size, and increased logic delays. These drawbacks prompted a number of computer architects to adopt the principle that "less is actually more." A number of studies were then conducted to investigate the impact of complexity on performance. These are discussed below.
10.2. RISC DESIGN PRINCIPLES
A computer with the minimum number of instructions has the disadvantage that a large number of instructions must be executed to realize even a simple function, resulting in a speed disadvantage. On the other hand, a computer with an inflated number of instructions has the disadvantage of complex decoding, and hence also a speed disadvantage. It is then natural to believe that a computer with a carefully selected, reduced set of instructions should strike a balance between these two design alternatives; a worked sketch of this trade-off follows this paragraph. The question then becomes: what constitutes a carefully selected, reduced set of instructions? To arrive at an answer, it is necessary to conduct in-depth studies of a number of aspects of computation. These aspects should include (a) the operations most frequently performed during execution of typical (benchmark) programs, (b) the operations that are most time consuming, and (c) the types of operands most frequently used.
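The trade-off can be made concrete with the classic performance model: execution time = (instruction count) x (cycles per instruction) x (cycle time). The figures in the sketch below are illustrative assumptions only, chosen to show how a reduced instruction set can win overall even though it executes more instructions.

    #include <stdio.h>

    int main(void) {
        /* time = instruction count x CPI x cycle time.
           All figures below are made-up assumptions for illustration. */
        double reduced_ic  = 1.3e9;   /* more, simpler instructions executed */
        double reduced_cpi = 1.2;     /* simple decoding keeps CPI low       */
        double reduced_tc  = 1.0e-9;  /* short clock cycle (1 GHz)           */

        double complex_ic  = 1.0e9;   /* fewer, more powerful instructions   */
        double complex_cpi = 2.5;     /* complex decoding raises CPI         */
        double complex_tc  = 1.2e-9;  /* longer logic delays stretch cycle   */

        printf("reduced set: %.2f s\n", reduced_ic * reduced_cpi * reduced_tc);
        printf("complex set: %.2f s\n", complex_ic * complex_cpi * complex_tc);
        return 0;
    }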
A number of early studies were conducted to determine the typical breakdown of operations performed in executing benchmark programs. The estimated distribution of operations is shown in Table 10.1. A careful look at the estimated percentages reveals that assignment statements, conditional branches, and procedure calls constitute about 90% of the total operations performed, while all other operations, however complex they may be, make up the remaining 10%.
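A minimal sketch of how such a breakdown can be tabulated is shown below. The dynamic trace here is invented for illustration and does not reproduce the data of Table 10.1; real studies instrument compilers or simulators to collect traces from actual benchmark programs.

    #include <stdio.h>
    #include <string.h>

    /* Tally a (made-up) dynamic trace of operation kinds and print the
       resulting percentage breakdown, in the spirit of the early studies. */
    int main(void) {
        const char *trace[] = { "assign", "assign", "branch", "call",
                                "assign", "branch", "assign", "other",
                                "assign", "call" };
        const char *kinds[] = { "assign", "branch", "call", "other" };
        int counts[4] = { 0 };
        int n = sizeof trace / sizeof trace[0];

        for (int i = 0; i < n; i++)
            for (int k = 0; k < 4; k++)
                if (strcmp(trace[i], kinds[k]) == 0)
                    counts[k]++;

        for (int k = 0; k < 4; k++)
            printf("%-6s %5.1f%%\n", kinds[k], 100.0 * counts[k] / n);
        return 0;
    }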