Computer Science Departments at the time). Nowadays, few people think that the
size of the instruction set is a major issue, but the name stuck.
To make a long story short, a great religious war ensued, with the RISC
supporters attacking the established order (VAX, Intel, large IBM mainframes). They
claimed that the best way to design a computer was to have a small number of sim-
ple instructions that execute in one cycle of the data path of Fig. 2-2 by fetching
two registers, combining them somehow (e.g., adding or ANDing them), and stor-
ing the result back in a register. Their argument was that even if a RISC machine
takes four or five instructions to do what a CISC machine does in one instruction,
if the RISC instructions are 10 times as fast (because they are not interpreted),
RISC wins. It is also worth pointing out that by this time the speed of main
memories had caught up to the speed of read-only control stores, so the interpretation
penalty had greatly increased, strongly favoring RISC machines.
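To make the arithmetic behind this argument concrete, here is a minimal sketch in C
(not from the text) of how a hypothetical memory-to-memory add might decompose into
the load/operate/store steps a RISC machine would execute; the cycle counts in the
comments are assumed purely for illustration.

    /* Illustrative sketch only: one hypothetical CISC-style memory-to-memory
     * add expressed as the load/add/store sequence a RISC machine would run.
     * The cycle counts are assumptions for back-of-envelope arithmetic,
     * not measurements of any real machine. */
    #include <stdio.h>

    int main(void) {
        int memory[3] = {7, 5, 0};     /* compute mem[0] + mem[1] -> mem[2] */

        int r1 = memory[0];            /* LOAD  r1, mem0    (1 cycle) */
        int r2 = memory[1];            /* LOAD  r2, mem1    (1 cycle) */
        int r3 = r1 + r2;              /* ADD   r3, r1, r2  (1 cycle) */
        memory[2] = r3;                /* STORE r3, mem2    (1 cycle) */

        /* If an interpreted CISC add took, say, 10 cycles, these four
         * one-cycle RISC instructions would finish in 4 cycles: less than
         * half the time, which is the point of the argument above. */
        printf("result = %d\n", memory[2]);
        return 0;
    }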
One might think that given the performance advantages of RISC technology,
RISC machines (such as the Sun UltraSPARC) would have mowed down CISC
machines (such as the Intel Pentium) in the marketplace. Nothing like this has
happened. Why not?
First of all, there is the issue of backward compatibility and the billions of
dollars companies have invested in software for the Intel line. Second, surprisingly,
Intel has been able to employ the same ideas even in a CISC architecture. Starting
with the 486, the Intel CPUs contain a RISC core that executes the simplest (and
typically most common) instructions in a single data path cycle, while interpreting
the more complicated instructions in the usual CISC way. The net result is that
common instructions are fast and less common instructions are slow. While this
hybrid approach is not as fast as a pure RISC design, it gives competitive overall
performance while still allowing old software to run unmodified.
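As an illustration of that hybrid idea only (the opcode names and the interpret()
routine below are invented for this example and do not describe Intel's actual
microarchitecture), a toy decoder might route simple operations through a fast
direct path and send everything else to a slower interpreter standing in for
microcode:

    /* Illustrative sketch only: simple opcodes take a fast, "hardwired"
     * path; anything else falls back to a slow routine standing in for
     * microcoded interpretation. Invented for this example. */
    #include <stdio.h>

    enum opcode { OP_ADD, OP_AND, OP_STRING_MOVE /* a "complex" instruction */ };

    static int interpret(enum opcode op, int a, int b) {
        /* Stand-in for the slow, microcode-style interpretation path. */
        printf("interpreting complex opcode %d\n", (int)op);
        return 0;
    }

    static int execute(enum opcode op, int a, int b) {
        switch (op) {
        case OP_ADD: return a + b;               /* fast path: one cycle   */
        case OP_AND: return a & b;               /* fast path: one cycle   */
        default:     return interpret(op, a, b); /* slow, interpreted path */
        }
    }

    int main(void) {
        printf("%d\n", execute(OP_ADD, 2, 3));          /* common, fast */
        printf("%d\n", execute(OP_STRING_MOVE, 0, 0));  /* rare, slow   */
        return 0;
    }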
2.1.4 Design Principles for Modern Computers
Now that more than two decades have passed since the first RISC machines
were introduced, certain design principles have come to be accepted as a good way
to design computers given the current state of the hardware technology. If a major
change in technology occurs (e.g., a new manufacturing process suddenly makes
memory cycle time 10 times faster than CPU cycle time), all bets are off. Thus
machine designers should always keep an eye out for technological changes that
may affect the balance among the components.
That said, there is a set of design principles, sometimes called the RISC
design principles, that architects of new general-purpose CPUs do their best to
follow. External constraints, such as the requirement of being backward
compatible with some existing architecture, often require compromises, but these
principles are goals that most designers strive to meet. Next we will discuss the
major ones.
 
 