[Figure: instruction-stage timelines for (a) Sequential Processing, (b) Pipelining, and (c) Multiple issue; horizontal axis: time.]
Figure 9.17 Multiple issue versus pipelining versus sequential processing
Branch prediction and speculative execution are also performed during the fetch
stage. This is done so that instruction fetching can continue past branch and
jump instructions.
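As a rough illustration of fetch-stage prediction (not taken from the text), the sketch below implements one common scheme, a 2-bit saturating-counter predictor indexed by the branch address. The table size, the index hash, and the function names (predict_taken, predictor_update) are assumptions made for the example.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical 2-bit saturating-counter branch predictor.
 * Counter states 0 and 1 predict not-taken; 2 and 3 predict taken. */
#define PRED_ENTRIES 1024

static uint8_t pred_table[PRED_ENTRIES];      /* all counters start weakly not-taken */

static unsigned pred_index(uint32_t branch_pc)
{
    return (branch_pc >> 2) & (PRED_ENTRIES - 1);   /* simple PC hash (assumption) */
}

/* Fetch stage: decide whether to keep fetching from the branch target. */
bool predict_taken(uint32_t branch_pc)
{
    return pred_table[pred_index(branch_pc)] >= 2;
}

/* When the branch resolves, train the counter toward the actual outcome. */
void predictor_update(uint32_t branch_pc, bool taken)
{
    uint8_t *c = &pred_table[pred_index(branch_pc)];
    if (taken && *c < 3)
        (*c)++;
    else if (!taken && *c > 0)
        (*c)--;
}

Speculative execution then proceeds along the predicted path; if the prediction turns out to be wrong, the speculatively fetched instructions are discarded.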
Decoding is done in two steps. Predecoding is performed between the main
memory and the cache and is responsible for identifying branch instructions.
Actual decoding determines the following for each instruction: (1) the operation
to be performed; (2) the location of the operands; and (3) the location where
the results are to be stored. During the issue stage, those instructions among the
dispatched ones that can start execution are identified. During the commit stage,
the generated values (results) are written into their destination registers.
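To make these stages concrete, here is a minimal sketch, with invented field and function names, of a decoded-instruction record holding the three pieces of information produced by decoding, an issue-stage check that both source operands are available, and a commit-stage write of the result into its destination register.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical decoded-instruction record: the three items the
 * decode stage extracts for each instruction. */
typedef struct {
    int op;          /* (1) operation to be performed                  */
    int src1, src2;  /* (2) locations (register numbers) of operands   */
    int dest;        /* (3) location where the result is to be stored  */
} decoded_instr_t;

#define NUM_REGS 32
static bool reg_ready[NUM_REGS];  /* false while an older, uncommitted
                                     instruction still owes this register a value */

/* Issue stage: an instruction may start execution only if both of its
 * source operands are ready. */
bool can_issue(const decoded_instr_t *in)
{
    return reg_ready[in->src1] && reg_ready[in->src2];
}

/* Commit stage: the generated value is written into the destination
 * register and becomes visible to younger instructions. */
void commit(const decoded_instr_t *in, int32_t value, int32_t regs[NUM_REGS])
{
    regs[in->dest] = value;
    reg_ready[in->dest] = true;
}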
The most crucial step in processing instructions in SPAs is dependency analysis.
The complexity of this analysis grows quadratically with the instruction word
size. This places a limit on the degree of parallelism that can be achieved with SPAs,
such that a degree of parallelism higher than four becomes impractical (a rough
sketch of the pairwise dependency check is given below). Beyond this
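The sketch below (an illustration with assumed names, not the book's algorithm) shows where the quadratic cost comes from: every pair of instructions in a candidate group must be compared for read-after-write, write-after-read, and write-after-write conflicts, which takes n(n-1)/2 comparisons for n instructions.

#include <stdbool.h>

/* Minimal instruction record for the dependency check (assumed layout). */
typedef struct {
    int src1, src2;  /* source register numbers       */
    int dest;        /* destination register number   */
} instr_t;

/* True if instruction b depends on the older instruction a. */
static bool depends_on(const instr_t *a, const instr_t *b)
{
    return b->src1 == a->dest || b->src2 == a->dest   /* read-after-write  */
        || b->dest == a->src1 || b->dest == a->src2   /* write-after-read  */
        || b->dest == a->dest;                        /* write-after-write */
}

/* Pairwise dependency analysis over a group of n instructions:
 * n(n-1)/2 comparisons, so the cost grows quadratically with n. */
int count_dependences(const instr_t group[], int n)
{
    int deps = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (depends_on(&group[i], &group[j]))
                deps++;
    return deps;
}

Doubling the group size roughly quadruples the number of comparisons the hardware must perform every cycle, which is why wide issue groups quickly become impractical.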