that is not parallelizable. Consequently, even if the remaining fraction \(1-\alpha\) is perfectly parallelized, there is a theoretical upper limit on the best possible achievable speedup. Amdahl's law [3] states the following:
\[
S(P) = \frac{T(1)}{T(P)} = \frac{T(1)}{\left(\alpha + \frac{1-\alpha}{P}\right) T(1)} = \frac{1}{\alpha + \frac{1-\alpha}{P}} < \frac{1}{\alpha}. \tag{10.17}
\]
For instance, if 1 % of a serial code is not parallelizable, the best possible speedup cannot exceed \(1/\alpha = 1/0.01 = 100\), no matter how many processors are used.
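As a small illustration (our own, not from the text), the bound in (10.17) can be evaluated numerically; the function name `amdahl_speedup` is a label we chose:

```python
def amdahl_speedup(alpha, P):
    """Speedup predicted by Amdahl's law, Eq. (10.17), for a serial
    fraction alpha and P processors."""
    return 1.0 / (alpha + (1.0 - alpha) / P)

# With a 1% serial fraction, the speedup saturates well below 100,
# no matter how large P becomes:
for P in (10, 100, 1000, 10000):
    print(P, amdahl_speedup(0.01, P))
```

Even with 10,000 processors the speedup stays just below the asymptotic limit \(1/\alpha = 100\).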
Amdahl's law was derived in the 1960s, a time when parallel computing was
still in its infancy. The theory was in fact used as an argument against this new-born
technology. An important observation is that Amdahl's law assumes a fixed problem
size, no matter how many processors are used. It in effect says that it does not pay
to use too many processors if a threshold of parallel efficiency is to be maintained
(see Exercise 10.6). Amdahl's law indeed gives an unnecessarily pessimistic view,
because the non-parallelizable fraction of a code typically decreases as the problem
size increases. In other words, it pays off to use more processors when the problem
becomes larger.
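The fixed-problem-size effect can be made concrete. Dividing (10.17) by \(P\) gives the parallel efficiency \(E(P) = S(P)/P = 1/(\alpha P + 1 - \alpha)\), which decays as processors are added. A minimal sketch (function name is ours):

```python
def amdahl_efficiency(alpha, P):
    """Parallel efficiency S(P)/P under Amdahl's law with a fixed
    problem size: 1 / (alpha*P + 1 - alpha)."""
    return 1.0 / (alpha * P + 1.0 - alpha)

# With alpha = 0.01, efficiency drops sharply as P grows,
# illustrating why it does not pay to use too many processors:
for P in (10, 100, 1000):
    print(P, amdahl_efficiency(0.01, P))
```

At \(P = 100\) the efficiency has already fallen to roughly 50 %, and at \(P = 1000\) to under 10 %.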
Gustafson-Barsis's Law
There is another theory, known as Gustafson-Barsis's law [16], that looks at the issue of scaled speedup. The basic idea is that \(P\) processors should be used to solve a problem that is \(P\) times the problem size on one processor. Suppose the absolute amount of serial computing time does not grow with the problem size, and let \(\alpha_P\) denote the fraction of serial computing time in \(T(P)\); then \(T(1)\) would have been \(\left(\alpha_P + P(1-\alpha_P)\right) T(P)\). The scaled speedup can be computed as follows:
\[
S(P) = \frac{T(1)}{T(P)} = \frac{\left(\alpha_P + P(1-\alpha_P)\right) T(P)}{T(P)} = \alpha_P + P(1-\alpha_P) = P - \alpha_P (P-1). \tag{10.18}
\]
An important assumption of Gustafson-Barsis's law is that the fraction of serial
computing time becomes increasingly small as the problem size increases, which
is true for most parallel codes. The consequence is that it is possible to achieve a
scaled speedup very close to P , provided the problem size is large enough.
Gustafson-Barsis's law can also be used to estimate speedup without knowing \(T(1)\). For example, if \(\alpha_P = 10\,\%\) is assumed for \(P = 1{,}000\), then the scaled speedup of using 1,000 processors is, according to (10.18), \(P - \alpha_P (P-1) = 1000 - 0.1 \cdot (1000-1) = 900.1\). This gives a corresponding parallel efficiency of 90 %.
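This calculation can be reproduced with a short sketch (again our own illustration; the function name is not from the text):

```python
def gustafson_speedup(alpha_p, P):
    """Scaled speedup predicted by Gustafson-Barsis's law, Eq. (10.18),
    for a serial fraction alpha_p measured in T(P)."""
    return P - alpha_p * (P - 1)

P = 1000
s = gustafson_speedup(0.1, P)
print(s)       # scaled speedup, close to 900.1
print(s / P)   # parallel efficiency, about 0.90
```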
As indicated by Gustafson-Barsis's law, when a desired speedup value is not
achievable due to Amdahl's law, it is time to consider enlarging the problem size.
After all, solving larger problems is the other important reason for adopting parallel
computing.