multitasking   throughput   cycle   %busy   latency   number of
level (k)                   time                      states
    1            0.827      1.209   91.0%    1.209        57
    2            0.903      1.107   99.3%    2.214       461
    3            0.909      1.100   99.9%    3.300      1837
    4            0.909      1.100   99.9%    4.400      5147
    5            0.909      1.100    100%    5.500     11681

Table 11.5: Performance results: 2 × 2 mesh, preemptive interaction with
multitasking
multitasking   throughput   cycle   %busy   latency   number of
level (k)                   time                      states
    1            0.604      1.657   66.4%    1.657       189
    2            0.756      1.324   83.1%    2.648      5327
    3            0.838      1.193   92.2%    3.579     37527
    4            0.879      1.138   96.7%    4.552    141537

Table 11.6: Performance results: 2 × 2 mesh, non-preemptive interaction
with multitasking
architecture with the same topology when the behaviour of each processor
is modelled as in Fig. 11.6.
Tables 11.5 and 11.6 report some results that can be obtained by the GSPN
analysis of these architectures for different values of the multitasking level.
The throughput indicates the number of tasks that are executed by each
processor per time unit. This performance parameter is obtained (with
reference to Figs. 11.5 and 11.6) by multiplying the rate of transition
T_loc_ex by the probability that M(p_exec) ≥ 1. In the case of preemptive interaction,
considering the values of λ and µ used to derive the results, the asymptotic
throughput is quickly reached with a multitasking level of only 3 tasks per
processor. When a non-preemptive policy is used, the performance obtained
at a reasonable multitasking level (4) is very close to the asymptotic value
(96.7%). Therefore, multitasking allows this simpler policy to be adopted
with a minimal penalty on performance.
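The convergence claim can be checked directly against the throughput columns of Tables 11.5 and 11.6 (a small Python sketch using only values from the tables; it is not part of the original GSPN analysis):

```python
# Throughput per multitasking level k, taken from Tables 11.5 and 11.6.
preemptive = {1: 0.827, 2: 0.903, 3: 0.909, 4: 0.909, 5: 0.909}
non_preemptive = {1: 0.604, 2: 0.756, 3: 0.838, 4: 0.879}

asymptotic = 0.909  # asymptotic throughput under preemptive interaction

# Preemptive policy: the asymptote is already reached at k = 3.
assert preemptive[3] == asymptotic

# Non-preemptive policy at k = 4: fraction of the asymptotic value.
ratio = non_preemptive[4] / asymptotic
print(f"{ratio:.1%}")  # about 96.7%, matching the text
```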
The cycle time in Tables 11.5 and 11.6 is simply the reciprocal of the
throughput and is reported for the sake of clarity and to derive task latency.
Latency represents the time between two successive reschedulings of a given
task and is computed by multiplying the cycle time by the number of tasks.
Latency corresponds to the total delay that a task experiences in the follow-
ing steps: execution on its local processor, waiting for service from the
remote processor, receiving service from the remote processor, and waiting
in the ready-task queue until the next rescheduling. Individual waiting times may
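The two relations just described, cycle time = 1/throughput and latency = k × cycle time, can be verified against Table 11.5 (a quick Python check on the tabulated values, tolerating the three-decimal rounding of the table):

```python
# Rows of Table 11.5: (k, throughput, cycle_time, latency).
rows = [
    (1, 0.827, 1.209, 1.209),
    (2, 0.903, 1.107, 2.214),
    (3, 0.909, 1.100, 3.300),
    (4, 0.909, 1.100, 4.400),
    (5, 0.909, 1.100, 5.500),
]

for k, thr, cycle, latency in rows:
    # Cycle time is the reciprocal of the throughput.
    assert abs(1.0 / thr - cycle) < 0.005
    # Latency is the cycle time multiplied by the number of tasks k.
    assert abs(k * cycle - latency) < 0.005

print("Table 11.5 is internally consistent")
```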