9.2.7 PERFORMANCE EVALUATION
In this section, the performance of the scheduling algorithms described above is tested
through simulation using a synthetic workload. Each plot on the graphs represents the
average of a set of 100 independent simulations, each 300,000 time units long.
The algorithms are executed on task sets consisting of 100
aperiodic tasks, whose parameters are generated as follows. The worst-case execution
time C_i is chosen as a random variable with uniform distribution between 50 and 350
time units. The interarrival time T_i is modeled as a random variable with a Poisson
distribution with average value equal to T_i = N C_i / ρ, where N is the total number of
tasks and ρ is the average load. The laxity of a task is computed as a random value
with uniform distribution between 150 and 1850 time units, and the relative deadline is
computed as the sum of its worst-case execution time and its laxity. The task value
is generated as a random variable with uniform distribution ranging from 150 to 1850,
the same range used for the laxity.
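As an illustration, a task set with these parameters can be generated as follows. This is a minimal Python/NumPy sketch; the function and variable names are ours, and the exact generator used in the simulations is not given in the text.

```python
import numpy as np

rng = np.random.default_rng()

def generate_task_set(n_tasks=100, rho=3.0):
    """Draw one synthetic task set with the distributions described above."""
    # Worst-case execution times C_i: uniform in [50, 350] time units.
    C = rng.uniform(50, 350, n_tasks)
    # Interarrival times T_i: Poisson-distributed with mean N * C_i / rho,
    # where rho is the desired average load.
    T = rng.poisson(n_tasks * C / rho)
    # Laxity: uniform in [150, 1850]; relative deadline D_i = C_i + laxity.
    laxity = rng.uniform(150, 1850, n_tasks)
    D = C + laxity
    # Task values: uniform in [150, 1850], the same range as the laxity.
    V = rng.uniform(150, 1850, n_tasks)
    return C, T, D, V
```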
The first experiment illustrates the effectiveness of the guaranteed (GED) and robust
(RED) scheduling paradigms with respect to the best-effort scheme, under the EDF
priority assignment. In particular, it shows how the pessimistic assumptions made in
the guarantee test affect the performance of the algorithms and how much a reclaiming
mechanism can compensate for this degradation. In order to test these effects, tasks
were generated with actual execution times less than their worst-case values. The
specific parameter varied in the simulations was the average Unused Computation
Time Ratio, defined as

β = 1 − (Actual Computation Time / Worst-Case Computation Time).
Note that if ρ_n is the nominal load estimated based on the worst-case computation
times, the actual load ρ is given by

ρ = ρ_n (1 − β).
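For example, with the nominal load ρ_n = 3 used below, this relation yields the actual loads directly; a small worked check in the same sketch style:

```python
def actual_load(rho_nominal, beta):
    # rho = rho_n * (1 - beta)
    return rho_nominal * (1 - beta)

assert actual_load(3.0, 0.125) == 2.625  # smallest unused-time ratio
assert actual_load(3.0, 0.875) == 0.375  # largest unused-time ratio
```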
In the graphs shown in Figure 9.14, the task set was generated with a nominal load
ρ_n = 3, while β was varied from 0.125 to 0.875. As a consequence, the actual mean
load changed from a value of 2.625 to a value of 0.375, thus ranging over very different
actual load conditions. The performance was measured by computing the Hit Value
Ratio (HVR); that is, the ratio of the cumulative value achieved by an algorithm to
the total value of the task set. Hence, HVR = 1 means that all the tasks completed
within their deadlines and no tasks were rejected.
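Computed from a simulation trace, HVR might look like the following sketch; the per-task record format is an assumption of ours, not part of the original simulations.

```python
def hit_value_ratio(tasks):
    """tasks: iterable of (value, met_deadline) pairs, one per task.
    Returns the cumulative value of the tasks that completed within
    their deadlines, divided by the total value of the task set."""
    total_value = sum(value for value, _ in tasks)
    gained_value = sum(value for value, met in tasks if met)
    return gained_value / total_value
```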