Figure 7.6: Per-processor scheduling data structures.
each processor. Figure 7.6 illustrates this. Each processor uses affinity
scheduling: once a thread is scheduled on a processor, it is returned to the same
processor when it is rescheduled, maximizing cache reuse. Each processor looks
at its own copy of the queue for new work to do; this can mean that some pro-
cessors can idle while others have work waiting to be done. Rebalancing occurs
only if the queue lengths are persistent enough to compensate for the time to
reload the cache for the migrated threads. Because rebalancing is possible, the
per-processor data structures must still be protected by locks, but in the com-
mon case the next processor to use the data will be the last one to have written
it, minimizing cache coherence overhead and lock contention.
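The design described above can be sketched in a few lines. This is a hypothetical illustration, not the actual data structure of any particular kernel; the class and method names (`AffinityScheduler`, `rebalance`, the `threshold` parameter) are all invented for this sketch. It shows per-processor queues, each protected by its own lock, with affinity on enqueue and a rebalancing step that migrates work only when the imbalance is large enough to justify the cache-refill cost:

```python
# Hypothetical sketch of per-processor run queues with affinity
# scheduling. All names are illustrative, not from the text.
import threading
from collections import deque

class ProcessorQueue:
    def __init__(self):
        # A lock is still required because rebalancing may touch
        # another processor's queue.
        self.lock = threading.Lock()
        self.queue = deque()

class AffinityScheduler:
    def __init__(self, num_processors):
        self.cpus = [ProcessorQueue() for _ in range(num_processors)]

    def enqueue(self, thread, last_cpu):
        # Affinity: return the thread to the processor it last ran on,
        # maximizing cache reuse.
        q = self.cpus[last_cpu]
        with q.lock:
            q.queue.append(thread)

    def dequeue(self, cpu):
        # In the common case each processor looks only at its own queue,
        # so the lock is usually uncontended and the cache line is warm.
        q = self.cpus[cpu]
        with q.lock:
            if q.queue:
                return q.queue.popleft()
        return None

    def rebalance(self, threshold=2):
        # Migrate a thread only when the queue-length imbalance is
        # persistent/large enough to outweigh reloading its cache state.
        lengths = [len(c.queue) for c in self.cpus]
        busiest = lengths.index(max(lengths))
        idlest = lengths.index(min(lengths))
        if lengths[busiest] - lengths[idlest] < threshold:
            return
        src, dst = self.cpus[busiest], self.cpus[idlest]
        with src.lock:
            if not src.queue:
                return
            migrated = src.queue.pop()
        with dst.lock:
            dst.queue.append(migrated)
```

In the common case (`enqueue` to and `dequeue` from the same processor), only one processor ever touches a given queue, so the lock and the queue's cache lines stay local; `rebalance` is the exception that forces the locks to exist at all.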
7.2.2 Scheduling Parallel Applications
A different set of challenges occurs when scheduling parallel applications onto
a multiprocessor. There is often a natural decomposition of a parallel appli-
cation onto a set of processors. For example, an image processing application
may divide the image up into equal size chunks, and assign one to each pro-
cessor. While the application could divide the image into many more chunks
than processors, this comes at a cost in efficiency: less cache reuse and more
communication to coordinate work at the boundary between each chunk.
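As a concrete illustration of this kind of decomposition, here is a small sketch of dividing an image's rows into one near-equal-size chunk per processor. The function name and signature are invented for this example:

```python
# Hypothetical sketch: one contiguous chunk of image rows per
# processor, sized as evenly as possible.
def partition_rows(num_rows, num_processors):
    """Return a (start, end) row range for each processor."""
    base, extra = divmod(num_rows, num_processors)
    chunks, start = [], 0
    for p in range(num_processors):
        # The first `extra` processors take one additional row.
        end = start + base + (1 if p < extra else 0)
        chunks.append((start, end))
        start = end
    return chunks
```

With four processors and a 10-row image this yields `[(0, 3), (3, 6), (6, 8), (8, 10)]`: one chunk per processor, so each processor's working set stays in its own cache, and the only coordination needed is at the chunk boundaries.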
If there are multiple applications running at the same time, the application
may receive more or fewer processors than it expected or started with.
Applications can come and go, acquiring processing resources and releasing them. Even
without multiple applications, the operating system itself will have system tasks
to run from time to time, disrupting the mapping of parallel work onto a fixed
number of processors.
Oblivious Scheduling
One might imagine that the scheduling algorithms we've already discussed can
take care of these cases. Each thread is time-sliced onto the available processors;
if two or more applications create more threads in aggregate than processors,
multi-level feedback will ensure that each thread makes progress and receives
a fair share of the processor. This is often called oblivious scheduling, as the
operating system scheduler operates without knowledge of the intent of the
parallel application: each thread is scheduled as a completely independent
entity.
Unfortunately, several problems can occur with oblivious scheduling on
multiprocessors: