the objective of running a program in less time. On the earliest computers, a user could run only one program at a time. This being the case, a computation-intensive program that took X minutes to run and a job that used a tape system for data I/O and also took X minutes would take a total of X + X minutes to execute when run one after the other. To improve performance, early forms of parallel processing were developed to allow the execution of the two programs to be interleaved. The computer would start an I/O operation (which is typically measured in milliseconds), and while it was waiting for that operation to complete, it would execute instructions from the processor-intensive program (whose individual operations are measured in nanoseconds). The total execution time for the two jobs combined became only slightly longer than the X minutes required for the I/O operations to complete.
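To make the timing argument concrete, here is a rough Python sketch (an illustration, not from the original text; the durations and function names are invented, with seconds standing in for the book's minutes). A background thread simulates the I/O wait while the main thread performs the computation, so the elapsed time approaches the longer of the two phases rather than their sum.

import threading
import time

IO_SECONDS = 2.0       # stands in for the X minutes of tape I/O
COMPUTE_SECONDS = 1.8  # stands in for the X minutes of computation

def io_job():
    # An I/O-bound job spends nearly all of its time waiting on the
    # device; time.sleep models that wait.
    time.sleep(IO_SECONDS)

def compute_job():
    # A busy loop models the processor-intensive job.
    deadline = time.perf_counter() + COMPUTE_SECONDS
    while time.perf_counter() < deadline:
        pass

start = time.perf_counter()
io_thread = threading.Thread(target=io_job)
io_thread.start()   # start the I/O operation ...
compute_job()       # ... and compute while it is in flight
io_thread.join()
print(f"overlapped run took {time.perf_counter() - start:.2f} s")

With these numbers, the overlapped run finishes in about 2 s, whereas running the two jobs back to back would take about 3.8 s.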
1.4.1 Multiprogramming
The next advancement in parallel processing was multiprogramming. In a
multiprogramming system, multiple programs submitted by users take turns on the processor, each receiving short periods of exclusive processor time in which to execute instructions. This
approach is known as round-robin scheduling (RR scheduling). It is one of
the oldest, simplest, fairest, and most widely used scheduling algorithms,
designed especially for time-sharing systems. In RR scheduling, a small unit
of time called a time slice is defined. All executable processes are held in a circular queue, and the length of the time slice may be chosen based on the number of executable processes in the queue. For example, if five user processes are held in the queue and the total time allocated for one pass through the queue is 1 s, each process receives 200 ms of CPU time before the scheduler moves on to the next process in the queue.
The CPU scheduler manages this queue, allocating the CPU to each process
for a time interval of one time slice. New processes are always added to the
end of the queue. The CPU scheduler picks the first process from the queue, sets a timer to interrupt it after one time slice, and then dispatches it. If the process is still running when the time slice expires, the timer interrupt preempts it and the process is placed at the end of the queue. If the process finishes before the end of its time slice, it releases the CPU voluntarily. In either case, the CPU scheduler assigns the CPU to the next process in the queue. Every time a process is granted the CPU, a context switch occurs, which adds overhead to the process execution time. To users, it appears that all of the programs are executing at the same time.
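The queue mechanics described above can be simulated in a few lines of Python (an illustrative sketch, not an actual scheduler; the process names and burst times are invented, and context-switch overhead is ignored). Each entry in the ready queue carries a process's remaining CPU demand; the loop charges the head of the queue up to one 200 ms time slice and requeues it if work remains.

from collections import deque

TIME_SLICE = 200  # ms; e.g., a 1 s pass shared by five runnable processes

def round_robin(bursts):
    # bursts: list of (name, remaining_ms) pairs in arrival order.
    queue = deque(bursts)        # the circular ready queue
    clock = 0                    # simulated time in ms
    completion = {}
    while queue:
        name, remaining = queue.popleft()   # scheduler picks the head
        run = min(TIME_SLICE, remaining)    # run until the slice or job ends
        clock += run
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining)) # preempted: back of the queue
        else:
            completion[name] = clock        # finished: CPU released voluntarily
    return completion

print(round_robin([("P1", 500), ("P2", 300), ("P3", 700),
                   ("P4", 200), ("P5", 400)]))

With these bursts, the shortest process (P4) completes first, at 800 ms, and the longest (P3) completes last, at 2,100 ms, which is simply the sum of all the bursts because the simulation charges no context-switch cost.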
Resource contention problems often arose in these early systems. Contention occurs when several processes request access to the same resource at the same time. Explicit requests for resources led to a condition known as deadlock, in which competing processes end up waiting on one another indefinitely, and competition for resources on machines with no tie-breaking instructions led to the development of the critical section routine. In order to detect deadlock situations, a counter for each
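A minimal sketch of how such a deadlock can arise appears below (illustrative Python, not from the original text; the lock names and timings are invented). Two threads each hold one resource while requesting the other, producing a circular wait; a bounded acquire timeout stands in here for a real detection mechanism such as the counter-based scheme the text refers to.

import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(name, first, second):
    with first:             # acquire one resource ...
        time.sleep(0.1)     # ... and hold it while the peer grabs the other
        # Requesting the second resource now creates a circular wait.
        # The timeout is only a stand-in for deadlock detection.
        if second.acquire(timeout=1.0):
            second.release()
            print(name, "completed")
        else:
            print(name, "timed out: circular wait (deadlock)")

t1 = threading.Thread(target=worker, args=("T1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("T2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()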