1.3 TASKS AND JOBS IN DISTRIBUTED PROGRAMS
Another common term in the theory of parallel/distributed programming is multitasking.
Multitasking refers to overlapping the computation of one program with that of another.
Multitasking is central to all modern operating systems (OSs), whereby an OS can overlap
the computations of multiple programs by means of a scheduler. Multitasking has become so
useful that almost all modern programming languages now support it by providing constructs
for multithreading. A thread of execution is the smallest sequence of instructions that an
OS can manage through its scheduler. The term thread was popularized by Pthreads (POSIX
threads [59]), a specification of concurrency constructs that has been widely adopted,
especially in UNIX systems [8]. A technical distinction is often made between processes and
threads. A process runs in its own address space, while a thread runs within the address
space of a process (i.e., threads are parts of processes, not standalone sequences of
instructions). A process can contain one or many threads. In principle, processes do not
share address spaces with one another, while the threads in a process do share the
process's address space. The term task is also used to refer to a small unit of work. In
this chapter, we use the term task to denote a process, which can include multiple threads.
In addition, we refer to a group of tasks (possibly only one) that belong to the same
program/application as a job. An application can encompass multiple jobs. For instance, a
fluid dynamics application typically consists of three jobs: one responsible for structural
analysis, one for fluid analysis, and one for thermal analysis. Each of these jobs can in
turn have multiple tasks to carry out the pertinent analysis. Figure 1.2 demonstrates the
concepts of processes, threads, tasks, jobs, and applications.
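To make the process/thread distinction concrete, consider the following minimal Pthreads
sketch (our own illustration, not taken from the text). Two threads run inside one process;
because both share the process's address space, they update the same global counter and
must synchronize with a mutex:

    /* Two threads sharing one process's address space.
     * Build with: cc threads.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;   /* lives in the process's address space */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);   /* both threads see the same counter */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* prints 2000000 */
        return 0;
    }

Had the two workers been separate processes rather than threads, each would have
incremented its own private copy of counter in its own address space, and no
synchronization (or shared result) would have arisen.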
1.4 MOTIVATIONS FOR DISTRIBUTED PROGRAMMING
In principle, every sequential program can be parallelized by identifying sources of
parallelism in it. Various analysis techniques at the algorithm and code levels can be
applied to identify parallelism in sequential programs [67]. Once sources of parallelism
are detected, a program can be split into serial and parallel parts, as shown in Figure 1.3.
FIGURE 1.2 A demonstration of the concepts of processes, threads, tasks, jobs, and
applications. (The figure depicts a distributed application/program composed of jobs, Job1
and Job2, each grouping processes/tasks such as Process1/Task1 and Process2/Task2, which in
turn contain threads such as Thread1, Thread2, and Thread3.)