Let's consider a human analogy: a bank. A bank with one person working in it (traditional process)
has lots of "bank stuff," such as desks and chairs, a vault, and teller stations (process tables and
variables). There are lots of services that a bank provides: checking accounts, loans, savings
accounts, etc. (the functions). With one person to do all the work, that person would have to know
how to do everything, and could do so, but it might take a bit of extra time to switch among the
various tasks. With two or more people (threads), they would share all the same "bank stuff," but
they could specialize in their different functions. And if they all came in and worked on the same
day, lots of customers could be served quickly.
To change the number of banks in town would be a big effort (creating new processes), but to hire
one new employee (creating a new thread) would be very simple. Everything that happened inside
the bank, including interactions among the employees there, would be fairly simple (user space
operations among threads), whereas anything that involved the bank down the road would be
much more involved (kernel space operations between processes).
When you write a multithreaded program, 99% of your programming is identical to what it was
before--you spend your efforts in getting the program to do its real work. The other 1% is spent in
creating threads, arranging for different threads to coordinate their activities, dealing with thread-
specific data, etc. Perhaps 0.1% of your code consists of calls to thread functions.
We've now covered the basic concept of threads at the user level. As noted, the concepts and most
of the implementation aspects are valid for all thread models. What's missing is the definition of
the relationship between threads and the operating system. How do system calls work? How are
threads scheduled onto CPUs?
It is at this level that the various implementations differ significantly. The operating systems
provide different system calls, and even identical system calls can differ widely in efficiency and
robustness. The kernels are constructed differently and provide different resources and services.
Keep in mind as we go through this implementation aspect that 99% of your threads programming
will be done above this level, and the major distinctions will be in the area of efficiency.
Concurrency vs. Parallelism