Yet another facility provided by all POSIX-conformant UNIX systems is the
ability to have multiple threads of control within a single process. These threads of
control, usually just called threads, are like lightweight processes that share a
common address space and everything associated with that address space, such as
file descriptors, environment variables, and outstanding timers. However, each
thread has its own program counter, own registers, and own stack. When a thread
blocks (i.e., has to stop temporarily until I/O completes or some other event happens), other threads in the same process are still able to run. Two threads in the
same process operating as a producer and consumer are similar, but not identical,
to two single-thread processes that are sharing a memory segment containing a
buffer. The differences have to do with the fact that in the latter case, each process
has its own file descriptors, etc., whereas in the former case all of these items are
shared. We saw the use of Java threads in our producer-consumer example earlier.
Often the Java runtime system uses an operating system thread for each of its
threads, but it does not have to do this.
As an example of where threads might be useful, consider a World Wide Web
server. Such a server might keep a cache of commonly used Web pages in main
memory. If a request is for a page in the cache, the Web page is returned immediately. Otherwise, it is fetched from disk. Unfortunately, waiting for the disk takes a long time (typically 20 msec), during which the process is blocked and cannot serve new incoming requests, even those for Web pages in the cache.
The solution is to have multiple threads within the server process, all of which
share the common Web page cache. When one thread blocks, other threads can
handle new requests. To prevent blocking without threads, one could have multiple
server processes, but this would probably entail replicating the cache, thus wasting
valuable memory.
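As an illustration only, the per-request routine such a server might run (one thread per request) could be sketched as follows. The cache layout and the cache_lookup and fetch_from_disk helpers are hypothetical stand-ins, and the disk wait is simulated with a short sleep.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct page { char url[64]; char data[256]; };

/* Cache shared by all server threads; one entry is preloaded for the demo. */
static struct page cache[4] = { { "/index.html", "<html>home</html>" } };

/* Hypothetical helper: return the cached page for url, or NULL on a miss. */
static const char *cache_lookup(const char *url)
{
    for (int i = 0; i < 4; i++)
        if (strcmp(cache[i].url, url) == 0)
            return cache[i].data;
    return NULL;
}

/* Hypothetical helper standing in for a slow disk read; the sleep models the
   roughly 20-msec wait during which only the calling thread is blocked. */
static const char *fetch_from_disk(const char *url)
{
    (void) url;                          /* unused in this sketch */
    usleep(20000);
    return "<html>fetched from disk</html>";
}

/* One server thread would run this routine per incoming request. */
static void handle_request(const char *url)
{
    const char *page = cache_lookup(url);
    if (page == NULL)
        page = fetch_from_disk(url);
    printf("GET %s -> %s\n", url, page);
}

int main(void)
{
    handle_request("/index.html");       /* cache hit: served immediately */
    handle_request("/news.html");        /* cache miss: waits for the "disk" */
    return 0;
}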
The UNIX standard for threads is called pthreads, and is defined by POSIX (P1003.1C). It contains calls for managing and synchronizing threads. It is not defined whether threads are managed by the kernel or entirely in user space. The
most commonly used thread calls are listed in Fig. 6-44.
Let us briefly examine the thread calls shown in Fig. 6-44. The first call, pthread_create, creates a new thread. After successful completion, one more thread is running in the caller's address space than before the call. A thread that has done its job and wants to terminate calls pthread_exit. A thread can wait for another thread to exit by calling pthread_join. If the thread waited for has already exited, the pthread_join call finishes immediately. Otherwise it blocks.
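To make this concrete, a minimal sketch in C using these three calls might look as follows. The worker function, its argument, and the printed messages are invented for illustration, and error checking is omitted (compile with cc -pthread).

#include <pthread.h>
#include <stdio.h>

/* Thread body invented for illustration; the argument is just a small number. */
static void *worker(void *arg)
{
    printf("hello from thread %ld\n", (long) arg);
    pthread_exit(NULL);                  /* this thread has done its job */
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker, (void *) 1L);  /* one more thread now runs */
    pthread_join(tid, NULL);             /* block until the new thread has exited */
    printf("worker has terminated\n");
    return 0;
}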
Threads can synchronize using mutexes. A mutex guards some resource, such as a buffer shared by two threads. To make sure that only one thread at a time accesses the shared resource, threads are expected to lock the mutex before touching
the resource and unlock it when they are done. As long as all threads obey this
protocol, race conditions can be avoided. Mutexes are like binary semaphores
(semaphores that can take on only the values of 0 and 1). The name "mutex" comes from the fact that mutexes are used to ensure mutual exclusion.
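A minimal sketch of this locking protocol might look as follows; the shared counter stands in for the shared buffer, and the thread bodies are invented for illustration.

#include <pthread.h>
#include <stdio.h>

static int shared = 0;                               /* the guarded resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread follows the protocol: lock, touch the resource, unlock. */
static void *adder(void *arg)
{
    (void) arg;                                      /* unused in this sketch */
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        shared++;                                    /* only one thread is here at a time */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, adder, NULL);
    pthread_create(&t2, NULL, adder, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);                 /* 200000, with no lost updates */
    return 0;
}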