The OS can be viewed as one gigantic program with many library calls into it [read(), write(),
time(), etc.]. Kernels are unusual in that they have always been designed for a type of
concurrency. DOS is simple and allows no concurrent calls. If your program blocks while reading
from disk, everything waits. Multitasking systems, on the other hand, have always allowed
blocking system calls to execute concurrently. The calls would get to a certain point [say, when
read() actually issues the disk request], save their own state, and then go to sleep on their own.
This technique was nonpreemptive, and it did not allow for parallelism. Code paths between
context switching points could be very long, so few systems claimed any sort of realtime behavior.
In the first case in Figure 11-1 (which is like SunOS 4.1.3 and most early operating systems), only
one process can be executing a system call at any one time. Many processes may be blocked in the
middle of a system call, but only one may be running. In the second case, locks are put around
each major section of code in the kernel, so several processes can be executing system calls, as
long as the calls are to different portions of the kernel. In the third case (like most current systems),
the granularity of the locks has been reduced to the point that many threads can be executing the
same system calls, as long as they don't use exactly the same structures.
Figure 11-1. Concurrency within the Kernel
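The third case can be sketched in user-level Java, assuming a hypothetical kernel table in which each entry carries its own lock. Two threads can then run the very same code path concurrently, as long as they touch different entries; the class and method names here are illustrative, not part of any real kernel API:

```java
import java.util.concurrent.locks.ReentrantLock;

// Fine-grained locking sketch: one lock per table entry, so threads
// executing the same "system call" proceed in parallel when they use
// different structures, and serialize only on the same entry.
public class FineGrained {
    static final int N = 4;
    static final ReentrantLock[] locks = new ReentrantLock[N];
    static final long[] table = new long[N];
    static { for (int i = 0; i < N; i++) locks[i] = new ReentrantLock(); }

    // The same code path for every caller; only the entry's lock is held.
    static void update(int entry, long delta) {
        locks[entry].lock();
        try {
            table[entry] += delta;
        } finally {
            locks[entry].unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Two threads run update() concurrently on different entries.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) update(0, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) update(1, 1); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(table[0] + " " + table[1]);
    }
}
```

With a single lock around update(), the second thread would wait even though it never touches entry 0; per-entry locks are exactly the reduction in granularity the text describes.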
Now, if you take these diagrams and substitute "processor" for "process," you will get a slightly
different picture, but the results will be largely the same. If you can execute several things
concurrently, with preemptive context switching, you can execute them in parallel. A slightly
different but perfectly valid way of looking at this is to consider it in terms of critical sections. In
the "no concurrency" case, the critical section is very large--it's the whole kernel. In the "more
concurrency" case, there are lots of little critical sections.
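The two extremes can be contrasted in a few lines of Java, assuming a hypothetical pair of kernel services; the class and method names are invented for illustration. In the coarse version the critical section is the whole object, so one caller excludes all others; in the fine version each service has its own small critical section:

```java
// "No concurrency" case: one monitor guards everything, so at most
// one thread is inside the "kernel" at a time.
class CoarseKernel {
    synchronized void read() { /* simulated disk work */ }
    synchronized void time() { /* blocked while read() runs */ }
}

// "More concurrency" case: one lock per subsystem, so read() and
// time() can execute in parallel on different threads.
class FineKernel {
    private final Object diskLock  = new Object();
    private final Object clockLock = new Object();
    void read() { synchronized (diskLock)  { /* simulated disk work */ } }
    void time() { synchronized (clockLock) { /* simulated clock work */ } }
}
```

The code in both classes is the same; only the extent of the critical sections differs, which is the whole distinction the diagrams draw.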
Symmetric Multiprocessing
SMP merely means that all processors are created equal and endowed by their designers with
certain inalienable functionalities. Among these functionalities are shared memory, the ability to
run kernel code, and the processing of interrupts. The ability of more than one CPU to run kernel
code simultaneously is merely an issue of concurrency--an important issue, of course, but not a
defining one.