Numerical Programs
Many numerical programs (e.g., matrix operations) are made up of huge numbers of tiny, identical,
and independent operations. They are most easily (well, most commonly) expressed as loops
inside loops. Slicing these loops into appropriate-sized chunks for threads is slightly more
complicated, and there would be no reason to do so, save for the order-N speedup that can be
obtained on an N-way SMP machine.
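As an illustration (this sketch is not from the text, and the class name, array sizes, and thread count are invented for the example), here is one way such a loop might be sliced by hand in Java: the rows of a matrix-by-vector multiply are divided into one chunk per thread, and each thread works only on its own rows, so there are no data conflicts.

// A minimal sketch of slicing a matrix-by-vector multiply into one chunk
// per thread on an N-way machine. All names and sizes are illustrative.
public class ChunkedMultiply {
    public static void main(String[] args) throws InterruptedException {
        final int N = 4;                              // assume a 4-way SMP machine
        final double[][] a = new double[1000][1000];  // matrix
        final double[] x = new double[1000];          // input vector
        final double[] y = new double[1000];          // result vector

        Thread[] workers = new Thread[N];
        int rowsPerChunk = a.length / N;
        for (int t = 0; t < N; t++) {
            final int lo = t * rowsPerChunk;
            final int hi = (t == N - 1) ? a.length : lo + rowsPerChunk;
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    // Each thread owns rows [lo, hi); no two threads ever
                    // write the same element of y, so no locks are needed.
                    for (int i = lo; i < hi; i++) {
                        double sum = 0.0;
                        for (int j = 0; j < x.length; j++)
                            sum += a[i][j] * x[j];
                        y[i] = sum;
                    }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers)
            w.join();                                 // wait for all chunks to finish
    }
}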
Old Code
These are the "slightly modified existing systems." This is existing code that makes you think to
yourself: "If I just change a few bits here and there, add a few locks, then I can thread it and
double my performance."
It's true, it is possible to do this, and there are lots of examples. However, this is a tough situation
because you will constantly be finding new interactions that you didn't realize existed before. In
such cases (which, due to the nature of the modern software industry, are far too common), you
should concentrate on the bottlenecks and look for absolutely minimal submodules that can be
rewritten. Better still, take the time to do it right: re-architect the program and rewrite it
correctly from the beginning.
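As a hypothetical illustration of why "just add a few locks" is harder than it looks (the class and method names here are invented), consider existing code threaded by marking its methods synchronized; an unnoticed interaction between two individually safe calls still leaves a race.

// Each method is made thread-safe by adding "synchronized" ...
public class Inventory {
    private int onHand = 100;

    public synchronized void remove(int n) { onHand -= n; }
    public synchronized int  getOnHand()   { return onHand; }

    // ... but a hidden interaction remains: this check-then-act sequence is
    // still a race, even though each call is individually synchronized.
    public void removeIfAvailable(int n) {
        if (getOnHand() >= n)   // another thread may remove stock right here
            remove(n);
    }
}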
Automatic Threading
In a subset of cases, it is possible for a compiler to do the threading for you. If you have a program
written in such a way that a compiler can analyze its structure, analyze the interdependencies of
the data, and determine that parts of your program can run simultaneously without data conflicts,
then the compiler can build the threads.
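In Java terms (the compilers discussed below work on Fortran, but the idea of data independence is the same, and these method names are purely illustrative), the first loop in this sketch has iterations that touch only their own array elements and could safely run simultaneously, while the second carries a dependence from one iteration to the next and could not.

public class LoopKinds {
    static void independent(double[] a, double[] b, double[] c) {
        // Each iteration reads and writes only its own elements, so a
        // parallelizing compiler could run the iterations on separate threads.
        for (int i = 0; i < a.length; i++)
            c[i] = a[i] + b[i];
    }

    static void dependent(double[] a) {
        // Iteration i reads the result of iteration i - 1, so these
        // iterations cannot simply be run at the same time.
        for (int i = 1; i < a.length; i++)
            a[i] = a[i] + a[i - 1];
    }
}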
With current technology, the capabilities above are limited largely to Fortran programs that have
time-consuming loops in which the individual computations in those loops are obviously
independent. The primary reason for this limitation is that Fortran programs tend to have very
simple structuring, both for code and data, making the analysis viable. Languages like C, which
have constructs such as pointers, make the analysis enormously more difficult. There are MP
compilers for C, but far fewer programs can take advantage of such compiling techniques.
With the different Fortran MP compilers,[3] it is possible to take vanilla Fortran 77 or 90 code,
make no changes to it whatsoever, and have the compiler turn out threaded code. In some cases it
works very well; in others, not. The cost of trying it out is very small, of course. A number of Ada
compilers will map Ada tasks directly on top of threads, allowing existing Ada programs to take
advantage of parallel machines with no changes to the code.
[3] Digital's Fortran compiler, Sun® Fortran MP, Kuck and Associates' Fortran compiler, EPC's
Fortran compiler, SGI's MP Fortran compiler, and probably more.
Programs Not to Thread
Then there is a large set of programs that it doesn't make any sense to thread. Probably 99% of all
programs either do not lend themselves easily to threading or run just fine the way they are. Some
programs simply require separate processes in which to run. Perhaps they need to execute one task
as root but need to avoid having any other code running as root. Perhaps the program needs to be
able to control its global environment closely, changing working directories, etc. And most
programs, such as an icon editor or a calculator application, simply run quite fast enough as they
are and have no inherent multitasking.
In all truth, multithreaded programming is more difficult than regular programming. It brings a
host of new problems that must be dealt with, many of them quite challenging. Threads are of value
primarily when the task at hand is complex.