· If your program is highly CPU bound and you do some I/O, one LWP per CPU and enough to cover all simultaneous blocking system calls[9] is called for.
[9] Blocking system calls include all calls to the usual system calls such as read(), but any thread that blocks on a cross-process synchronization variable should also be counted. Bound threads are independent of this, as they each have their own LWP.
· If your program is only I/O bound, you'll want as many LWPs as simultaneous blocking system calls. (A rough calculation along these lines is sketched just below.)
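As a rough illustration of these two rules of thumb (this sketch is not from the original text; it assumes Runtime.availableProcessors(), which appeared in Java 1.4, and that you can estimate how many blocking calls will be outstanding at once):

public class LwpEstimate {
    // CPU-bound program that also does some I/O:
    // one LWP per CPU, plus enough to cover all simultaneous blocking calls.
    public static int forCpuBound(int simultaneousBlockingCalls) {
        int cpus = Runtime.getRuntime().availableProcessors();
        return cpus + simultaneousBlockingCalls;
    }

    // Purely I/O-bound program: as many LWPs as simultaneous blocking calls.
    public static int forIoBound(int simultaneousBlockingCalls) {
        return simultaneousBlockingCalls;
    }
}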
How to Get Those LWPs in Java
And now we get to the specifics. This is the one area where things get very implementation- and platform-dependent. This is also an issue that has aroused great debate in the halls of
comp.programming.threads. Voices have been raised, enormous volumes of argument have been
written, veritable fisticuffs have been exchanged over this!
First let's consider what we really want from our scheduler. We want all of our runnable threads to
run as much as possible. We want to make as many blocking system calls as we feel like making,
and we want them to execute concurrently.
One implementation technique for getting this effect is to use bound threads. Another is to ensure that the library creates a sufficient number of LWPs and guarantees that the runnable threads will be time-sliced.
In Windows NT there is no issue with the number of LWPs available for a Java program. NT uses
bound threads for everything, so you get all the LWP equivalents you need. Digital UNIX
implements its library in such a fashion that you get one "virtual processor" (LWP equivalent) for
each actual CPU and one more for every outstanding I/O request. So there are no such problems
with Digital UNIX.
If you are running on a system that implements only PCS scheduling for Java threads (e.g., Solaris)
there is no portable mechanism for specifying how many LWPs you'd like. Moreover, it is
possible that you will want more LWPs than the system will give you automatically. This is one of
those (very few) unfortunate places where the default is not what you want and you are forced to
make a call to native code.
In Solaris you are provided with only one LWP by default. If all the LWPs in a process are
blocked, waiting for I/O, Solaris will add another LWP if needed. This ameliorates the problem
partially but still does not provide the full complement of LWPs if you either have multiple CPUs
or don't make enough blocking calls. In most typical cases you will not get as many LWPs as
you'd like. In Solaris, you are forced to make a native call to pthread_setconcurrency() to
obtain the "expected" level of kernel concurrency. Obviously, this is not a good thing and makes a
mess of your 100% pure Java program, but it is necessary for most high-performance MT
programs. The technique for doing this is straightforward and shown in Making a Native Call to
pthread_setconcurrency().
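A minimal sketch of such a native call, assuming a JNI wrapper class named Concurrency and an illustrative shared library name (these names are placeholders, not the book's actual listing):

public class Concurrency {
    static {
        System.loadLibrary("concurrency");   // illustrative library name
    }

    // Thin native wrapper around pthread_setconcurrency(3T) on Solaris.
    public static native void setConcurrency(int level);
}

/* Corresponding C side (concurrency.c), compiled into libconcurrency.so:

   #include <jni.h>
   #include <pthread.h>

   JNIEXPORT void JNICALL
   Java_Concurrency_setConcurrency(JNIEnv *env, jclass cls, jint level)
   {
       pthread_setconcurrency((int) level);
   }
*/

Early in main() the program would then call something like Concurrency.setConcurrency(numberOfCPUs + simultaneousBlockingCalls).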
Changing Scheduling Parameters for LWPs
Just because a thread is bound to an LWP does not imply that the LWP is going to be scheduled
on a CPU immediately. Depending upon the nature of your application requirements, you may
need to alter the kernel-level scheduling priority of that LWP. If you need merely to ensure that it
gets a CPU within a second, then relying upon the normal time-slicing scheduler is probably
sufficient.
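Within pure Java the only portable knob is Thread.setPriority(); changing the kernel-level scheduling class or priority of the LWP itself again requires native code. A minimal sketch of the portable case (assuming the default time-slicing scheduler is otherwise acceptable):

Thread worker = new Thread(new Runnable() {
    public void run() {
        // CPU-intensive work goes here
    }
});
// A hint to the scheduler; on bound-thread implementations (e.g., Windows NT)
// this may also be reflected in the kernel-level priority of the underlying LWP.
worker.setPriority(Thread.MAX_PRIORITY);
worker.start();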