wide area, each working on parts of the problem. SETI@home [1] is an example.
Millions of users worldwide have downloaded a client application that processes
radio telescope data, searching for repetitive signals that might indicate intelligent
life elsewhere in the universe. Each client application obtains a small chunk of
data, processes it, and returns the results to the central SETI@home computer
for compilation. In this model, each client operates almost entirely independently,
neither interacting with nor even aware of the other clients. There have
been other similar distributed applications ranging from encryption/decryption
applications to climate study applications (see [2]).
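The fetch–process–report cycle just described can be sketched in Java as follows. The class, method names, and the toy periodicity analysis are purely illustrative assumptions; the real SETI@home client uses its own network protocol and far more elaborate signal processing:

```java
/**
 * A minimal sketch of a SETI@home-style work loop: fetch a chunk,
 * process it locally, report the result. All names are hypothetical;
 * fetchChunk() and reportResult() stand in for real network calls.
 */
public class WorkUnitClient {

    /** Stand-in for downloading a small chunk of data from the central server. */
    static double[] fetchChunk() {
        return new double[] {0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9};
    }

    /** Toy analysis: how strongly the signal correlates with itself at lag p. */
    static double periodicityScore(double[] data, int p) {
        double score = 0.0;
        for (int i = 0; i + p < data.length; i++) {
            score += data[i] * data[i + p];   // correlation at lag p
        }
        return score;
    }

    /** Process one chunk: find the lag with the strongest self-correlation. */
    static int bestPeriod(double[] data) {
        int best = 1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int p = 1; p < data.length / 2; p++) {
            double s = periodicityScore(data, p);
            if (s > bestScore) { bestScore = s; best = p; }
        }
        return best;
    }

    /** Stand-in for uploading the result to the central server. */
    static void reportResult(int period) {
        System.out.println("best period = " + period);
    }

    public static void main(String[] args) {
        double[] chunk = fetchChunk();    // 1. obtain a small chunk of data
        int period = bestPeriod(chunk);   // 2. process it, fully independently
        reportResult(period);             // 3. return the result for compilation
        // prints "best period = 2" for the alternating test chunk above
    }
}
```

Because each chunk is processed without reference to any other client, the server needs only to hand out chunks and collect answers, which is what makes this style of distribution scale to millions of volunteer machines.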
Another distributed computing concept is massively parallel computing on a
parallel computer such as the NEC Earth Simulator computer in Japan [3], and,
in the USA, the IBM Blue Gene/L at IBM's Thomas Watson Research Center [4],
the Apple “BigMac” cluster at Virginia Tech [5], or the National Leadership
Computing Facility being built at Oak Ridge National Laboratory [6], which is
targeted to be the world's fastest scientific research computer when completed
[7, 8]. All of these systems are clusters of 1000 or more processors. In fact, the
twice-yearly Top 500 ranking of the world's fastest supercomputers is dominated by
massively parallel machines [9]. A related idea is the Parallel Virtual Machine
(PVM) system in which many disparate computers, large and small, and possibly
even using multiple operating systems, are linked together via the Internet to
create a virtual parallel machine [10]. On these parallel systems, the different
parts of the calculation typically interact with each other in some way. While
Java code is portable to any platform on which a JVM is available, there may be
no JVM on the most exotic supercomputer designs. However, several of the Top
500 supercomputers are Linux clusters, upon which one could probably install
the Java Runtime Environment for Linux. In any case, such massively parallel
supercomputing systems are not the topic of this chapter.
Distributed computing in an object-oriented view involves distributing the
software objects over multiple nodes or hosts, perhaps utilizing mobile objects
that move from node to node as needed instead of pre-configuring the work to be
done at each node. Intelligent mobile agent software is an example. Intelligent
agents are also not the subject of this chapter.
For general scientific computing, as opposed to state-of-the-art supercom-
puting, the distributed computing techniques one needs to know are much less
grandiose, though still very useful. In this chapter, we discuss a simple two-node
distributed computing concept (a paradigm that should be familiar from the
previous chapters as client/server computing). As we have discussed, in a
client/server arrangement, the client is typically a GUI in which the user
prepares input and views output. Heavy-duty computations are routed to a
remote server machine. There might be many
reasons to separate the heavy computations to a server. One obvious reason is
to improve performance when the calculations to be done are so intensive that