CHAPTER 10
■ ■ ■
Distributed Spring
In this chapter, you will learn the principles behind various distributed computing concepts, and how
to use Spring in conjunction with some very powerful, open source third-party products to build
solutions leveraging those concepts. Grid computing is a very important concept in today's world, for
many reasons. It solves many problems, some more pressing than others:
Scalability: Distribution provides a mechanism by which to scale an application
to meet demand. This is simple, on the face of it: the more computers there are
responding to requests, the more requests can be served. This is the
quintessential reason behind clustering, and behind load balancing.
Redundancy: Computers fail; it's built in. The only thing you can guarantee about
a hard disk of any make is that it will, at some point or another, fail, and more than
likely in your lifetime. Having a computer take over when another fails, or having
a computer's load lessened by adjoining members in a cluster, is a valuable
benefit of distribution.
Parallelization: Distribution enables solutions designed to split problems into
more manageable chunks, or to expedite processing by bringing more power
to bear on the problem. Some problems are inherently, embarrassingly
parallel. These often reflect real life. Take, for example, a process that's
designed to check hotels, car rentals, and airline fares and show you
the best possible options. All three checks can be done concurrently, as they share
no state. It would be a crime not to parallelize this sort of thing (see the sketch
after this list). Other problem domains are not so clearly parallelizable. A binary
search, for example, is a poor candidate for parallelization, because each step
depends on the result of the previous comparison.
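To make the travel example concrete, here is a minimal sketch of running the three stateless checks concurrently using plain java.util.concurrent. The TravelSearch class and the findBest* lookup methods are hypothetical stand-ins for real service calls; only the Executor plumbing is the point.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TravelSearch {

    public static void main(String[] args) throws Exception {
        // One worker per independent check; the three lookups share no state,
        // so they can safely run at the same time.
        ExecutorService executor = Executors.newFixedThreadPool(3);

        Future<String> hotels = executor.submit(new Callable<String>() {
            public String call() { return findBestHotel(); }
        });
        Future<String> cars = executor.submit(new Callable<String>() {
            public String call() { return findBestCarRental(); }
        });
        Future<String> flights = executor.submit(new Callable<String>() {
            public String call() { return findBestFlight(); }
        });

        // Each get() blocks until that lookup completes, so total latency is
        // roughly the slowest single check, not the sum of all three.
        System.out.println(hotels.get());
        System.out.println(cars.get());
        System.out.println(flights.get());

        executor.shutdown();
    }

    // Hypothetical stand-ins for remote service calls.
    private static String findBestHotel()     { return "Hotel: Example Inn, $89/night"; }
    private static String findBestCarRental() { return "Car: Compact, $25/day"; }
    private static String findBestFlight()    { return "Flight: ABC123, $199"; }
}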
The other reasons are more subtle, but very real. Over the history of computing, we've clung to the
notion that computers will constantly expand in capacity with time. This expectation is popularly
attributed to Moore's Law, named for Gordon Moore of Intel, which observes that the number of
transistors on a chip doubles roughly every two years. Looking at history, you might remark that we've,
in fact, done quite well along that scale. Indeed, servers in the early 80s were an order of magnitude
slower than computers in the early 90s, and computers at the turn of the millennium were roughly an
order of magnitude faster than those in the early 90s. As I write this, in 2009, however, computers are
not, strictly speaking, similarly faster than the computers in the late 90s. Instead, they've become more
parallel, with multiple cores, and can better serve software designed to run in parallel. Thus,
parallelization isn't just a good idea for big problems; it's the norm simply to take full advantage of
modern-day computing power.
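One simple way software can keep pace with that shift from faster clocks to more cores is to size its worker pools to the hardware it lands on. The snippet below is a minimal sketch of that idea using the standard library; the CoreAwarePool class name is mine, not the chapter's.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoreAwarePool {
    public static void main(String[] args) {
        // Modern CPUs grow wider (more cores) rather than faster, so a pool
        // sized to the available cores scales with the machine it runs on.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        System.out.println("Sized worker pool to " + cores + " cores");
        pool.shutdown();
    }
}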