<lock-level>write</lock-level>
</autolock>
</locks>
</jee-application>
</spring>
</application>
You can still exercise all the same control over fields and so on as in the previous
example, but in this case it might be easier to simply configure what should not be clustered, such as
injected connection pools.
<beans>
  <bean name="customerService">
    <non-distributed-field>connectionPool</non-distributed-field>
  </bean>
</beans>
Running this example is no different from running the previous one; simply specify a different
configuration XML file.
10-3. You Want to Farm Out Execution to a Grid
Problem
You want to distribute processing over many nodes, perhaps to speed up results through concurrency,
or perhaps simply to provide load balancing and fault tolerance.
Solution
You can use something like GridGain, which was designed to transparently offload processing to a grid.
This can be done in many ways: one is to use the grid as a load-alleviation mechanism, something to absorb
the extra work. Another, where possible, is to split the job up so that many nodes can work on it
concurrently.
Approach
GridGain is an implementation of a processing grid. GridGain is different from Terracotta or Coherence
because they are data grids. Data grids and processing grids are often used together, and in fact
GridGain encourages the use of any number of data grids alongside its processing functionality. There are
many data grids, such as Hadoop's HDFS, which are designed to provide fault-tolerant, distributed storage.
These sorts of grids are natural complements to a processing grid such as GridGain in that they can feed it
massive amounts of data fast enough to keep it busy. GridGain allows code to be
farmed out to a grid for execution and the results returned to the client, transparently. You can do
this in many ways. The easiest route is to simply annotate the methods you want farmed out,
configure some mechanism to detect and act on those annotations, and you're done!
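For example, a minimal sketch of the annotation-driven style might look like the following. It assumes the GridGain 2.x-era API (the @Gridify annotation and the GridFactory lifecycle methods), that a supported weaver such as AspectJ load-time weaving is configured to intercept the annotated call, and hypothetical names such as GreetingService and buildGreeting:

import org.gridgain.grid.GridException;
import org.gridgain.grid.GridFactory;
import org.gridgain.grid.gridify.Gridify;

public class GreetingService {

    // @Gridify asks the configured weaver to intercept this call and execute
    // the method on a node in the grid rather than in the local JVM.
    @Gridify
    public static String buildGreeting(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) throws GridException {
        GridFactory.start();        // start (or join) the local grid node
        try {
            System.out.println(buildGreeting("grid"));
        } finally {
            GridFactory.stop(true); // stop the local node and leave the grid
        }
    }
}

With the weaver in place, the call to buildGreeting() is transparently turned into a grid execution; without it, the method simply runs locally.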
The other approach is slightly more involved, but it is where solutions such as GridGain and
Hadoop really shine: use the Map/Reduce pattern to partition a job into smaller pieces and then run
those pieces on the grid concurrently. Map/Reduce, popularized by Google, comes from functional
programming languages, which often provide map() and reduce() functions. The idea is that you partition
a job and send the pieces off to be processed; finally, you reduce the individual results into a single result.
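To make the pattern concrete, here is a small, self-contained illustration in plain Java. It is not GridGain's API; it only shows the shape of the idea, with a local thread pool standing in for the grid and hypothetical names such as MapReduceSketch:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MapReduceSketch {

    public static void main(String[] args) throws Exception {
        String sentence = "the quick brown fox jumps over the lazy dog";

        // "Map" phase: partition the job into independent pieces. On a real
        // processing grid each piece would be shipped to a separate node;
        // here a local thread pool stands in for the grid.
        List<Callable<Integer>> pieces = new ArrayList<>();
        for (String word : sentence.split(" ")) {
            pieces.add(() -> word.length()); // each piece counts one word's characters
        }
        ExecutorService grid = Executors.newFixedThreadPool(4);
        List<Future<Integer>> partials = grid.invokeAll(pieces);

        // "Reduce" phase: combine the partial results into a single final result.
        int total = 0;
        for (Future<Integer> partial : partials) {
            total += partial.get();
        }
        grid.shutdown();

        System.out.println("Non-whitespace characters: " + total);
    }
}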
 