That's it! The explanation may seem a bit overwhelming, but to run this example, you build the two
artifacts discussed earlier: one for the slave JVMs and one for the job. To test all the pieces, you need to
start at least three things:
ActiveMQ: In order for your JVMs to communicate with each other, you need to
run the ActiveMQ server. You can download it from .
From ActiveMQ's bin directory, execute the activemq script to launch the server.
Slave JVMs: You can start as many of these as you wish. These are the JVMs that
execute the ItemProcessor on each item the slave reads off the queue. To start
the slave JVMs, execute the command java -jar remote-chunking-0.0.1-listener-SNAPSHOT.jar
once for each slave you wish to run.
The job: The last step is to execute the jar file configured to run the job. You
launch it like any other job, with the command
java -jar remote-chunking-0.0.1-SNAPSHOT.jar jobs/geocodeJob.xml geocodingJob.
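The three launch steps above can be sketched as a small shell script. This is only a sketch: the ACTIVEMQ_HOME location, the slave count, and the DRY_RUN switch are assumptions added for illustration; the jar names and job arguments are taken from the text.

```shell
#!/bin/sh
# Sketch of the three launch steps for the remote-chunking example.
# ACTIVEMQ_HOME, N_SLAVES, and DRY_RUN are assumptions, not from the book.
ACTIVEMQ_HOME=${ACTIVEMQ_HOME:-/opt/activemq}
N_SLAVES=${N_SLAVES:-2}
DRY_RUN=${DRY_RUN:-1}   # 1 = just print each command; 0 = actually run it

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"           # dry run: print the command instead of executing it
  else
    "$@" &              # launch in the background so the script can continue
  fi
}

# 1. Start the ActiveMQ broker the JVMs use to communicate.
run "$ACTIVEMQ_HOME/bin/activemq"

# 2. Start one listener (slave) JVM per worker you want.
i=1
while [ "$i" -le "$N_SLAVES" ]; do
  run java -jar remote-chunking-0.0.1-listener-SNAPSHOT.jar
  i=$((i + 1))
done

# 3. Launch the job itself, like any other Spring Batch job.
run java -jar remote-chunking-0.0.1-SNAPSHOT.jar jobs/geocodeJob.xml geocodingJob
```

With DRY_RUN left at 1 the script only prints the commands, which is a convenient way to check the jar paths before launching anything.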
Spring Batch takes care of the rest!
But how do you know that your slaves did some of the work? The proof is in the output. The first
place to look is the database, where the longitude and latitude for each customer should now be
populated. Above and beyond that, each slave node, as well as the JVM in which the job was run,
produces output statements showing which items were processed at each node. Listing 11-28 shows
example output from one of the slaves.
Listing 11-28. Results of geocodingJob
2011-04-11 21:49:31,668 DEBUG
[org.springframework.batch.integration.chunk.ChunkProcessorChunkHandler] - <Handling chunk:
ChunkRequest: jobId=8573, sequence=9, contribution=[StepContribution: read=0, written=0,
filtered=0, readSkips=0, writeSkips=0, processSkips=0, exitStatus=EXECUTING], item count=1>
******** I'm going to process Merideth Gray lives at 303 W Comstock Street,Seattle
2011-04-11 21:49:31,971 DEBUG
[org.springframework.batch.item.database.JdbcBatchItemWriter] - <Executing batch with 1
items.>
You may notice that not only are the slave nodes processing your items, but the local JVM that is
executing the job is processing items as well. The reason lies in your configuration. Because the job's
configuration contains the information for the listener, the local JVM has a listener processing items just
like any other slave. This is a good thing: there is rarely a reason to offload all the
processing to other nodes while the JVM executing your batch job sits idle, doing nothing but
listening for results.
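To give a sense of why the local JVM participates, here is a hedged sketch of what the listener side of such a configuration might look like. The ChunkProcessorChunkHandler that appears in the log output in Listing 11-28 wraps a SimpleChunkProcessor, which delegates each received item to an ordinary ItemProcessor/ItemWriter pair; because these beans live in the job's configuration file, the master JVM ends up with a handler too. The bean ids geocoder and customerWriter are hypothetical placeholders, not names from the example.

```xml
<!-- Sketch only: geocoder and customerWriter are hypothetical bean ids.  -->
<!-- The handler receives chunks from the queue and hands each item to a -->
<!-- plain ItemProcessor/ItemWriter pair, with no knowledge of the job.  -->
<bean id="chunkHandler"
      class="org.springframework.batch.integration.chunk.ChunkProcessorChunkHandler">
  <property name="chunkProcessor">
    <bean class="org.springframework.batch.core.step.item.SimpleChunkProcessor">
      <constructor-arg ref="geocoder"/>       <!-- the ItemProcessor -->
      <constructor-arg ref="customerWriter"/> <!-- the ItemWriter -->
    </bean>
  </property>
</bean>
```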
Remote chunking is a great way to spread the cost of processing items across multiple JVMs. It has
the benefits of requiring no changes to your job configuration and using dumb workers (workers with no
knowledge of Spring Batch or your job's database, and so on). But keep in mind that durable