This turns out to be of limited value. The 80 percent case is that you'll need to bind parameters from
the job's launch to the Spring beans in the application context. These parameters are available only at
runtime, whereas the steps in the XML application context are configured at design time. This comes up
in many places. Previous examples demonstrated ItemWriters and ItemReaders with a hard-coded path.
That works fine unless you want to parameterize the file name, which is hardly acceptable unless you
plan on using a job just once!
Spring Framework 3.0 features an enhanced expression language (SpEL) that Spring Batch 2.0, which
depends on Spring Framework 3.0, uses to defer binding of a parameter until the correct time, or, in
this case, until the bean is in the correct scope. Spring Batch 2.0 introduces the "step" scope for just
this purpose. Let's take a look at how you'd rework the previous example to use a parameterized file
name for the ItemReader's resource:
<!-- … this is the same as before…-->
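The elided configuration might look something like the following sketch. The bean id, the lineMapper reference, and the job parameter key input.file are illustrative assumptions rather than values from the original example; the essential pieces are the scope="step" attribute and the #{jobParameters[...]} expression:

```xml
<!-- A sketch of a step-scoped reader. The parameter key "input.file"
     and the lineMapper bean are assumptions for illustration. -->
<bean id="itemReader" scope="step"
      class="org.springframework.batch.item.file.FlatFileItemReader">
    <!-- Resolved from the JobParameters at runtime, once the step is running -->
    <property name="resource" value="#{jobParameters['input.file']}"/>
    <property name="lineMapper" ref="lineMapper"/>
</bean>
```

Because the bean lives in the step scope, Spring defers evaluating the #{jobParameters['input.file']} expression until a step execution is under way, at which point the JobParameters supplied at launch are available to resolve it.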
All you did was scope the bean (the FlatFileItemReader) to the life cycle of a step (at which point
those JobParameters resolve correctly) and then use the EL syntax to parameterize the path it works
off of.
This chapter introduced you to the concepts of batch processing, some of its history, and why it fits into a
modern-day architecture. You learned about Spring Batch, the batch processing framework from SpringSource,
and how to do reading and writing with ItemReader and ItemWriter implementations in your batch jobs. You
wrote your own ItemReader and ItemWriter implementations, as needed, and saw how to control the
execution of steps inside a job.
The next chapter will discuss Terracotta and GridGain. You'll learn how to use Terracotta to build a
distributed cache and take your Spring applications onto the grid with GridGain.