to be processed. If you were to process each one with a separate reader, as you have been doing up to
now, you would run into a number of issues, from performance to maintainability. So how does Spring
Batch support reading multiple files with the same format?
Using a pattern similar to the one you just used in the multiline record example, Spring Batch
provides an ItemReader called the MultiResourceItemReader. This reader wraps another ItemReader,
as the CustomerFileItemReader did; however, instead of defining the resource to be read on the
child ItemReader, you configure the MultiResourceItemReader with a pattern that matches all of the
files to be read. Let's take a look.
You can use the same file format as in the multiline record example (as shown in Listing 7-19),
which allows you to reuse the ItemReader configuration you created in that example as well.
However, if you have five of these files, with the filenames customerFile1.csv, customerFile2.csv,
customerFile3.csv, customerFile4.csv, and customerFile5.csv, you need to make two small updates.
The first is to the configuration. You need to tweak your configuration to use the
MultiResourceItemReader with the correct resource pattern. You will also remove the reference to the
input resource ( <beans:property name="resource" ref="customerFile" /> ) from the FlatFileItemReader
that you have used up to this point. Listing 7-31 shows the updated configuration.
Listing 7-31. Configuration to Process Multiple Customer Files
<beans:bean id="customerFileReader"
      class="org.springframework.batch.item.file.MultiResourceItemReader">
    <beans:property name="resources" value="file:/Users/mminella/temp/customerFile*.csv"/>
    <beans:property name="delegate" ref="trueCustomerFileReader"/>
</beans:bean>

<beans:bean id="trueCustomerFileReader"
      class="org.springframework.batch.item.file.FlatFileItemReader">
    <beans:property name="lineMapper">
        <beans:bean class="org.springframework.batch.item.file.mapping.PatternMatchingCompositeLineMapper">
            <beans:property name="tokenizers">
                <beans:map>
                    <beans:entry key="CUST*" value-ref="customerLineTokenizer"/>
                    <beans:entry key="TRANS*" value-ref="transactionLineTokenizer"/>
                </beans:map>
            </beans:property>
            <beans:property name="fieldSetMappers">
                <beans:map>
                    <beans:entry key="CUST*" value-ref="customerFieldSetMapper"/>
                    <beans:entry key="TRANS*" value-ref="transactionFieldSetMapper"/>
                </beans:map>
            </beans:property>
        </beans:bean>
    </beans:property>
</beans:bean>
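As a side note, on newer versions of Spring Batch (4.x and later) the same wiring can be expressed in Java configuration using the MultiResourceItemReaderBuilder. The sketch below is not part of this chapter's XML-based approach; the method and parameter names other than the Spring classes are illustrative, and Customer is the chapter's domain class.

```java
import java.io.IOException;

import org.springframework.batch.item.file.MultiResourceItemReader;
import org.springframework.batch.item.file.ResourceAwareItemReaderItemStream;
import org.springframework.batch.item.file.builder.MultiResourceItemReaderBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

public class MultiFileJobConfiguration {

    // A Java-config sketch of Listing 7-31 (assumes Spring Batch 4.x+).
    @Bean
    public MultiResourceItemReader<Customer> customerFileReader(
            ResourceAwareItemReaderItemStream<Customer> trueCustomerFileReader)
            throws IOException {
        // Resolve the same wildcard pattern the XML "resources" property uses.
        Resource[] resources = new PathMatchingResourcePatternResolver()
                .getResources("file:/Users/mminella/temp/customerFile*.csv");

        return new MultiResourceItemReaderBuilder<Customer>()
                .name("customerFileReader")
                .resources(resources)             // replaces the single "resource" property
                .delegate(trueCustomerFileReader) // the FlatFileItemReader from the multiline example
                .build();
    }
}
```

The key point is the same in both styles: the delegate reader no longer owns a resource; the MultiResourceItemReader opens each matched file in turn and hands it to the delegate.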