In chunk-oriented processing, input is read from a reader, optionally processed, and then
aggregated into a chunk. Finally, at a configurable interval (the commit-interval attribute
specifies how many items are processed before the transaction is committed), all the input is sent
to the writer. If there is a transaction manager in play, the transaction is also committed. Right before a
commit, the metadata in the database is updated to mark the progress of the job.
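The read/process/aggregate/commit cycle described above can be sketched in plain Java. This is a hypothetical illustration only, not Spring Batch's actual implementation; the class and method names are invented for the sketch, and the real framework also handles transactions, restartability, and metadata updates at each commit point:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the chunk-oriented loop. In Spring Batch the
// equivalent logic lives inside the framework's chunk-processing internals.
public class ChunkLoopSketch {

    // Reads items one at a time, "processes" them, aggregates them into a
    // chunk, and "commits" (here: records the chunk) every commitInterval items.
    // Returns the number of commits performed.
    public static int process(List<String> input, int commitInterval,
                              List<List<String>> writtenChunks) {
        int commits = 0;
        List<String> chunk = new ArrayList<>();
        for (String item : input) {                      // 1. read one item
            String processed = item.trim();              // 2. optionally process it
            chunk.add(processed);                        // 3. aggregate into the chunk
            if (chunk.size() == commitInterval) {        // 4. interval reached:
                writtenChunks.add(new ArrayList<>(chunk)); //    send chunk to the writer
                chunk.clear();                           //    then commit the transaction
                commits++;                               //    (and update job metadata)
            }
        }
        if (!chunk.isEmpty()) {                          // final, partial chunk
            writtenChunks.add(new ArrayList<>(chunk));
            commits++;
        }
        return commits;
    }
}
```

With a commit-interval of 2 and five input items, the sketch writes and commits three chunks: two full ones and a final partial one.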
The first responsibility is reading a file from the file system. You use a provided implementation for the
example. Reading CSV files is a very common scenario, and Spring Batch's support does not disappoint.
The org.springframework.batch.item.file.FlatFileItemReader class delegates the work of parsing each
record to a LineMapper, which in turn delegates two tasks: a LineTokenizer splits a record into its
fields, and a FieldSetMapper maps those fields onto an object.
In this example, you use an org.springframework.batch.item.file.transform.DelimitedLineTokenizer
and tell it to identify fields delimited by a comma (,). You name the fields so that you can reference them
later in the configuration. These names don't have to be the values of some header row in the input file; they
just have to correspond to the order in which the fields appear in the input file. These names are also used
by the FieldSetMapper to match properties on a POJO. As each record is read, the values are applied to an
instance of a POJO, and that POJO is returned.
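A brief sketch of tokenizing a single record may make this concrete. The field names and sample line here are illustrative assumptions, not taken from the book's actual input file; DelimitedLineTokenizer defaults to the comma delimiter:

```java
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.batch.item.file.transform.FieldSet;

// Illustrative field names; substitute the names from your own input file.
DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(); // comma is the default delimiter
tokenizer.setNames("firstName", "lastName", "email");

// Tokenize one record into a FieldSet; fields become addressable by name.
FieldSet fields = tokenizer.tokenize("Jane,Doe,jane@example.com");
String email = fields.readString("email");
```

Naming the fields is what lets the FieldSetMapper later match them to JavaBean properties by name rather than by position.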
The class returned from the reader, UserRegistration, is a rather plain JavaBean. The
BeanWrapperFieldSetMapper class creates a POJO of the type configured by its targetType
property and sets the JavaBean properties corresponding to the names given to the names property
of the DelimitedLineTokenizer.
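Wiring these pieces together in Java configuration might look like the following sketch. The field names, file path, and UserRegistration properties are assumptions for illustration, not the book's actual configuration; the sketch assumes UserRegistration has JavaBean setters matching the configured names:

```java
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.core.io.FileSystemResource;

// Tokenizer: splits each comma-delimited record and names the fields.
DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
tokenizer.setNames("firstName", "lastName", "email"); // illustrative names

// Mapper: instantiates the targetType and sets properties matching the names.
BeanWrapperFieldSetMapper<UserRegistration> fieldSetMapper =
        new BeanWrapperFieldSetMapper<>();
fieldSetMapper.setTargetType(UserRegistration.class);

// LineMapper: ties tokenizer and mapper together for each line of the file.
DefaultLineMapper<UserRegistration> lineMapper = new DefaultLineMapper<>();
lineMapper.setLineTokenizer(tokenizer);
lineMapper.setFieldSetMapper(fieldSetMapper);

// Reader: pulls records from the file and emits one UserRegistration per line.
FlatFileItemReader<UserRegistration> reader = new FlatFileItemReader<>();
reader.setResource(new FileSystemResource("registrations.csv")); // illustrative path
reader.setLineMapper(lineMapper);
```

Because BeanWrapperFieldSetMapper matches by name, reordering columns in the file only requires updating the names passed to the tokenizer, not the POJO.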