Listing 7-5. The copyFileStep and copyFileJob
<step id="copyFileStep">
    <tasklet>
        <chunk reader="customerFileReader" writer="outputWriter"
               commit-interval="10"/>
    </tasklet>
</step>

<job id="copyJob">
    <step id="step1" parent="copyFileStep"/>
</job>
The interesting piece of all this is how little code is required to read and write this file. In
this example, the only code you need to write is the domain object itself (Customer). Once you build
your application, you can execute it with the command shown in Listing 7-6.
Listing 7-6. Executing the copyJob
java -jar copyJob.jar jobs/copyJob.xml copyJob customerFile=/input/customer.txt
The output of the job is the contents of the input file, formatted according to the writer's format
string, as shown in Listing 7-7.
Listing 7-7. Results of the copyJob
Michael T. Minella, 123 4th Street, Chicago IL 60606
Warren Q. Gates, 11 Wall Street, New York NY 10005
Ann B. Darrow, 350 Fifth Avenue, New York NY 10118
Terrence H. Donnelly, 4059 Mt. Lee Drive, Hollywood CA 90068
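Although the Customer class itself is not shown in this excerpt, a minimal sketch of what such a domain object might look like follows. The field names here are assumptions inferred from the output in Listing 7-7; the actual fields in the book's example may differ:

```java
// A hypothetical sketch of the Customer domain object. Field names are
// inferred from the formatted output and are assumptions, not the book's
// exact definition.
public class Customer {

    private String firstName;
    private String middleInitial;
    private String lastName;
    private String address;
    private String city;
    private String state;
    private String zip;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getMiddleInitial() { return middleInitial; }
    public void setMiddleInitial(String middleInitial) { this.middleInitial = middleInitial; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }

    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }

    public String getState() { return state; }
    public void setState(String state) { this.state = state; }

    public String getZip() { return zip; }
    public void setZip(String zip) { this.zip = zip; }

    @Override
    public String toString() {
        // Mirrors the layout of the output shown in Listing 7-7
        return firstName + " " + middleInitial + ". " + lastName + ", "
                + address + ", " + city + " " + state + " " + zip;
    }
}
```

Because the ItemReader populates the object via its setters and the writer's format string pulls from its getters, a plain JavaBean like this is all the custom code the job requires.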
Fixed-width files are a form of input provided for batch processes in many enterprises. As you can
see, parsing the file into objects via FlatFileItemReader and FixedLengthTokenizer makes this process
easy. In the next section you will look at a file format that provides a small amount of metadata to tell us
how the file is to be parsed.
Delimited Files
Delimited files carry a small amount of metadata within the file itself to describe its format: a
character acts as a divider between each field in a record. Instead of having to know up front where
each field begins and ends, you let the file dictate what each field consists of by dividing each
record with a delimiter.
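To make this concrete, a configuration for a delimiter-based tokenizer might resemble the following sketch. The bean id and the field names are assumptions for illustration; DelimitedLineTokenizer ships with Spring Batch and uses a comma as its default delimiter:

```xml
<bean id="customerLineTokenizer"
      class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
    <!-- Names assigned to the fields produced from each tokenized record -->
    <property name="names"
              value="firstName,middleInitial,lastName,address,city,state,zip"/>
    <!-- Optional: the delimiter defaults to a comma -->
    <property name="delimiter" value=","/>
</bean>
```

Note that no column ranges appear anywhere in this configuration; the delimiter in the data itself determines where one field ends and the next begins.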
As with fixed-width records, the process for reading a delimited record is the same. The record is first
tokenized by the LineTokenizer into a FieldSet. From there, the FieldSet is mapped into your
domain object by the FieldSetMapper. Because the process is the same, all you need to do is swap in a
LineTokenizer implementation that parses your file based upon a delimiter instead of premapped