How It Works
This example demonstrates the simplest possible use of Spring Batch: to provide scalability. The
program does nothing but read data from a CSV (comma-separated values) file, with fields delimited by
commas and rows delimited by newlines, and then insert the records into a table. You are exploiting the
intelligent infrastructure that Spring Batch provides to avoid worrying about scaling. This application
could easily be written by hand. You will not exploit any of the smart transactional functionality made
available to you, nor will you worry about retries.
This solution is as simple as Spring Batch solutions get. Spring Batch models solutions using an XML
schema, which is new in Spring Batch 2.0. The abstractions and terms are in the spirit of classical
batch processing solutions, so they will be portable from previous technologies and perhaps to
subsequent ones. Spring Batch provides useful default classes that you can override or selectively
adjust. The following example uses many of the utility implementations provided by Spring Batch.
Fundamentally, most solutions look about the same and feature a combination of the same set of
interfaces; it's usually just a matter of picking and choosing the right ones.
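To make this concrete, a job definition in the Spring Batch 2.0 XML namespace might look roughly like the following sketch. The job, step, and bean names (insertIntoDbFromCsvJob, csvFileReader, jdbcItemWriter) are placeholders for illustration, not names taken from this example's actual configuration.

```xml
<!-- A minimal sketch of a chunk-oriented job in the Spring Batch 2.0 XML namespace.
     The reader and writer bean ids are hypothetical; they would point at the
     utility implementations (e.g., a flat-file reader and a JDBC writer)
     configured elsewhere in the application context. -->
<batch:job id="insertIntoDbFromCsvJob">
    <batch:step id="readCsvAndWriteToDb">
        <batch:tasklet>
            <batch:chunk reader="csvFileReader"
                         writer="jdbcItemWriter"
                         commit-interval="10"/>
        </batch:tasklet>
    </batch:step>
</batch:job>
```

The structure mirrors classical batch terminology: a job is composed of steps, and each chunk-oriented step wires a reader to a writer.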
When I ran this program, it worked on files with 20,000 rows, and it worked on files with 1 million
rows. Memory use did not grow, which suggests there were no memory leaks. Naturally, the larger file
took much longer: the application ran for several hours to insert the 1 million rows.
Tip Of course, it would be catastrophic if you worked with a million rows and it failed on the penultimate
record, because you'd lose all your work when the transaction rolled back! Read on for examples of "chunking."
Additionally, you might want to read through Chapter 4 to brush up on transactions.
The following example inserts records into a table. I'm using PostgreSQL, which is a good, mature
open source database; you can use any database you want. (More information and downloads are
available at http://www.postgresql.org.) The schema for the table is simple:
create table USER_REGISTRATION
(
    ID           bigserial not null,
    FIRST_NAME   character varying(255) not null,
    LAST_NAME    character varying(255) not null,
    COMPANY      character varying(255) not null,
    ADDRESS      character varying(255) not null,
    CITY         character varying(255) not null,
    STATE        character varying(255) not null,
    ZIP          character varying(255) not null,
    COUNTY       character varying(255) not null,
    URL          character varying(255) not null,
    PHONE_NUMBER character varying(255) not null,
    FAX          character varying(255) not null,
    constraint USER_REGISTRATION_PKEY primary key (ID)
);
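A table like this is typically paired with a plain JavaBean that the reader and writer bind to, with one property per CSV field and column. The class below is a hypothetical sketch of such a bean (the name UserRegistration and its accessors are assumptions for illustration); Spring Batch's field-set mapping and JDBC writing utilities work against simple properties like these.

```java
// Hypothetical domain class mirroring the USER_REGISTRATION table.
// One String property per CSV field/column; ID is generated by the database.
public class UserRegistration {
    private Long id;
    private String firstName;
    private String lastName;
    private String company;
    private String address;
    private String city;
    private String state;
    private String zip;
    private String county;
    private String url;
    private String phoneNumber;
    private String fax;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public String getCompany() { return company; }
    public void setCompany(String company) { this.company = company; }
    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }
    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }
    public String getState() { return state; }
    public void setState(String state) { this.state = state; }
    public String getZip() { return zip; }
    public void setZip(String zip) { this.zip = zip; }
    public String getCounty() { return county; }
    public void setCounty(String county) { this.county = county; }
    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }
    public String getPhoneNumber() { return phoneNumber; }
    public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; }
    public String getFax() { return fax; }
    public void setFax(String fax) { this.fax = fax; }
}
```

Each row read from the CSV file becomes one instance of this bean, and each instance becomes one insert into USER_REGISTRATION.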