8-5. Integrating Two Systems Using a File System
Problem
You want to build a solution that takes files on a well-known, shared file system and uses them as the
conduit for integration with another system. For example, your application might produce a
comma-separated value (CSV) dump of all the customers added to the system every hour. The company's
third-party financial system is updated with these new customer records by a process that checks a shared
folder, mounted over a network file system, and processes the CSV records. What's required is a way to
treat the presence of a new file as an event on the bus.
Solution
You have an idea of how this could be built using standard techniques, but you want something more
elegant. Let Spring Integration isolate you from the event-driven nature of the file system and from the
file input/output requirements, and focus instead on writing the code that deals with the File payload
itself. With this approach, you can write unit-testable code that accepts an input and responds by adding
the customers to the financial system. When the functionality is finished, you configure it in the Spring
Integration pipeline and let Spring Integration invoke it whenever a new file is recognized on the file
system. This is an example of an event-driven architecture (EDA). An EDA lets you ignore how an event was
generated and focus instead on reacting to it, in much the same way that event-driven GUIs let you shift
your code's focus from controlling how a user triggers an action to reacting to the invocation itself.
Spring Integration makes this a natural approach for loosely coupled solutions. In fact, this code should
look very similar to the solution you built for the JMS queue, because it's just another class with a
method that takes a parameter (a Message, a parameter of the same type as the message's payload, and
so on).
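
To make that concrete, here is a minimal sketch, not taken from the recipe itself, of the kind of class you
would write; the package, class, and method names are hypothetical. Because the method signature is simply
void handleNewCustomerFile(File), it can be exercised in a unit test with any File you hand it, with no bus
involved.

package com.example.integration;

import java.io.File;

/**
 * A plain, unit-testable handler. It knows nothing about polling, directories,
 * or Spring Integration; it simply accepts the File payload and reacts to it.
 */
public class CustomerFileHandler {

    public void handleNewCustomerFile(File csvFile) {
        // Parse the CSV and push the new customer records to the financial system.
        System.out.println("Processing customer CSV: " + csvFile.getAbsolutePath());
    }
}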
How It Works
Concerns in Dealing with a File System
Building a solution to talk to JMS is old hat. Instead, let's consider what building a solution using a
shared file system might look like. Imagine how to build it without an ESB solution. You need some
mechanism by which to poll the file system periodically and detect new files. Perhaps Quartz and some
sort of cache? You need something to read these files in quickly and then pass the payload to your
processing logic efficiently. Finally, your system needs to work with that payload.
Spring Integration frees you from all that infrastructure code; all you need to do is configure it.
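
As an illustration of how little that configuration is, here is a sketch using the Spring Integration Java
DSL; the recipe itself may use the XML namespace instead, and the directory path, poll interval, and bean
names here are assumptions. It wires the hypothetical CustomerFileHandler from the Solution into a polling
flow.

package com.example.integration;

import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.file.dsl.Files;

@Configuration
public class FileIntegrationConfiguration {

    @Bean
    public CustomerFileHandler customerFileHandler() {
        return new CustomerFileHandler();
    }

    @Bean
    public IntegrationFlow newCustomerFileFlow(CustomerFileHandler handler) {
        return IntegrationFlows
                // Poll the shared directory for new CSV files every ten seconds.
                .from(Files.inboundAdapter(new File("/mnt/shared/customers"))
                                .patternFilter("*.csv"),
                        e -> e.poller(Pollers.fixedDelay(10_000)))
                // Hand each newly detected File to the plain handler above.
                .handle(handler, "handleNewCustomerFile")
                .get();
    }
}

The poller, the duplicate-file bookkeeping, and the message creation are all Spring Integration's concern;
your code sees only the File.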
There are some issues with file system-based processing, however, that are left to you to resolve. Behind
the scenes, Spring Integration still handles polling the file system and detecting new files, but it can't
possibly know, in a way that is semantically correct for your application, when a file has been
"completely" written; providing a way around that is up to you.
Several approaches exist. You might write out the real file and then write a second, 0-byte marker file,
and configure Spring Integration to look for that marker. The presence of the marker means it's safe to
assume that the real payload is present; when Spring Integration finds it, it knows there's another file
(perhaps with the same name and a different file extension) that it can start reading and working with.
Another solution along the same lines is to have the client (the "producer") write the file to the
directory using a name that the glob pattern Spring Integration is polling with won't match. Then, when the
write is finished, rename the file into place (an mv), if you trust your file system to do the right thing
there; a sketch of that producer-side rename follows.
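
Here is a minimal sketch of that rename-based approach on the producer side, assuming the consumer's
pattern only matches *.csv; the directory, file names, and class name are hypothetical.

package com.example.producer;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;

public class CustomerCsvWriter {

    /**
     * Writes the dump under a name the consumer's "*.csv" pattern won't match,
     * then renames it into place once the write is complete, so the consumer
     * never picks up a half-written file.
     */
    public void writeCustomerDump(Path sharedDirectory, List<String> csvLines) throws IOException {
        Path inProgress = sharedDirectory.resolve(
                "customers-" + System.currentTimeMillis() + ".csv.writing");
        Files.write(inProgress, csvLines, StandardCharsets.UTF_8);

        Path finished = sharedDirectory.resolve(
                inProgress.getFileName().toString().replace(".csv.writing", ".csv"));
        // The Java equivalent of an mv: atomic only where the file system supports it,
        // which is exactly the "trust your file system" caveat above.
        Files.move(inProgress, finished, StandardCopyOption.ATOMIC_MOVE);
    }
}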
 