can be changed by using an NSB databus to set new limits in WCF. However, it is not well suited to sending the very large files involved in a daily upload. For this reason, Salesforce provides a data loader.
In many systems, there are similar processes that derive from the Extract-Transform-Load (ETL) process (see http://en.wikipedia.org/wiki/Extract,_transform,_load). An ETL process will extract data from a system, say a SQL Server table holding that day's changes; transform it into a format that can be loaded into the system that needs the daily data, say an XML data file; and then load it into another system, say a Salesforce cloud system.
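As a rough illustration, the following sketch covers the extract and transform steps: it pulls a day's worth of changed rows from a hypothetical SQL Server table (DailyCustomerChanges), shapes them into an XML file, and leaves the load step to whatever uploader the target system expects. The table, columns, and connection string are assumptions invented for the example.

using System;
using System.Data.SqlClient;
using System.Xml.Linq;

class DailyEtl
{
    static void Main()
    {
        // Extract: pull the rows that changed today (table and columns are hypothetical).
        var doc = new XElement("Customers");
        using (var conn = new SqlConnection(
            "Server=.;Database=Crm;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT CustomerId, Name, Email FROM DailyCustomerChanges " +
                "WHERE ChangedOn >= CAST(GETDATE() AS DATE)", conn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Transform: shape each row as an XML element.
                    doc.Add(new XElement("Customer",
                        new XAttribute("Id", reader["CustomerId"]),
                        new XElement("Name", reader["Name"]),
                        new XElement("Email", reader["Email"])));
                }
            }
        }

        // Staging for the load step: write the XML file the upload job will send on.
        doc.Save("daily-customers.xml");
        Console.WriteLine("Wrote daily-customers.xml");
    }
}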
Some systems may simply need a daily load of the information instead of a second-by-second replay of the data that has changed. The idea is that instead of sending web service calls or messages to cloud queues, a daily snapshot can be taken from the on-premise MSMQ, or from a SQL table, and sent securely to the cloud to be uploaded through SFTP, a secure version of FTP. The flow is: on-premise data source, daily snapshot file, SFTP transfer, cloud upload.
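To make the transfer step concrete, here is a minimal sketch of the upload using the SSH.NET library (Renci.SshNet), one common SFTP client for .NET. The host name, credentials, and remote path are placeholders, not values from this solution.

using System.IO;
using Renci.SshNet;

class SnapshotUploader
{
    static void Main()
    {
        // Connection details are placeholders; keep real credentials in
        // secured configuration, not in source code.
        using (var sftp = new SftpClient("sftp.example.com", 22, "uploader", "secret"))
        {
            sftp.Connect();
            using (var file = File.OpenRead("daily-customers.xml"))
            {
                // Push the daily snapshot file to the cloud-side drop folder.
                sftp.UploadFile(file, "/inbound/daily-customers.xml");
            }
            sftp.Disconnect();
        }
    }
}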
There are many ways to update the cloud with local data or to load data into a new cloud system. Auditing and reporting should be a characteristic of any approach to sending data to the cloud, as an organization may one day be asked by a body such as the IRS to show that the customers were initially loaded into and validated in the cloud solution. For this reason, the saga design pattern is of great benefit for taking a snapshot of the messages that were sent to the cloud solution through any of these means. Even in the SFTP solution, we could record which records were put into a file and verify that the data was sent and uploaded into the cloud database. The benefit of NSB is that we can take snapshots of messages, audit through queues, and report on the interactions and endpoints.
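As a sketch of that idea, the following NServiceBus-style saga (written against the v4/v5-era API) tracks one daily batch: it starts when a snapshot is taken and completes when the upload is confirmed, leaving an auditable record of what was sent. The SnapshotTaken and UploadConfirmed message types and their properties are assumptions invented for this example, not part of any shipped API.

using System;
using NServiceBus;
using NServiceBus.Saga;

// Hypothetical messages for this example.
public class SnapshotTaken : ICommand
{
    public string BatchId { get; set; }
    public int RecordCount { get; set; }
}

public class UploadConfirmed : IMessage
{
    public string BatchId { get; set; }
}

public class UploadAuditSagaData : ContainSagaData
{
    [Unique]
    public string BatchId { get; set; }
    public int RecordCount { get; set; }
    public DateTime SnapshotTakenAt { get; set; }
}

public class UploadAuditSaga : Saga<UploadAuditSagaData>,
    IAmStartedByMessages<SnapshotTaken>,
    IHandleMessages<UploadConfirmed>
{
    protected override void ConfigureHowToFindSaga(
        SagaPropertyMapper<UploadAuditSagaData> mapper)
    {
        // Correlate both messages to the saga by the batch identifier.
        mapper.ConfigureMapping<SnapshotTaken>(m => m.BatchId).ToSaga(s => s.BatchId);
        mapper.ConfigureMapping<UploadConfirmed>(m => m.BatchId).ToSaga(s => s.BatchId);
    }

    public void Handle(SnapshotTaken message)
    {
        // Record what was put into the daily file, for later auditing.
        Data.BatchId = message.BatchId;
        Data.RecordCount = message.RecordCount;
        Data.SnapshotTakenAt = DateTime.UtcNow;
    }

    public void Handle(UploadConfirmed message)
    {
        // The cloud side confirmed the upload; the audit trail is complete.
        MarkAsComplete();
    }
}

Because the saga's data is persisted and its messages flow through audited queues, the batch identifier, record count, and timestamps remain available for the kind of reporting described above.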