Concurrency control becomes extremely difficult unless all clients update the
servers in the same sequence. The asynchronous interleaving of transactions
makes this difficult, if not impossible, to control, ruling this strategy out
for several classes of applications.
N times the server capacity is required to provide N - 1 redundancy, which is
costly.
The database application must perform N times the processing for each transac-
tion or read request to the database, so application performance is significantly
impacted. If the bulk of the application's execution time is spent in database
processing, then a two-way solution would require roughly 200% of the original
execution time in the application.
When one of the replicas fails, a complex process is required to bring it back up
to date with the remaining servers once it comes back online.
A tremendous burden is placed on the application designer to manage this pro-
cess, rather than on the database software or at least on specialized database
administrators (DBAs).
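
As a rough illustration of the cost of this first strategy, the sketch below has the
application itself apply every change to N replicas in the same order. It is only a
sketch under simplifying assumptions: local sqlite3 files stand in for the N independent
database servers, and the table, schema, and helper names are invented for the example.

import sqlite3

# Two local sqlite files stand in for N = 2 independent database servers.
REPLICA_PATHS = ["replica_0.db", "replica_1.db"]

def open_replicas(paths):
    conns = [sqlite3.connect(p) for p in paths]
    for c in conns:
        c.execute("CREATE TABLE IF NOT EXISTS account "
                  "(id INTEGER PRIMARY KEY, balance INTEGER)")
    return conns

def apply_everywhere(conns, sql, params=()):
    # Every client must apply each change to all replicas in the same order,
    # so the database work (and latency) is paid N times per transaction.
    for c in conns:
        c.execute(sql, params)
    for c in conns:
        c.commit()
    # If one replica is down here, the application (not the database) must
    # track the missed changes and resynchronize that replica later.

conns = open_replicas(REPLICA_PATHS)
apply_everywhere(conns,
                 "INSERT OR REPLACE INTO account (id, balance) VALUES (?, ?)",
                 (1, 100))
apply_everywhere(conns,
                 "UPDATE account SET balance = balance - 25 WHERE id = ?",
                 (1,))

Even this toy version shows where the burden falls: ordering, error handling, and
resynchronization all live in application code rather than in the database.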
The second strategy is to exploit data replication. All of the major database prod-
ucts support data replication, allowing data to be moved from one system to another.
Data replication can be configured to apply changes (inserts, updates, deletes) to the
replicated database based on various configurable policies. These policies generally
support a streaming model and a batch model for applying the changes. Batch processing
is considerably more efficient, but may require a time delay (seconds to hours) while a
sufficient volume of data change accumulates to justify a batch apply. Replication pro-
cessing generally occurs asynchronously through a “capture and apply” process: the
transaction log entries are read asynchronously (capture) and then flowed to the target
system, where they are replayed (apply). This is a widely used strategy. Because the
capture and apply processing is asynchronous, the possibility of some data loss does
exist with this method.
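
The capture-and-apply flow can be pictured with a small sketch. The log record format,
the queue standing in for the channel between source and target, and the BATCH_SIZE
policy below are all simplifying assumptions for illustration, not any particular
product's replication interface.

import queue
import threading

BATCH_SIZE = 3            # batch policy: apply only after this many changes accumulate
captured = queue.Queue()  # stands in for the channel between source and target

def capture(log_records):
    # Capture: read the source's transaction log asynchronously and forward each entry.
    for record in log_records:
        captured.put(record)
    captured.put(None)    # end-of-log marker for this sketch

def replay(batch, target):
    for op, key, value in batch:
        if op == "delete":
            target.pop(key, None)
        else:
            target[key] = value

def apply_to_target(target):
    # Apply: replay the captured records on the target, one batch at a time.
    batch = []
    while True:
        record = captured.get()
        if record is None:
            break
        batch.append(record)
        if len(batch) >= BATCH_SIZE:
            replay(batch, target)
            batch.clear()
    replay(batch, target)  # flush whatever remains

source_log = [("insert", 1, "a"), ("update", 1, "b"),
              ("insert", 2, "c"), ("delete", 2, None)]
target = {}
capture_thread = threading.Thread(target=capture, args=(source_log,))
capture_thread.start()
apply_to_target(target)
capture_thread.join()
# Because capture and apply run asynchronously, records still in flight when
# the source fails would be lost; hence the possibility of some data loss.
print(target)

Setting BATCH_SIZE to 1 approximates the streaming policy; larger values trade
currency of the target for apply efficiency.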
The third strategy for creating standby database servers exploits memory-to-memory
log shipping. This processing is similar in some ways to replication, which also
applies log changes, but there are major differences (a brief code sketch follows
the list below):
The log shipping is memory to memory, not disk to memory, and is therefore
far more efficient.
Unlike the replication-based scenario, where control of the standby server is
really unrelated to the replication process, in these memory-to-memory scenar-
ios the technology is designed specifically for maintaining standby servers.
Therefore, in addition to moving data, these features also typically handle rerout-
ing client connections to the standby server when the primary fails.
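
To make the contrast concrete, here is a minimal sketch of the memory-to-memory idea,
under the simplifying assumptions that shipping is a direct in-memory hand-off at commit
time and that failover is a single promote-and-reroute step. The Primary, Standby, and
reroute names are invented for illustration and do not correspond to any specific
product's implementation.

class Standby:
    def __init__(self):
        self.data = {}
        self.active = False

    def receive(self, log_buffer):
        # Replay the shipped log records straight from memory into the standby's state.
        for op, key, value in log_buffer:
            if op == "delete":
                self.data.pop(key, None)
            else:
                self.data[key] = value

    def take_over(self):
        self.active = True  # the standby is already current, so takeover is fast

class Primary:
    def __init__(self, standby):
        self.data = {}
        self.standby = standby
        self.log_buffer = []

    def execute(self, op, key, value=None):
        self.log_buffer.append((op, key, value))

    def commit(self):
        # Ship the in-memory log buffer to the standby at commit time, rather
        # than having a separate process read the log back from disk later.
        self.standby.receive(self.log_buffer)
        for op, key, value in self.log_buffer:
            if op == "delete":
                self.data.pop(key, None)
            else:
                self.data[key] = value
        self.log_buffer.clear()

def reroute(failed_primary, standby):
    # On primary failure the log-shipping technology, not the application,
    # promotes the standby and redirects client connections to it.
    standby.take_over()
    return standby

standby = Standby()
primary = Primary(standby)
primary.execute("insert", 1, "a")
primary.commit()
current_server = reroute(primary, standby)  # clients now talk to the standby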