Excessive blocking caused by isolation can adversely affect the scalability of a database application. A transaction
may inadvertently block other transactions for a long period of time, thereby hurting database concurrency. Since
SQL Server manages isolation using locks, it is important to understand the locking architecture of SQL Server. This
helps you analyze a blocking scenario and implement resolutions.
Note The fundamentals of database locks are explained later in the chapter in the "Capturing Blocking Information" section.
Durability
Once a transaction is completed, the changes made by the transaction should be durable. Even if power to the machine is lost immediately after the transaction completes, the effect of all actions within the
transaction should be retained. SQL Server ensures durability by keeping track of all pre- and post-images of the data
under modification in a transaction log as the changes are made. Immediately after the completion of a transaction,
SQL Server ensures that all the changes made by the transaction are retained—even if SQL Server, the operating
system, or the hardware fails (excluding the log disk). During restart, SQL Server runs its database recovery feature,
which identifies the pending changes from the transaction log for completed transactions and applies them to the
database resources. This database feature is called roll forward.
The recovery interval period depends on the number of pending changes that need to be applied to the database
resources during restart. To reduce this period, SQL Server intermittently writes the intermediate changes made by
running transactions to disk, as configured by the recovery interval option. The recovery interval option
can be configured using the sp_configure statement. The process of intermittently applying the intermediate changes
is referred to as the checkpoint process. During restart, the recovery process also identifies all uncommitted changes and
removes them from the database resources by using the pre-images of the data from the transaction log; this is called roll back.
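As a sketch, the recovery interval option can be adjusted as follows (the value shown, 5 minutes, is purely illustrative; the default of 0 lets SQL Server manage checkpoint frequency automatically):

```sql
-- recovery interval is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Set the target recovery interval to 5 minutes (0 = automatic, the default)
EXEC sp_configure 'recovery interval', 5;
RECONFIGURE;

-- A checkpoint can also be issued manually for the current database
CHECKPOINT;
```

In most systems the default of 0 is appropriate; raising the value trades a longer restart recovery for less frequent checkpoint I/O.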
The durability property isn't a direct cause of most blocking since it doesn't require the actions of a transaction to
be isolated from those of others. But in an indirect way, it increases the duration of the blocking. Since the durability
property requires saving the pre- and post-images of the data under modification to the transaction log on disk, it
increases the duration of the transaction and blocking.
SQL Server 2014 introduced the ability to reduce latency, the time spent waiting for a query to commit and write to
the log, by modifying the durability behavior of a given database. You can now use delayed durability. This means that
when a transaction completes, it immediately reports success to the application, reducing latency, even though
the writes to the log have not yet occurred. This may also allow more transactions to complete while the system is still
waiting to write all the output to the transaction log. While this can increase apparent speed within the
system, as well as possibly reduce contention on transaction log I/O, it's inherently a dangerous choice, and a
difficult recommendation to make. Microsoft suggests three possible situations that may make it attractive:
•	You don't care about the possible loss of some data: Since you can end up in a situation where you need to restore to a point in time from log backups, choosing to put a database in delayed durability means you may lose some data when a restore becomes necessary.
•	You have a high degree of contention during log writes: If you're seeing a lot of waits while transactions are written to the log, delayed durability could be a viable solution. But you'll also need to be tolerant of data loss, as discussed earlier.
•	You're experiencing high overall resource contention: A lot of resource contention on the server comes down to locks being held longer. If you're seeing lots of contention along with long log writes or contention on the log, and you have a high tolerance for data loss, this may be a viable way to reduce the system's contention.
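Delayed durability can be sketched in T-SQL along these lines (the database and table names are illustrative; DISABLED is the default, ALLOWED lets individual commits opt in, and FORCED delays every commit):

```sql
-- Allow individual transactions in this database to opt in to delayed durability
ALTER DATABASE AdventureWorks2014 SET DELAYED_DURABILITY = ALLOWED;

-- Opt a single transaction in at commit time
BEGIN TRANSACTION;
UPDATE Person.Person
SET    LastName = 'Smith'
WHERE  BusinessEntityID = 1;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

-- Or force delayed durability for every transaction in the database
ALTER DATABASE AdventureWorks2014 SET DELAYED_DURABILITY = FORCED;
```

With ALLOWED, only transactions that explicitly commit WITH (DELAYED_DURABILITY = ON) are affected, which limits the data-loss exposure to the transactions you choose.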