So, the bottom line is, only use deferrable constraints where you have an identified need to use them. They
introduce subtle side effects that could cause differences in your physical implementation (non-unique vs. unique
indexes) or in your query plans—as just demonstrated!
Bad Transaction Habits
Many developers have some bad habits when it comes to transactions. I see this frequently with developers who have
worked with a database that “supports” but does not “promote” the use of transactions. For example, in Informix (by
default), Sybase, and SQL Server, you must explicitly BEGIN a transaction; otherwise, each individual statement is a
transaction all by itself. Much as Oracle wraps a SAVEPOINT around each discrete statement, these databases
wrap a BEGIN WORK / COMMIT or ROLLBACK around each statement. This is because, in these databases,
locks are precious resources, and readers block writers and vice versa. In an attempt to increase concurrency, these
databases want you to make the transaction as short as possible—sometimes at the expense of data integrity.
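For contrast, SQL*Plus can be made to mimic that statement-at-a-time model on the client side (a sketch only; the ACCOUNTS table and values are invented for illustration, and client-side autocommit is covered in the "Using Autocommit" section later in this chapter):

set autocommit on

update accounts set balance = balance - 100 where acct_id = 123;
-- committed immediately by the client; a later ROLLBACK cannot undo it

update accounts set balance = balance + 100 where acct_id = 456;
-- also committed on its own; if this statement fails, the first change stands

With SET AUTOCOMMIT ON, SQL*Plus issues a COMMIT after every successful DML statement, which is roughly what Informix, Sybase, and SQL Server do for you by default on the server side.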
Oracle takes the opposite approach. Transactions are always implicit, and there is no way to have an
“autocommit” unless an application implements it (see the “Using Autocommit” section later in this chapter for more
details). In Oracle, every transaction should be committed when it must and never before. Transactions should be as
large as they need to be. Issues such as locks, blocking, and so on should not really be considered the driving forces
behind transaction size—data integrity is the driving force behind the size of your transaction. Locks are not a scarce
resource, and there are no contention issues between concurrent readers and writers of data. This allows you to have
robust transactions in the database. These transactions do not have to be short in duration—they should be exactly
as long as they need to be (but no longer). Transactions are not for the convenience of the computer and its software;
they are to protect your data.
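To make that concrete, consider a transfer between two accounts (the ACCOUNTS table, column names, and amounts below are invented for illustration, not taken from this chapter). Both updates are one logical unit of work, so they are one transaction:

update accounts set balance = balance - 100 where acct_id = 123;
update accounts set balance = balance + 100 where acct_id = 456;

-- Only when the business rule is satisfied (money moved, none lost) do we
-- commit; until then, a failure or an explicit ROLLBACK undoes both updates.
commit;

The transaction is exactly as large as the business rule requires: committing after the first update, for the computer's convenience, would leave the data inconsistent if the second update failed.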
Committing in a Loop
Faced with the task of updating many rows, most programmers will try to figure out some procedural way to do it in a
loop, so that they can commit every so many rows. I've heard two (false!) reasons for doing it this way:
•	It is faster and more efficient to frequently commit lots of small transactions than it is to process and commit one big transaction.
•	We don't have enough undo space.
Both of these reasons are misguided. Furthermore, committing too frequently leaves you prone to the danger
of leaving your database in an “unknown” state should your update fail halfway through. It requires complex logic to
write a process that is smoothly restartable in the event of failure. By far the best option is to commit only as frequently
as your business processes dictate and to size your undo segments accordingly.
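The commit-every-N-rows approach being discouraged here looks something like the following (a sketch against the table T created in the next section; the batch size of 1,000 is arbitrary):

declare
    l_cnt number := 0;
begin
    for x in ( select rowid rid from t )
    loop
        update t set object_name = lower(object_name)
         where rowid = x.rid;
        l_cnt := l_cnt + 1;
        if mod( l_cnt, 1000 ) = 0
        then
            commit;   -- ends the transaction mid-process
        end if;
    end loop;
    commit;
end;
/

If this block fails partway through, some rows are updated and committed while others are not, and restarting it cleanly requires extra bookkeeping; committing across the open cursor can even expose the job to an ORA-01555 "snapshot too old" error. The same work is simply UPDATE T SET OBJECT_NAME = LOWER(OBJECT_NAME) followed by a single COMMIT: one transaction that either completes or leaves nothing half-done.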
Let's take a look at these issues in more detail.
Performance Implications
It is generally not faster to commit frequently—it is almost always faster to do the work in a single SQL statement. By
way of a small example, say we have a table, T, with lots of rows, and we want to update a column value for every row
in that table. We'll set up such a table as follows (run these four setup steps before each of the following three cases):
EODA@ORA12CR1> drop table t;
Table dropped.
EODA@ORA12CR1> create table t as select * from all_objects;
Table created.
 