Now we'll run this code repeatedly with different inputs and review the resulting TKPROF files. We'll run
100,000 row inserts, committing 1 row at a time, then every 10 rows, and so on. The resulting TKPROF files produced
the results shown in Table 9-1.
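A minimal sketch of the kind of harness that would produce these numbers follows; the scratch table T and the
variable l_commit_every (changed between runs) are our own names, not taken from the original code:

create table t ( x number, y varchar2(80) );

declare
    l_commit_every  number := 1;   -- vary per run: 1, 10, 100, 1000, 10000, 100000
begin
    for i in 1 .. 100000
    loop
        insert into t ( x, y ) values ( i, rpad( '*', 80, '*' ) );
        -- commit every N rows
        if ( mod( i, l_commit_every ) = 0 )
        then
            commit;
        end if;
    end loop;
    commit;   -- pick up any rows left uncommitted at the end of the loop
end;
/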
Table 9-1. Results from Inserting 100,000 Rows

Number of Rows   Commit Every   CPU for Insert        Wait Time for Log
to Insert        N Rows, N=     Statement (Seconds)   File Sync (Seconds)
---------------  -------------  --------------------  --------------------
100,000          1              2.30                  31.17
100,000          10             2.16                   3.48
100,000          100            2.20                   0.62
100,000          1,000          2.02                   0.08
100,000          10,000         1.46                   0.02
100,000          100,000        2.01                   0.00
As you can see, the more often you commit, the longer you wait (your mileage will vary on this). And the amount of
time you wait is more or less directly proportional to the number of times you commit. Remember, this is just a single-
user scenario; with multiple users doing the same work, all committing too frequently, the numbers will go up rapidly.
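As a rough back-of-the-envelope figure (our arithmetic, derived only from Table 9-1): 31.17 seconds of log file
sync wait spread over 100,000 commits works out to roughly 31.17 / 100,000 ≈ 0.0003 seconds, or about 0.3
milliseconds of wait per commit. Each individual wait is tiny; it is the sheer number of them that adds up to
half a minute.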
We've heard the same story, time and time again, in similar situations. For example, we've seen how not using
bind variables and performing hard parses often severely reduces concurrency due to library cache contention and
excessive CPU utilization. Even when we switch to using bind variables, soft parsing too frequently, caused by closing
cursors even though we are going to reuse them shortly, incurs massive overhead. We must perform operations only
when we need to; a COMMIT is just another such operation. It is best to size our transactions based on business need,
not based on misguided attempts to lessen resource usage on the database.
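To make the bind variable point concrete, here is a small hypothetical illustration (the table T and the loop
bounds are ours, not from the text). The first statement builds a distinct SQL text for every value and so forces
a hard parse each time; the second uses a bind variable, so a single shared statement is parsed once and reused:

declare
    l_cnt  number;
begin
    for i in 1 .. 1000
    loop
        -- literal in the SQL text: a new, never-before-seen statement each iteration => hard parse
        execute immediate
            'select count(*) from t where x = ' || i into l_cnt;

        -- bind variable: one statement text shared by all 1,000 executions
        execute immediate
            'select count(*) from t where x = :x' into l_cnt using i;
    end loop;
end;
/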
There are two factors contributing to the expense of the COMMIT in this example:
We've obviously increased the round-trips to and from the database. If we commit every
record, we are generating that much more network traffic back and forth. We didn't even
measure that here, and it would add to the overall runtime.
Every time we commit, we must wait for our redo to be written to disk. This will result in a
“wait.” In this case, the wait is named “log file sync.”
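For reference, one way to capture these waits (our sketch, not necessarily the exact steps used to produce
Table 9-1) is to enable SQL trace with wait events before running the test, then format the trace file with
TKPROF; the trace file and report names below are placeholders:

alter session set timed_statistics = true;

-- turn on SQL trace for this session, including wait events such as log file sync
exec dbms_monitor.session_trace_enable( waits => true, binds => false )

-- ... run the insert/commit test here ...

exec dbms_monitor.session_trace_disable

Then, at the operating system prompt:

$ tkprof your_trace_file.trc report.txt sys=no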
So, when we committed after every INSERT, we waited a short period of time on every commit, and if you wait a little
bit but you wait often, it all adds up. Fully thirty seconds of our runtime was spent waiting for a COMMIT to
complete when we committed 100,000 times; in other words, waiting for LGWR to write the redo to disk. In stark
contrast, when we committed once, we didn't wait very long (not a measurable amount of time, actually). This
demonstrates that a COMMIT is a fast operation; we expect its response time to be more or less flat, not a function
of the amount of work we've done.
So, why is a COMMIT's response time fairly flat, regardless of the transaction size? It is because before we even go to
COMMIT in the database, we've already done the really hard work. We've already modified the data in the database, so
we've already done 99.9 percent of the work. For example, operations such as the following have already taken place:
Undo blocks have been generated in the SGA.
Modified data blocks have been generated in the SGA.
Buffered redo for the preceding two items has been generated in the SGA.
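One hypothetical way to see this for yourself (the query is ours, not from the text) is to watch the session
statistic "redo size" in V$MYSTAT. Run the query below before the inserts, again after the inserts but before the
COMMIT, and once more after the COMMIT: nearly all of the growth happens before the COMMIT is ever issued.

-- current session's cumulative redo generation, in bytes
select s.value redo_size
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';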