The problem with such designs is that they are (to say the least) suboptimal from a performance point of view.
Flexibility and performance trade off against each other: when one is at its maximum, the other is at its minimum. In some situations
suboptimal performance might be good enough. In other situations it might be catastrophic. Hence, you should
use a flexible design only when the required performance can be achieved with it.
Not Using Constraints to Enforce Data Integrity
Constraints (primary keys, unique keys, foreign keys, NOT NULL constraints, and check constraints) are not only
fundamental to guarantee data integrity, but they are also extensively used by the query optimizer during the
generation of execution plans. Without constraints, the query optimizer is not able to take advantage of a number
of optimizations techniques. In addition, checking the constraints at the application level leads to more code
being written and tested as well as to potential problems with data integrity, because data can always be manually
modified at the database level. Also, checking constraints at the application level usually consumes more
resources and leads to less scalable locking schemes (such as locking an entire table instead of letting the database
lock only a subset of rows). Therefore, I strongly advise application developers to define all known constraints at the
database level.
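As a sketch of what declaring constraints at the database level looks like, consider the following hypothetical parent/child tables (the table and constraint names are illustrative, not from the text):

```sql
-- All integrity rules are declared in the database, so they cannot be
-- bypassed by manual data modifications, and the query optimizer can
-- rely on them when generating execution plans.
CREATE TABLE departments (
  dept_id   NUMBER        CONSTRAINT departments_pk PRIMARY KEY,
  dept_name VARCHAR2(30)  NOT NULL,
  CONSTRAINT departments_name_uq UNIQUE (dept_name)
);

CREATE TABLE employees (
  emp_id  NUMBER       CONSTRAINT employees_pk PRIMARY KEY,
  dept_id NUMBER       NOT NULL
                       CONSTRAINT employees_dept_fk REFERENCES departments,
  salary  NUMBER(8,2)  CONSTRAINT employees_salary_ck CHECK (salary > 0)
);
```

With this design, an attempt to insert an employee pointing to a nonexistent department, or with a nonpositive salary, is rejected by the database engine itself, regardless of which application issued the statement.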
Lack of Physical Database Design
It is not uncommon to see projects where the logical design is directly mapped to the physical design without taking
advantage of all the features provided by Oracle Database. The most common and obvious example is that every
relation is directly mapped to a heap table. From a performance point of view, such an approach is suboptimal. In
more than a few situations, index-organized tables (IOT), indexed clusters, or hash clusters might lead to better
performance.
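For instance, a detail table that is almost always accessed through its primary key can be created as an index-organized table instead of a heap table. The following is a minimal sketch (table and column names are hypothetical):

```sql
-- ORGANIZATION INDEX stores the rows directly in the primary key index,
-- so a primary-key lookup needs no extra access to a heap segment.
CREATE TABLE order_items (
  order_id NUMBER,
  item_no  NUMBER,
  qty      NUMBER,
  CONSTRAINT order_items_pk PRIMARY KEY (order_id, item_no)
) ORGANIZATION INDEX;
```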
Oracle Database provides much more than just the usual B-tree and bitmap indexes. Depending on the situation,
compressed indexes, reverse-key indexes, function-based indexes, linguistic indexes, or text indexes might be very
valuable for improving performance.
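Two of these index types can be sketched as follows, assuming a hypothetical employees table with dept_id, job_id, and last_name columns:

```sql
-- Compressed index: the repeated leading column values are stored only
-- once per index block, reducing the index size.
CREATE INDEX emp_dept_job_i ON employees (dept_id, job_id) COMPRESS 1;

-- Function-based index: indexes the result of an expression, so a
-- case-insensitive search on last_name can use the index.
CREATE INDEX emp_upper_name_i ON employees (UPPER(last_name));
```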
For large databases, the implementation of the partitioning option is critical. Most DBAs recognize the option
and its usefulness. A common problem in this area is that developers assume that partitioning tables has no impact
on the physical database design. Sometimes that is true, but often it is not. As a result, I strongly
recommend planning the use of partitioning from the beginning of a project.
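As a minimal sketch of what such planning produces, a table holding time-based data might be range partitioned by date from the start (names and boundaries here are illustrative):

```sql
-- Each partition holds one year of data; old partitions can later be
-- dropped or exchanged, which also simplifies data archiving.
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE NOT NULL,
  amount    NUMBER(10,2)
)
PARTITION BY RANGE (sale_date) (
  PARTITION sales_2013 VALUES LESS THAN (DATE '2014-01-01'),
  PARTITION sales_2014 VALUES LESS THAN (DATE '2015-01-01')
);
```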
Another common issue to deal with during the development of a new application is the definition, and
implementation, of a sound data-archiving concept. Postponing it is usually not an option, because it might impact
the physical database design (if not the logical database design).
Not Choosing the Right Data Type
In recent years, I have witnessed a disturbing trend in physical database design. This trend may be called wrong
datatype selection (such as storing dates in VARCHAR2 instead of using DATE or TIMESTAMP). At first glance, choosing
the datatype seems to be a very straightforward decision to make. Nevertheless, do not underestimate the number of
systems that are now running and suffering because wrong datatypes were selected.
There are four main problems related to wrong datatype selection:
Wrong or lacking validation of data: The database engine must be able to validate data
that is stored in the database. For example, you should avoid storing numeric values in
character datatypes. Doing so calls for an external validation, which leads to problems
similar to those described in the section “Not Using Constraints to Enforce Data Integrity.”
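The validation problem can be sketched with two hypothetical tables, one storing a date in a character column and one using the proper datatype:

```sql
-- Suboptimal: any string fits, so invalid "dates" are silently accepted
CREATE TABLE t_bad (start_date VARCHAR2(10));
INSERT INTO t_bad VALUES ('2014-02-30');        -- succeeds, yet is no real date

-- Better: with DATE the database engine itself validates the value
CREATE TABLE t_good (start_date DATE);
INSERT INTO t_good VALUES (DATE '2014-02-30');  -- rejected (invalid day of month)
```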