The previous two examples illustrate the pitfalls in decomposing a table scheme into
smaller schemes. If a decomposition does not cause any information to be lost, it is called
a lossless decomposition. A decomposition that does not cause any dependencies to be
lost is called a dependency-preserving decomposition.
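As a small, self-contained illustration of the first idea (using hypothetical data, not the tables from the earlier examples), the Python sketch below projects a three-column scheme onto two smaller schemes and rejoins them; the rejoined table contains spurious rows, so the decomposition is lossy.

```python
# A minimal sketch (hypothetical data) of a lossy decomposition:
# project Supplies(supplier, part, project) onto (supplier, part) and
# (part, project), then rejoin the pieces on the shared column.

original = {
    ("s1", "p1", "j1"),
    ("s2", "p1", "j2"),
}

supplier_part = {(s, p) for (s, p, j) in original}   # projection 1
part_project = {(p, j) for (s, p, j) in original}    # projection 2

# Natural join of the two projections on `part`.
rejoined = {
    (s, p, j)
    for (s, p) in supplier_part
    for (p2, j) in part_project
    if p == p2
}

print(rejoined == original)   # False: the decomposition lost information
print(rejoined - original)    # two spurious rows, e.g. ('s1', 'p1', 'j2')
```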
Now it is possible to show that any table scheme can be decomposed, in a lossless way,
into a collection of smaller schemes that are in the very nice BCNF form. However, we
cannot guarantee that the decomposition will preserve dependencies. On the other hand,
any table scheme can be decomposed—in a lossless way that also preserves
dependencies—into a collection of smaller schemes that are in the almost-as-nice third
normal form.
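To see why the BCNF guarantee can fail to preserve dependencies, consider the standard street/city/zip illustration (the data below is hypothetical). With the dependencies {street, city} → zip and zip → city, the only BCNF decomposition splits on zip, and neither resulting table can enforce {street, city} → zip by itself:

```python
# A sketch of the classic street/city/zip example (hypothetical data).
# FDs: {street, city} -> zip and zip -> city. Splitting on zip is a
# lossless BCNF decomposition, but the FD {street, city} -> zip now
# spans both pieces and cannot be checked in either one alone.

zip_city = {("02139", "Cambridge"), ("02138", "Cambridge")}   # zip -> city holds
zip_street = {("02139", "Main St"), ("02138", "Main St")}     # key: (zip, street)

# Natural join of the two pieces on `zip`.
rejoined = {
    (street, city, z)
    for (z, city) in zip_city
    for (z2, street) in zip_street
    if z == z2
}

# The join has two rows with the same (street, city) but different zips,
# so {street, city} -> zip is violated even though each table looks fine.
zips_seen = {}
for street, city, z in rejoined:
    zips_seen.setdefault((street, city), set()).add(z)
print({k: v for k, v in zips_seen.items() if len(v) > 1})
# {('Main St', 'Cambridge'): {'02138', '02139'}}
```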
However, before you get too excited, I must hasten to add that the algorithms given do not always produce desirable results. They can, in fact, create decompositions that are less natural than those we might arrive at on our own. Nevertheless, they can be relied upon to produce the required decomposition when we cannot find one ourselves.
I should conclude by saying that there is no law requiring a database to be more useful or efficient just because its tables have a high degree of normalization. These questions are more subjective than objective and must be dealt with, as a matter of design, on an ad hoc basis. In fact, it appears that the best procedure for good database design is to mix eight parts intuition and experience with two parts theory. Hopefully, this discussion of normalization has given you a general feel for the issues involved and will provide a good jumping-off place if you decide to study these somewhat complicated issues in greater depth. (See Appendix E for some books for further study.)