noted the problems of update, deletion, and addition anomalies. So what is the next
step? Do you simply abandon the initial data model and look for other methods?
Your goal is still to create a good relational model, even when you build it
directly from information requirements.
It turns out that by adopting a systematic methodology you can, indeed,
regularize the initial data model created by the first attempt. This methodology is
based on Dr. Codd's approach to normalizing the initial tables created in a random
manner directly from information requirements. Before getting into the actual
methodology of normalization, let us consider its merits and note how the
methodology is used.
Purpose and Merits
Normalization methodology resolves the three types of anomalies encountered
when data manipulation operations are performed on a database based on an
improper relational data model. Therefore, after applying the principles of
normalization to the initial data model, the three types of anomalies will be eliminated.
The normalization process standardizes or “normalizes” the table structures. You
come up with revised table structures. It is a systematic, step-by-step methodology.
Normalization:
- Creates well-structured relations
- Removes data redundancies
- Ensures that the initial data model is properly transformed into a relational data model conforming to relational rules
- Guarantees that data manipulation will not result in anomalies or other problems
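To make the anomalies concrete, consider a minimal sketch. The flat ORDERS table below, with its column names and values, is hypothetical (not from the text); it mixes customer facts with order facts, so each customer's data repeats once per order:

```python
# Hypothetical unnormalized ORDERS table: customer data repeats per order.
orders = [
    {"order_no": 1001, "customer": "Acme",   "customer_phone": "555-0100", "item": "bolts"},
    {"order_no": 1002, "customer": "Acme",   "customer_phone": "555-0100", "item": "nuts"},
    {"order_no": 1003, "customer": "Zenith", "customer_phone": "555-0199", "item": "gears"},
]

# Update anomaly: changing Acme's phone in only one row leaves the table
# internally inconsistent -- two different phone numbers for one customer.
orders[0]["customer_phone"] = "555-0111"
phones_for_acme = {r["customer_phone"] for r in orders if r["customer"] == "Acme"}
print(len(phones_for_acme))  # 2 -> inconsistent
```

The same redundancy produces the other two anomalies: deleting Zenith's only order destroys Zenith's phone number, and a new customer cannot be recorded until it places an order.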
How to Apply This Method
As mentioned above, normalization is a step-by-step process; it is not
completed in one large task. Instead, it breaks the problem down and applies
remedies one at a time, refining and standardizing the initial data model in a
clear and systematic manner.
At each step, the methodology consists of examining the data model, removing
one type of problem, and changing it to a better normal form. You take the initial
data model created directly from information requirements in a random fashion.
This initial model, at best, consists of two-dimensional tables representing the entire
data content, nothing more or less. As we have seen, such an initial data model is
subject to data manipulation problems.
In the first step, you apply its principles by examining the initial data
model for a single type of nonconformance and removing that one type of
irregularity. Once it is resolved, the data model improves and is rendered into
first normal form. In the second step, you look for another type of
irregularity and remove it from the data model produced by the first step.
After this step, your data model improves further and reaches second normal
form.
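The two steps just described can be sketched in miniature. The table, keys, and values below are hypothetical illustrations, not the book's example: step one removes a repeating group to reach first normal form, and step two removes a partial key dependency to reach second normal form.

```python
# Unnormalized: each order row carries a repeating group of items.
raw = [
    {"order_no": 1001, "customer": "Acme",   "items": ["bolts", "nuts"]},
    {"order_no": 1002, "customer": "Zenith", "items": ["gears"]},
]

# Step 1 -> first normal form: eliminate the repeating group,
# producing one row per item. The key is now (order_no, item).
first_nf = [
    {"order_no": r["order_no"], "customer": r["customer"], "item": i}
    for r in raw
    for i in r["items"]
]

# Step 2 -> second normal form: 'customer' depends on order_no alone,
# only part of the key, so it moves into a separate ORDERS table.
orders = {r["order_no"]: {"customer": r["customer"]} for r in first_nf}
order_items = [{"order_no": r["order_no"], "item": r["item"]} for r in first_nf]

print(len(first_nf), len(orders), len(order_items))  # 3 2 3
```

Each step removes exactly one kind of irregularity, which is why the methodology is applied one normal form at a time rather than all at once.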