Consequently, in order to improve the quality of data, it is imperative to have at your disposal a rich data model that takes into account all the links between the data and is independent of the artificial boundaries stemming from functional and technical silos. This is a semantic model, meaningful to business users as well as to IT experts.
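To make the idea of a semantic model concrete, the sketch below declares a few business entities and their associations explicitly, independently of the silos that store them. The entity names (Customer, Contract, Address) and their attributes are hypothetical illustrations, not taken from any particular system.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical semantic model: associations between business entities are
# declared once, in business vocabulary, independently of any storage silo.

@dataclass
class Address:
    street: str
    city: str
    postal_code: str

@dataclass
class Contract:
    contract_id: str
    product: str

@dataclass
class Customer:
    customer_id: str
    name: str
    address: Address                                          # explicit link, not a copied field
    contracts: List[Contract] = field(default_factory=list)  # explicit link to contracts

# Because the links are explicit, a rule such as "every contract is attached to
# a customer with a complete address" can be expressed once at the model level
# instead of being re-implemented (differently) in each silo.
alice = Customer(
    customer_id="C-001",
    name="Alice Martin",
    address=Address("1 rue de la Paix", "Paris", "75002"),
    contracts=[Contract("K-42", "home insurance")],
)
print(len(alice.contracts))  # 1
```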
Data quality improves as soon as a company produces a semantic data model, even before any action is taken to clean up the data. In other words, using data quality tools on top of poor data models, with little knowledge of the associations between data, is not a satisfactory solution.
The structural problems of data quality remain ever present: ambiguity about the meaning of data, errors when data is exchanged between silos and with third parties, different values for the same piece of data duplicated in several databases, inconsistent data validation rules from one system to another, and so on.
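The following minimal sketch illustrates two of these structural problems, assuming two hypothetical silo databases (billing and CRM) that each hold their own copy of the same customer: the duplicated values diverge, and the validation rules differ, so each silo considers its own copy valid.

```python
import re

# Hypothetical silos, each with its own copy of the same customer record.
billing_db = {"C-001": {"name": "Alice Martin", "phone": "+33 1 23 45 67 89"}}
crm_db     = {"C-001": {"name": "A. Martin",    "phone": "0123456789"}}

# Inconsistent validation rules: each silo only accepts its own phone format.
def billing_phone_valid(phone: str) -> bool:
    return bool(re.fullmatch(r"\+33( \d{1,2}){5}", phone))

def crm_phone_valid(phone: str) -> bool:
    return bool(re.fullmatch(r"0\d{9}", phone))

# Each silo sees its own data as valid ...
assert billing_phone_valid(billing_db["C-001"]["phone"])
assert crm_phone_valid(crm_db["C-001"]["phone"])

# ... yet the same piece of data carries different values in the two databases.
print(billing_db["C-001"] == crm_db["C-001"])  # False: duplicated, divergent data
```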
1.4.3. The level of maturity of data quality
Today, data quality is handled only at the level of data values, which corresponds to the first level of maturity, one that can be qualified as “basic quality”. This level is not concerned with the structure of the existing databases: cleansing tools are bolted onto what is already in place in an attempt to correct the quality defects that the silos keep producing. This is a tactical approach, one that merely compensates for the duplication of data.
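As a sketch of what this tactical, value-level approach does and does not fix, the example below (again with hypothetical billing and CRM silos) normalizes phone numbers inside each silo; the formats become consistent, but the customer remains duplicated and the two copies still disagree.

```python
# A value-level ("basic quality") cleansing pass: it normalizes each phone
# number to a canonical national format, record by record, but it never
# reconciles the duplicated customer across the two silos.

def normalize_phone(phone: str) -> str:
    digits = "".join(ch for ch in phone if ch.isdigit())
    if digits.startswith("33"):          # strip the country prefix if present
        digits = "0" + digits[2:]
    return digits

billing_db = {"C-001": {"name": "Alice Martin", "phone": "+33 1 23 45 67 89"}}
crm_db     = {"C-001": {"name": "A. Martin",    "phone": "01 23 45 67 89"}}

for db in (billing_db, crm_db):
    for record in db.values():
        record["phone"] = normalize_phone(record["phone"])

# The phone values now agree, but the structural defect is untouched: the
# customer is still duplicated and the two copies still disagree on the name.
print(billing_db["C-001"]["phone"] == crm_db["C-001"]["phone"])  # True
print(billing_db["C-001"] == crm_db["C-001"])                    # False
```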
This approach does not take into account the new forces at play in IT systems, namely “time management”, “context” and “versioning” (see section 1.4.1). On its own, this level is insufficient to make data quality sustainable, because processes continue to update databases of mediocre quality.