into Essbase so those cool calculations and amazing we-never-even-considered-doing-it-in-Essbase analyses can occur.
2.3.3 Unique, Just Like You
Your Essbase database has one-of-a-kind data quality issues given your unique business environment. Given that Essbase Load Rules are not the place to fix them, the approaches and techniques used to guarantee that only good fact data and metadata are loaded to Essbase depend upon your database's requirements. This chapter cannot specifically address those circumstances because they are unknown to the author.
What this chapter can and will cover is the single most common data quality issue:
unloaded data. By this I mean data that exists in a fact source and should be loaded to
Essbase, but is instead rejected during the load because Essbase dimension metadata is
missing. Loading all of the data, every time, is the bedrock requirement for data valida-
tion and quality.
2.3.4 Constraints on Everything
Does Essbase know that there is data in a data source for which there is no home in
the target database? Surprisingly, yes. Essbase can figure this out, but only on data load
when it rejects records with unknown members and pipes them to dataload.err. At that
point it is too late; the data load process is finished, but incomplete and, hence, incor-
rect. There is no universal or foolproof way to undo that incorrect data load within
Essbase.
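For context, this rejection behavior is visible in the MaxL import statement itself. A minimal sketch, assuming a hypothetical sample.basic database and hypothetical server-side data and rules files:

/* Hypothetical names throughout. Records that reference unknown
   members are rejected and written to dataload.err; the rest of
   the load completes, leaving the database incomplete. */
import database sample.basic data
    from server text data_file 'factdata.txt'
    using server rules_file 'ldfact'
    on error write to 'dataload.err';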
BSO (block storage option) calc scripts and the ASO (aggregate storage option) MaxL "clear data in region" command can clear selected data, but those techniques presume that you know where the bad data resides unless you are clearing all of the data. You could load dataload.err back to Essbase, but what happens if you have more errors than dataload.err traps? Both of these mitigation approaches, and others that let incomplete data get loaded in the first place, are imperfect at best and thus will most likely still leave you with bad data.
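To make the limitation concrete, here is a hedged sketch of the ASO clear mentioned above; the database name and the MDX region are hypothetical, and the command only helps if you already know which region holds the bad data:

/* Hypothetical names and region. The 'physical' keyword removes
   the cells outright rather than writing offsetting values. */
alter database asosamp.sample clear data in region
    'CrossJoin({[Jan]}, {[Budget]})' physical;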
The answer to the above problem of not knowing what the bad data is and where it resides is to have a process that compares the metadata in Essbase with the data in the data source before the data load starts. I think of it as checking that there is a home for every piece of data and, if not, creating one before the load. Alas, we know that Essbase cannot do this.
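Outside of Essbase, this comparison is straightforward. A minimal SQL sketch, assuming hypothetical staging tables FACT and PRODUCT_DIM that mirror the fact source and the level-0 members of the Essbase Product dimension:

-- Hypothetical tables, not from the chapter. Any row returned is
-- fact data with no home in the target Essbase database.
SELECT DISTINCT f.PRODUCT
FROM FACT f
LEFT OUTER JOIN PRODUCT_DIM d
    ON f.PRODUCT = d.PRODUCT
WHERE d.PRODUCT IS NULL;

An empty result set means every fact record has a home; anything else must be resolved, either by adding the member to Essbase or by correcting the fact data, before the load begins.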
2.3.5 Good Data Is Spelled E-T-L
What is needed is a tool that is more robust than Essbase Load Rules. This tool must be able to source any data, be it relational, file-based, or otherwise. It must have transformation functionality powerful enough to apply Essbase-conforming logic. Finally, it must be able to load that data to Essbase and validate against Essbase.
To guarantee that this data is good before it is loaded into Essbase, referential integrity between the fact data and the Essbase metadata must be imposed. Referential integrity is a relational database concept that declares and enforces a dependency relationship between tables. An example of referential integrity can be illustrated with two relational objects: a Product dimension table and a fact table. The Product dimension table contains the full Product hierarchy with a primary key on level-0 Products. The fact table has a Product field that contains level-0 Products, with a foreign key assigned to the Product dimension lookup table. Once the relationship is defined, referential integrity guarantees that the fact table cannot contain a Product that does not exist in the Product dimension table.
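In SQL terms, the example above might be declared as follows; this is a minimal sketch with illustrative table and column names, not taken from the chapter:

-- Illustrative DDL: the dimension table keys level-0 Products,
-- and the fact table's foreign key enforces the dependency.
CREATE TABLE PRODUCT_DIM (
    PRODUCT VARCHAR(80) NOT NULL PRIMARY KEY,
    PARENT  VARCHAR(80)
);

CREATE TABLE FACT (
    PRODUCT VARCHAR(80) NOT NULL,
    AMOUNT  NUMERIC(18,2),
    CONSTRAINT FK_FACT_PRODUCT
        FOREIGN KEY (PRODUCT) REFERENCES PRODUCT_DIM (PRODUCT)
);

-- With the constraint in place, an INSERT into FACT that references
-- a Product missing from PRODUCT_DIM fails instead of loading silently.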