Once it is determined that querying the slices is too expensive, there are two options to resolve the issue (a MaxL sketch follows this list):
•  merge the incremental slices together into a single incremental slice; or
•  merge all of the slices with the existing database to recreate a single unit of data
•  Aggregations will be dropped and the merge process completed (no worries,
the system will issue a reminder if the aggregations were not dropped before the
process was started).
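To make the two options concrete, here is a minimal MaxL sketch rather than the production code from this use case; the application and database names (Retail.Sales) are placeholders:

    /* Option 1: combine all incremental slices into a single
       incremental slice; the main database slice is left as is */
    alter database Retail.Sales merge incremental data;

    /* Option 2: merge all slices, incremental and main, back into
       a single unit of data */
    alter database Retail.Sales merge all data;

Option 1 is typically the lighter operation, since the main slice is not rewritten; Option 2 recreates the single unit of data described above at the cost of a longer merge.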
The decision regarding which of these two options is the better choice depends on numerous factors, which might include the size of the main database and the current management process for the cube as a whole; many of the considerations are unique to each installation.
5.5.2.2 Use Case Example This use case is real and will discuss (and demonstrate where appropriate) each of the following tasks, providing production code for each (a short illustrative MaxL sketch follows the list):
•  Creating the main database
•  Creating a Slice
•  Updating dimensions
•  Merging Slices
•  Clearing Slices
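Before walking through those tasks, the following minimal MaxL sketch shows the basic slice mechanics. It is illustrative only, not the production code discussed below; the Retail.Sales database name, buffer id, data and rules file names, and the MDX region are all placeholders:

    /* Create a slice: stage incoming data in a load buffer, then
       commit the buffer contents as a new incremental slice */
    alter database Retail.Sales initialize load_buffer with buffer_id 1;

    import database Retail.Sales data
        from server data_file 'weekly_load.txt'
        using server rules_file 'ldweek'
        to load_buffer with buffer_id 1
        on error abort;

    import database Retail.Sales data
        from load_buffer with buffer_id 1
        add values create slice;

    /* Clear a slice: physically remove the cells in the region
       defined by the MDX set */
    alter database Retail.Sales clear data in region
        'CrossJoin({[2011]},{[Week 32]})' physical;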
Note: I felt it was incredibly important in this section to provide real production code, as we found a significant number of issues with code that we tried to use from various sources when we were first beginning to use Data Slices. I will say that we were on an early release of this feature, and the newer documentation (all sources) appears to be much more accurate and complete than the sources we were using.
This use case involves a set of five cubes used in retail for analysis (sorry in advance for having to be so generic in the descriptions; it would be a lot more interesting to be able to disclose the details). The requirements for this implementation had some unique characteristics that no previous implementation had:
1. Essbase was to be the database of record; data from 25 disparate sources would
be provided. The staging tables would hold only one week of data and then be cleared.
No other database in the corporate data warehouse would contain the complete
two years of history plus the one year being built.
2. The data would need to exist at a very granular level for one dimension; this
would create a dimension that went down six levels and had a million-plus
members (with potential for growth).
3. Weekly load volumes were estimated at 50 million records to start, growing
to approximately 300 million records when the implementation was fully
deployed.
4. Most data would be created in the staging area on the weekend, but additional
data loads to the staging area would occur on Monday and on Thursday. This
data would need to be loaded while the cube was in use, i.e., no downtime for the
load and aggregation process; the cubes would need to be up Monday through
Friday.