Partitioning, in particular, is an extremely powerful tool that can often help you achieve:
•  Smaller databases (which translates to faster databases in most cases).
•  Faster loads (with subject areas broken up, you can load more of them simultaneously).
•  Shared data between subject areas (the same database can be partitioned to multiple targets).
•  Improved scalability (databases can be split across processors, or even servers).
On the other hand, you will not want to use partitioning when:
•  There are identified network constraints in your organization; partitioning only worsens these, even if the partitions are on the same server.
•  Complex calculations exist that require knowledge of the total (at the top of the house); a workaround can be developed for this issue, but you will need to do something to assist Essbase.
•  Databases are not in the same Unicode mode or language; while I have never had this occur, I assume in a multinational corporation the circumstance certainly could exist.
A real-life example might be of interest. In my current workplace, I have a subject area for Retail Analytics. The users required a 19-dimension model that had 8 scenarios. The four largest dimensions have 8500, 1500, 750, and 450 members, with the remaining dimensions each having fewer than 50 members and 9 of those having fewer than 10 members each. Some of the data is reloaded weekly, and some of the data is updated nightly. ASO cubes were used to meet the requirements of the 19 dimensions. The resulting cube distribution is as follows:
•  Six ASO cubes hold actuals scenario data, with a custom cube to provide the top-of-the-house totals (this is my work-around cube, needed due to the weakness noted previously in handling top-of-the-house calculations).
•  Six ASO cubes hold other scenarios, one scenario per cube, to reflect varying refresh rates (I found it is easier to control refreshes and updates if they are segregated).
•  Four ASO cubes hold custom calculations, one metric per cube, due to their complexity and required refresh rates.
•  One BSO cube ties it all together; the other 16 cubes are partitioned to this one. This cube holds a minimal amount of data and performs as few calculations as necessary to fulfill all requirements (a sketch of such a partition definition follows this list).
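To make the wiring concrete, here is a minimal MaxL sketch of the kind of transparent partition definition that ties one ASO source cube to the central BSO reporting cube. The application and database names (RtlAct1.Actuals as one actuals source, RtlRpt.Report as the BSO target), the area specifications, and the credentials are hypothetical placeholders, not the actual objects from this system:

    /* Hypothetical names throughout; the BSO cube is the target the users query. */
    create or replace transparent partition RtlAct1.Actuals
        area '"Actual"'
    to RtlRpt.Report
        at localhost as admin identified by 'password'
        area '"Actual"';

Because the partition is transparent, the data stays in the ASO source and queries against the BSO target are passed through at retrieval time; that is also what makes it possible to drop one scenario's partition for repair without touching the rest.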
This was created after another group had tried unsuccessfully to build just a portion of this as a single cube. Performance was awful, and the refresh time was exceeding the weekend window. Nightly refreshes of required data were completely out of consideration. By simply breaking things up in logical places that could be easily partitioned back together for the user, the whole solution fell into place. If a scenario needs updates or repairs, it can be removed from the partition so that the rest of the solution remains usable while that segment is being addressed. When the system needs to be down for maintenance, it is simple to pull the one BSO cube out of the group provisioning so that no one can access the cubes while they are under maintenance.
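The provisioning change itself is made through the security layer (Shared Services, in most deployments), but a comparable lockout can also be sketched in MaxL by disabling connections to the application; the application name RtlRpt below is again a hypothetical placeholder:

    /* Block all non-administrator logins while maintenance runs. */
    alter application RtlRpt disable connects;

    /* ... perform maintenance on the cubes ... */

    /* Reopen the application to users. */
    alter application RtlRpt enable connects;

Disabling connects at the application level is coarser than pulling one group's provisioning, but it guarantees that no user sessions are active while the work is in flight.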
I am obviously a huge fan of partitioning. Contrary to popular belief, it does not slow