5. Dimensional updates would be on the weekends only (the cubes could be taken
offline at that time).
6. Essbase Classic Add-In templates would be used for reporting; no ad hoc reporting (thank you), and the SLA for report retrievals was less than 30 seconds.
After reviewing these requirements, frankly, I was scared and a little bit leery of
whether this could even be done with Essbase. Mind you, I was well past those early
skeptical days I discussed at the beginning of this chapter. I knew what could be done
with ASO cubes, but this exceeded anything I had ever attempted. I did not want to be
cutting edge. The project was too high profile to risk it not working. So, what were the
concerns? First, being the database of record was very concerning, especially since it
looked like it was going to be a very large database. Not having a corporate data warehouse
to fall back on to reload all of history, should something go wrong, was very
scary. Also, Essbase being the database of record places additional pressures and
responsibilities on me as the architect. For instance, it is the architect's job to design
a fail-proof backup plan so that in every instance the data is safe and can be easily
recovered. Another responsibility is to make sure there is a predetermined method for
extracting the data if any downstream systems later determine that they need it.
The second concern was the size of the largest dimension. I had never worked with a
million-member dimension in a cube; I had only read about them in case studies and
white papers. I could not find anyone among my peers using something this large in
real life. I had no idea what to expect, and no one to ask. The final concern was that,
like it or not, I was going to be on the cutting edge (no, the bleeding edge) of relatively new
technology. I was going to have to take a leap of faith and use these new Data Slices in
my design to make this all work. What about volume? No, I had already built cubes with
as much volume (or more), so the volume was not a worry. I knew Essbase could handle
that. To try to mitigate some of these concerns (and risks), I did two things before
I committed to this implementation.
First, I built a Proof of Concept (PoC) using a six-dimension model with 10 million
members in the largest dimension and 24 metrics. This was pretty close to what I understood
was needed. The model was loaded with two years of data. Existing elements
in the data warehouse were used to provide a realistic distribution within the ASO cube.
We felt this would make the queries (using default aggregations only) as close to the
final experience as possible. The results were simply astounding. The speed with
which we could query this cube blew our socks off. No one could quite believe it.
Completing this PoC confirmed two critical pieces of information:
1. The existing shared platform could handle a cube of this size (although I would
eventually order more memory and disk space).
2. There was a good chance the SLAs being requested could be met (barring any
dramatic change in the rollout, such as 10 dimensions instead of 6, or 200 metrics
instead of 24, in the final design).
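For readers who want a feel for the mechanics behind a PoC like this, a minimal MaxL sketch follows. It assumes a hypothetical ASO application and database named POC.Sales, a hypothetical load rule named ldPOC, and placeholder file paths; none of these are the actual objects from this project. The sketch simply loads the historical data files and builds the default aggregate views, which were the only aggregations in place when the PoC query times were tested.

   /* Hypothetical MaxL sketch: load PoC history and build default aggregations */
   login 'admin' identified by 'password' on 'localhost';

   /* Load two years of history from flat files through a (hypothetical) load rule */
   import database 'POC'.'Sales' data
       from data_file '/data/poc_history_yr1.txt'
       using server rules_file 'ldPOC'
       on error write to '/logs/poc_load_yr1.err';

   import database 'POC'.'Sales' data
       from data_file '/data/poc_history_yr2.txt'
       using server rules_file 'ldPOC'
       on error write to '/logs/poc_load_yr2.err';

   /* Build the default (recommended) aggregate views before running test queries */
   execute aggregate process on database 'POC'.'Sales';

   logout;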
The second thing I did to try to mitigate my risk was to call my Oracle partners.
I asked that they please find me at least one company that had a model this big running
in production. Oh, and did I mention I preferred the reference also be running Essbase
on AIX? I did not need to talk to the reference directly (there can be much political angst
when you make that kind of request); I just wanted an outline of their dimensionality
(the number of members in each dimension) and an assurance that they were running in production.