A query that needed to find the combination of me (Dan) and my brothers, Norman and Eric, and to compute our combined ages would (in the simplest technical implementation) have to make three passes through the stack of fact data cards: one to find Dan, another for Eric, and a third for Norman. Each time, the age would be grabbed and a running total calculated. Alternatively, a smarter sorting-needle would make one pass, checking each card to see if it was for Dan or Eric or Norman, and again totaling the age for each card found. Even better, if you knew that the three brothers were the only descendants of Abraham, then a single pass could be made.
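To make the single-pass approach concrete, here is a minimal Python sketch of the smarter sorting-needle; the cards, names, and ages below are invented for illustration and are not taken from any real cube:

    # One pass through the "box" of fact cards, keeping a running total
    # of the ages for the members we are looking for (hypothetical data).
    cards = [
        {"member": "Dan", "age": 52},
        {"member": "Eric", "age": 50},
        {"member": "Norman", "age": 48},
        {"member": "Abigail", "age": 45},
    ]
    wanted = {"Dan", "Eric", "Norman"}

    combined_age = 0
    for card in cards:                   # a single pass through the stack
        if card["member"] in wanted:     # does this card match any brother?
            combined_age += card["age"]  # grab the age, keep a running total

    print(combined_age)                  # 150 for the sample data above

The point is not the code itself but that one scan of the data, with a cheap membership test per card, replaces three separate passes.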
The computer version of these queries, of course, does not utilize a sorting-needle,
but uses the bitmap mask. It should be noted that this comparison of a set of bits to a
“mask” is something that computers have been designed to do extremely quickly. In
fact, there is often a machine-level hardware operation that allows a range of keys to be
evaluated by a mask and the data portion summed.
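As a rough illustration of what that mask comparison looks like, here is a Python sketch; the bit layout, keys, and values are hypothetical and do not reflect the actual ASO key format:

    # Each record is (key, data), where the key's bit fields encode the
    # record's dimension members (hypothetical layout for illustration).
    records = [
        (0b0001_0010, 52.0),
        (0b0001_0100, 50.0),
        (0b0010_0100, 48.0),
    ]
    MASK   = 0b0000_0110   # which bits of the key we care about
    TARGET = 0b0000_0100   # the pattern those bits must match

    # Keep only the records whose masked key matches, and sum their data.
    total = sum(data for key, data in records if key & MASK == TARGET)
    print(total)           # 98.0 for the sample records above

A single AND-and-compare per key is exactly the kind of operation a CPU executes in a handful of cycles, which is why the mask-based scan is so fast.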
7.3.3 The Essbase ASO Implementation of the Card Box: The Importance of Memory Space
Of course (returning to the analogy of the physical cards above), all of the cards must be in one “box” and the sorting-needle must be long enough to pass through the entire box. Otherwise the queries will be slowed down as each of the “boxes” in the database is picked up and the needle passed through. In case you have not guessed already, the size of the box is analogous to the amount of memory available. This brings us to the first Rule of ASO Designing for Performance:
R1: The input-level and aggregate data for all loaded ASO cubes should fit into memory (or it ain't really ASO).
In other words, you should have enough RAM available to accommodate the total size of the ASO .dat files for the cubes you plan to run. The .dat file size can also be found on the Statistics tab in the Database Properties dialog in EAS by adding “Input-level Data Size” and “Aggregate Data Size” (see Figure 7.5).
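As a hedged, worked illustration of checking R1 (the sizes below are hypothetical and simply stand in for the two statistics read from that dialog):

    # Check whether a cube's .dat footprint fits in the memory you can
    # dedicate to it (all figures below are made-up examples).
    input_level_data_gb = 2.0    # "Input-level Data Size" from EAS
    aggregate_data_gb   = 6.0    # "Aggregate Data Size" from EAS
    available_ram_gb    = 16.0   # RAM you can give this cube

    dat_size_gb = input_level_data_gb + aggregate_data_gb
    if dat_size_gb <= available_ram_gb:
        print("R1 satisfied: the cube's data can live in memory")
    else:
        print("R1 violated: queries will be reading from disk")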
Okay, at this point some of you who have already built ASO cubes are going to be quite surprised. You may have already built some large ASO cubes that could not fit into memory, or that could not fit after you added a lot of aggregations. But think back: why were Essbase and BSO invented in the first place? Because the analysis of a large SQL database took too long. This was for two reasons: (1) the star schema on which it was most likely based had to be traversed, and (2) all of the tables of that schema had to be read off the disk. BSO was an answer because it blended what was essentially a normalization of the fact data into two groups (the dense and the sparse) and combined the metadata descriptions into the outline. That took care of the star schema. Then BSO went on to precompute all of the hierarchies' combinations so they would be available quickly without reading all of the input data.
When Essbase version 5 came out, dynamic calculation was added, which was most often used for calculations within the dense block. This recognized that computers were larger and had more memory, so more data could fit into that memory and be calculated on the fly. Reading the data does not take long if it is already in memory. ASO takes that one step further and does all of its calculations on the fly. If that data has to be read off the disk drive, you are little better off than with a SQL database. I will return to this topic later.