It will always be faster to read two values (A and B) and calculate their sum (A + B) in memory than it will be to read three values (A, B, and the precalculated A + B). That was the guiding principle behind BSO Dense and Sparse. Dynamic calcs within the dense block were essentially the first-level implementation of my father's rule. In fact, you might ask why Essbase was designed to calculate anything in advance. The answer is that computers (i.e., memory) were not large enough to read all of the data in at one time to allow the “arithmetic” to run without interruptions (“errands”) to get more data.
Now, with larger computers and larger memory spaces, ASO completes the implementation of my father's rule, and as a result: In ASO, nothing is precalculated. I should qualify that statement with a slight addition: In ASO, nothing is precalculated in the level-0 View.
Will there ever be a day when computers are fast enough that we do not need even the precalculations of ASO Aggregate Views? Not with the current design, where main memory is separated from the CPU and transfer is limited to bus speeds. There is still an “errand” to run, to fetch the number from memory. Oh, it is far faster than fetching from disk, but still the errand must be run. Maybe someday a “quantum” computer or some other new design might merge memory and processing, but until the day when “errands” are eliminated, we will all be slaves to their running to some extent.
In ASO, there is an opportunity to make a choice similar to Dense/Sparse by tagging one dimension as the “Compression” dimension. I described some of the workings of the Compression dimension a number of pages back when we discussed the five members of the [Measures] dimension in ASOsamp, but now I will go into the full story.
A dimension tagged as Compression is essentially a one-dimensional “Dense Block.” Unlike BSO, where grouping into a dense block was done to avoid precalculation, our motivation in ASO is different because nothing is precalculated. As has been seen, ASO works its magic by attaching a long bitmap with a key length of some multiple of 8 bytes (let us assume 24 bytes for a medium metadata cube, so we are no longer referring to ASOsamp) onto each 8 bytes of fact data. This long key works like the punched cards to allow ASO to work at any level of the Stored members of the cube.
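To make the punched-card analogy a little more concrete, here is a minimal sketch of the idea: one stored-member ordinal per non-Compression dimension, packed into a fixed-width key that travels with each 8-byte value. The dimension names and sizes are invented for illustration, and the layout is an assumption, not the actual ASO kernel format.

# Illustrative sketch only: pack one stored-member ordinal per dimension
# into a fixed-width bitmap key, in the spirit of the punched-card analogy.
# Dimension names/sizes and the 24-byte key width are assumptions.
DIMENSIONS = {          # dimension -> number of stored members (assumed)
    "Years": 4,
    "Time": 17,
    "Geography": 300,
    "Products": 1000,
    "Customers": 50000,
}
KEY_BYTES = 24          # the "medium metadata cube" key size from the text

def bits_needed(member_count):
    # Bits required to hold an ordinal in the range 0..member_count-1.
    return max(1, (member_count - 1).bit_length())

def make_key(member_ordinals):
    # Concatenate each dimension's member ordinal into one long bitmap key.
    key = 0
    for dim, size in DIMENSIONS.items():
        width = bits_needed(size)
        ordinal = member_ordinals[dim]
        assert 0 <= ordinal < size
        key = (key << width) | ordinal
    assert key.bit_length() <= KEY_BYTES * 8
    return key.to_bytes(KEY_BYTES, "big")

key = make_key({"Years": 1, "Time": 5, "Geography": 42,
                "Products": 999, "Customers": 31337})
print(len(key), "bytes of metadata accompany each 8-byte fact value")

Because the key identifies a member of every dimension at once, any stored intersection can be located directly from it, with no precalculated block structure required.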
The 24 bytes of metadata for each 8 bytes of fact data work out to a cost overhead of 300% for each data element. What happens with Compression is that groups of up to 16 stored fact data members (in the dimension tagged as Compression) are combined into a total of 128 bytes (16 members × 8 bytes), and that group then has 24 bytes of metadata attached. This results in only an 18.75% (24 ÷ 128) cost overhead for the data grouped by Compression, a significant reduction.
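A quick back-of-the-envelope check of those two figures, using the 24-byte key assumed above:

KEY_BYTES = 24        # bitmap key (metadata) per record
CELL_BYTES = 8        # one stored fact value
BUNDLE_SIZE = 16      # values grouped under one key by the Compression dimension

# Without Compression: one 24-byte key for every 8-byte value.
print(KEY_BYTES / CELL_BYTES)                   # 3.0    -> 300% overhead

# With Compression: one 24-byte key for a 128-byte group of 16 values.
print(KEY_BYTES / (BUNDLE_SIZE * CELL_BYTES))   # 0.1875 -> 18.75% overhead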
This reduction in overhead from 300% to just 18.75% (a 281.25 percentage-point reduction) is the best case. ASO groups the data into “bundles” of up to 16, so if there is not some multiple of 16 stored members in the Compression dimension, the last bundle will be only partially filled, thereby reducing the overall gain. The bitmap key now serves the function of the sparse index entry in BSO. Also, if some pieces of fact data do not always exist for each member of your Compression dimension, ASO will have to leave the corresponding “spot” blank.
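The cost of a partially filled last bundle, and of blank spots where fact data simply does not exist, can be approximated with the same arithmetic. The function below is only a sketch of that reasoning, under the assumption that every bundle that gets created occupies the full 128 bytes of data space plus its 24-byte key; it is not an Essbase utility.

import math

KEY_BYTES, CELL_BYTES, BUNDLE_SIZE = 24, 8, 16

def effective_overhead(stored_members, populated_per_bundle=None):
    # Metadata-plus-blank-slot overhead per *populated* value, assuming each
    # bundle is allocated in full (16 slots of 8 bytes plus one 24-byte key).
    bundles = math.ceil(stored_members / BUNDLE_SIZE)
    fill = populated_per_bundle if populated_per_bundle else stored_members / bundles
    bytes_per_bundle = KEY_BYTES + BUNDLE_SIZE * CELL_BYTES   # 152 bytes
    useful_bytes = fill * CELL_BYTES
    return (bytes_per_bundle - useful_bytes) / useful_bytes

print(f"{effective_overhead(48):.2%}")      # 48 members: best case, 18.75%
print(f"{effective_overhead(40):.2%}")      # last bundle half empty: 42.50%
print(f"{effective_overhead(48, 4):.2%}")   # only 4 values per bundle: 375.00%

On this model, the 18.75% figure holds only when the bundles are full; the sparser the data along the Compression dimension, the smaller the gain, which is why the average bundle fill discussed next matters.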
7.4.3.3.1 Average Bundle Fill (ABF)
ASO builds the members of the Compression dimension into the bundles of 16 in the order in which they first appear in the outline (it does not matter if they later appear as shared). If any member of a bundle of 16 exists in the fact data, then a “card” with that full set of 16 members is built. Given a dimension with 48 level-0 members, the bundles will be built (Bm = bundle member) with Bm1-Bm16 on one record, Bm17-Bm32 on another record, and Bm33-Bm48 on another record. Now suppose that your fact data is only populated for Bm3, Bm5,