employees and has no natural hierarchy. If the employees are listed alphabetically, then
create a level for the alphabet. Keep splitting the letters into subletter groups until the
calculation performance is reasonable.
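The letter-splitting idea can be sketched as follows. This is an illustrative outline-preparation helper, not part of any Essbase API; the function name, the two-letter group size, and the bucket labels are assumptions for the sketch:

```python
from collections import defaultdict

def build_alpha_hierarchy(employees, group_size=2):
    """Group a flat alphabetical member list under letter buckets.

    Each bucket spans `group_size` consecutive first letters
    (e.g. 'A-B', 'C-D'), adding an artificial hierarchy level
    above an otherwise flat dimension. Hypothetical helper.
    """
    buckets = defaultdict(list)
    for name in sorted(employees):
        first = name[0].upper()
        # Map the first letter to the start of its group: A,B -> A; C,D -> C; ...
        start = chr(ord("A") + ((ord(first) - ord("A")) // group_size) * group_size)
        end = chr(min(ord(start) + group_size - 1, ord("Z")))
        buckets[f"{start}-{end}"].append(name)
    return dict(buckets)

print(build_alpha_hierarchy(["Adams", "Baker", "Chen", "Diaz", "Evans"]))
# Adams/Baker land under 'A-B', Chen/Diaz under 'C-D', Evans under 'E-F'
```

If performance is still poor, the same function can be re-run on each bucket with a larger alphabet prefix to create further sublevels.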
4.4.2.5 Databases with Many Attribute Dimensions Based on Large Sparse Dimensions
Attribute dimensions are glorified sparse dynamic calculations. Reporting on one or two
attribute dimensions might seem fast, but do not assume that there is no limit. Not
that many years ago, I was involved with a project that had a sparse dimension with
25,000 level-0 members. Oh, the joy! Associated with the dimension were 32 attribute
dimensions. More joy! Sure, we knew that the design was suspect, but we just could not
convince the client that reporting could get pretty slow. Besides, how many attributes
could an analyst specify on a single report? We did not need to wait long to find out that
the favorite report had all 32 attributes specified. Not only was reporting slow, Essbase
crashed and Planning died. The moral of the story is that there is a practical limit to the
number of attribute dimensions, depending on the number of members in the associated
dimensions and the number of concurrent users. For this particular database, we found
the limit to be about four attributes.
4.4.3 Block Size Mysteries
The number one issue that determines whether a BSO database is fast or slow is disk
input/output (I/O). Reads and writes of data are the limiter of performance. The best
way to create fewer blocks is to place as much data as possible into each block. That
sounds pretty simple, does it not? Actually, it is a little more complicated, but the goal is
still the same: reduce the number of blocks.
First, let us talk about the physical size limitations of the block. The administrator's
guide for 11.1.2 states that the size should be between 8 and 100 KB. For clarity, the
block size is determined by multiplying the number of stored members of the dense
dimensions. That product is then multiplied by 8 bytes. As an example, assume the
dense dimensions are Measures, Year, and Scenario, which have 8, 14, and 3 stored
members, respectively. The block size would be 8 × 14 × 3 × 8 bytes, or 2688 bytes.
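The arithmetic above can be checked with a short script. The 8-bytes-per-cell figure and the example member counts come from the text; the function itself is just a convenience for the multiplication:

```python
def block_size_bytes(stored_dense_members, cell_bytes=8):
    """BSO block size: the product of the stored member counts of
    every dense dimension, times 8 bytes per stored cell."""
    size = cell_bytes
    for count in stored_dense_members:
        size *= count
    return size

# Dense dimensions Measures, Year, Scenario with 8, 14, and 3 stored members:
print(block_size_bytes([8, 14, 3]))  # 2688 bytes, well under the 8 KB lower guideline
```

The same function makes it easy to see how quickly a block grows: adding a fourth dense dimension with even 50 stored members would push this block past 130 KB.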
Now, do not get hung up on the size too much. This 8 to 100 KB range is just a guideline, and
exceptions are common. I have seen databases with blocks of 250 KB that work
just fine. In this scenario, a balance sheet database had 250 departments and 1500
cost centers. Even though the blocks were large, there were not very many of them.
Aggregations were quick, but reporting could be slow if the report listed all of the cost
centers on the row axis. By slow, I mean 5 to 10 second retrievals instead of 2 to 3 sec-
onds. For large blocks, it is important to adjust the database caches so that Essbase can
load a sufficient number in memory. I will cover that in depth later in the chapter.
When it comes to choosing the dense dimensions, I have seen all kinds of methods.
One method is to try every combination of dimensions until the smallest PAG file is
achieved. That seems like a lot of work, and the end result might be the densest database,
but not necessarily the fastest. A less tedious approach follows. Here are the guidelines
for determining whether a dimension should be dense or not:
1. Choose dimensions that aggregate dynamically. The best way to save data space
is to not store the data in the first place. Dynamic calculations are in-memory
calculations and are very fast. They will utilize additional processors if they are