attributes that are considered for aggregation and determine if we need to
suggest additional aggregation candidates, for example, because we detected
that most user queries not resolved from the cache are resolved by partition
reads rather than by aggregation reads. Analysis Services uses the Aggregation
Usage property to determine which attributes it should consider for aggregation.
This property can take one of four values: Full (every aggregation for the
cube must include the attribute), None (no aggregation may use the attribute),
Unrestricted (the aggregation design algorithm considers the attribute without
restrictions), and Default (a default rule is applied to determine whether the
attribute is considered). The administrator can change the value of this
property to influence whether an attribute is used for aggregation.
As we have already explained, partitions must be defined to enable
Analysis Services to access less data when answering a query that cannot be
answered from the data cache or from aggregations. Data must be partitioned
so that partitions match common queries. Analogously to the case of measure
groups, we must avoid partitioning in a way that requires most queries to be
resolved from many partitions. The vendor recommends that partitions contain
between 2 million and 20 million records, and that each measure group contain
fewer than 2,000 partitions. Also, a separate ROLAP partition must be used for
real-time data, and this partition must be placed in its own measure group.
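As a sketch of how partitioning can match common queries, the following MDX
query slices the cube to a single month; if the corresponding measure group is
partitioned by month, only one partition needs to be read to answer it. The
cube, dimension, and member names below are hypothetical.

    -- Query restricted to one month; under monthly partitioning only the
    -- partition holding the March 2017 data is scanned (names are hypothetical).
    SELECT [Measures].[Sales Amount] ON COLUMNS,
           [Product].[Category].[Category].MEMBERS ON ROWS
    FROM [Sales]
    WHERE ([Date].[Calendar].[Month].&[201703]);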
We can also optimize performance by writing efficient MDX queries and
expressions. For this, run-time checks in an MDX calculation must be avoided.
For example, using CASE and IIF functions that must be repeatedly evaluated
during query resolution will result in a slow execution. In that case, it is
recommended to rewrite the queries using the SCOPE statement.
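The following sketch, with hypothetical measure and dimension names, contrasts
the two approaches: in the first calculation the IIF condition is checked for
every cell, whereas the SCOPE statement restricts the assignment to the
relevant subcube once, when the MDX script is applied.

    -- Calculation whose condition is evaluated repeatedly at query time:
    CREATE MEMBER CURRENTCUBE.[Measures].[Sales Amount EUR] AS
      IIF([Currency].[Currency].CURRENTMEMBER IS [Currency].[Currency].[EUR],
          [Measures].[Sales Amount] * [Measures].[Average Rate], NULL);

    -- Equivalent calculation rewritten with a SCOPE assignment:
    CREATE MEMBER CURRENTCUBE.[Measures].[Sales Amount EUR] AS NULL;
    SCOPE([Measures].[Sales Amount EUR], [Currency].[Currency].[EUR]);
        THIS = [Measures].[Sales Amount] * [Measures].[Average Rate];
    END SCOPE;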
If possible, the Non_Empty_Behavior property should be specified for
calculations to enable the query execution engine to use the bulk evaluation
mode.
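As a sketch with hypothetical measure names, the property can be attached to a
calculated member to tell the engine which measure determines when the
calculation is empty:

    -- The calculation is treated as empty wherever Sales Amount is empty,
    -- which allows bulk (block) evaluation (names are hypothetical).
    CREATE MEMBER CURRENTCUBE.[Measures].[Net Profit] AS
      [Measures].[Sales Amount] - [Measures].[Total Product Cost],
      NON_EMPTY_BEHAVIOR = {[Measures].[Sales Amount]};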
In addition, EXISTS should be used rather than filtering on member properties,
since this also enables the bulk evaluation mode.
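The following sketch, again with hypothetical names, illustrates the rewriting:
the first query filters customers on a member property and is evaluated cell by
cell, whereas the second one uses EXISTS against the corresponding attribute
hierarchy.

    -- Filtering on a member property prevents bulk evaluation:
    SELECT [Measures].[Sales Amount] ON COLUMNS,
      FILTER([Customer].[Customer].[Customer].MEMBERS,
             [Customer].[Customer].CURRENTMEMBER.PROPERTIES("Gender") = "F")
      ON ROWS
    FROM [Sales];

    -- Using EXISTS with the Gender attribute hierarchy enables bulk evaluation:
    SELECT [Measures].[Sales Amount] ON COLUMNS,
      EXISTS([Customer].[Customer].[Customer].MEMBERS,
             {[Customer].[Gender].&[F]})
      ON ROWS
    FROM [Sales];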
Using too many subqueries should also be avoided if possible. Finally, whenever
possible, a set should be filtered before it is used in a cross join, in order
to reduce the cube space over which the cross join is computed.
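As a sketch, consider the following two queries over hypothetical names, where
the filter condition depends only on the product member and can therefore be
applied before the cross join:

    -- The filter is evaluated over the full cross join of products and countries:
    SELECT [Measures].[Sales Amount] ON COLUMNS,
      FILTER(CROSSJOIN([Product].[Product].[Product].MEMBERS,
                       [Customer].[Country].[Country].MEMBERS),
             INSTR([Product].[Product].CURRENTMEMBER.MEMBER_CAPTION,
                   "Mountain") > 0)
      ON ROWS
    FROM [Sales];

    -- Filtering the product set first reduces the space of the cross join:
    SELECT [Measures].[Sales Amount] ON COLUMNS,
      CROSSJOIN(FILTER([Product].[Product].[Product].MEMBERS,
                       INSTR([Product].[Product].CURRENTMEMBER.MEMBER_CAPTION,
                             "Mountain") > 0),
                [Customer].[Country].[Country].MEMBERS)
      ON ROWS
    FROM [Sales];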
The cache of the query engine must be used efficiently. First, the server
must have enough memory to store query results in memory for reuse in
subsequent queries. We must also define calculations in MDX scripts, because
these have a global scope that enables the cache entries related to these
calculations to be shared across sessions with the same set of security
permissions.
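As a sketch with hypothetical names, the first definition below is placed in
the cube's MDX script and is therefore globally scoped, while the second one
is scoped to a single query and cannot share its cache across sessions:

    -- Defined in the MDX script of the cube: global scope, shareable cache.
    CREATE MEMBER CURRENTCUBE.[Measures].[Gross Margin] AS
      [Measures].[Sales Amount] - [Measures].[Total Product Cost];

    -- Defined in a query: scoped to that query only.
    WITH MEMBER [Measures].[Gross Margin] AS
      [Measures].[Sales Amount] - [Measures].[Total Product Cost]
    SELECT [Measures].[Gross Margin] ON COLUMNS
    FROM [Sales];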
Finally, the cache must be warmed by executing a set of predefined
representative queries using any appropriate tool.
Other techniques are similar to the ones used for tuning relational
databases, like tuning memory and processor usage. For details, we refer
the reader to the Analysis Services documentation.