Analysis Services defines the size of the process buffer as 1M (1,024 × 1,024) records. If the amount of memory available on the system is not sufficient to allocate a buffer for 1M records, Analysis Services uses a smaller buffer. If your system has enough memory, you can change the configuration of Analysis Services and make the process buffer larger by setting the BufferRecordLimit server configuration property.
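As a sketch of how BufferRecordLimit might be adjusted, the fragment below shows the property in the server configuration file (msmdsrv.ini), assuming it sits under the OLAP\Process node alongside the other process-buffer settings; the value of 2,097,152 records (2M) is purely illustrative:

```xml
<ConfigurationSettings>
  <OLAP>
    <Process>
      <!-- Illustrative value: raise the process buffer from 1M to 2M records -->
      <BufferRecordLimit>2097152</BufferRecordLimit>
    </Process>
  </OLAP>
</ConfigurationSettings>
```

As with any msmdsrv.ini change, the new value takes effect only after the Analysis Services service is restarted.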
NOTE
When processing a partition in a measure group that contains a measure with the
DISTINCT_COUNT aggregate function, Analysis Services requests that the data from the
relational database be sorted according to the DISTINCT_COUNT measure. With the
DISTINCT_COUNT aggregate function, Analysis Services aggregates records that have
the same value of the DISTINCT_COUNT measure (and the same set of keys); because
the records are sorted by value, all records that can be aggregated are typically
adjacent. A large process buffer is therefore unnecessary in this case, and Analysis
Services uses a 64K-record (65,536) buffer to process a partition that contains a
DISTINCT_COUNT measure.
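To illustrate why a small buffer suffices, here is a minimal Python sketch (not Analysis Services code): when records arrive sorted by the measure value, every group of records that can be collapsed into one is adjacent, so the aggregator only needs to remember the previous record rather than buffer the whole stream. The record layout below is a hypothetical simplification.

```python
def collapse_sorted(records):
    """Collapse adjacent duplicate (keys, value) records.

    records: iterable of (keys, value) tuples, pre-sorted by value --
    the order the relational engine returns for a DISTINCT_COUNT measure.
    Only the previous record ever needs to be held in memory.
    """
    out = []
    prev = None
    for rec in records:
        if rec != prev:      # new (keys, value) combination: emit it
            out.append(rec)
        prev = rec           # one-record "buffer"
    return out

# Records sorted by the distinct-count measure's value:
rows = [(("2008", "Bikes"), 17), (("2008", "Bikes"), 17),
        (("2008", "Cars"), 17), (("2008", "Bikes"), 42)]
print(collapse_sorted(rows))
# → [(('2008', 'Bikes'), 17), (('2008', 'Cars'), 17), (('2008', 'Bikes'), 42)]
```

If the input were unsorted, duplicates could appear arbitrarily far apart, and the aggregator would need to buffer far more state; sorting is what makes the 64K-record buffer sufficient.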
During partition processing, the same errors can occur that were encountered during
dimension processing, discussed earlier in this chapter, and you can use the
ErrorConfiguration object to control how Analysis Services treats them. The single
difference is that, during partition processing, keys are not required to be unique,
so the KeyDuplicate error cannot occur.
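As a sketch, an ErrorConfiguration override for a partition might look like the ASSL fragment below. The element names are standard ErrorConfiguration properties, but the values are illustrative, and KeyDuplicate handling is omitted because, as noted, that error cannot occur during partition processing:

```xml
<ErrorConfiguration>
  <!-- Illustrative values: tolerate up to 100 key errors before stopping -->
  <KeyErrorLimit>100</KeyErrorLimit>
  <KeyErrorAction>ConvertToUnknown</KeyErrorAction>
  <KeyNotFound>ReportAndContinue</KeyNotFound>
  <NullKeyNotAllowed>ReportAndContinue</NullKeyNotAllowed>
</ErrorConfiguration>
```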
The result of the Process Data job is a process buffer. After Analysis Services fills the
process buffer, it passes the buffer to the Write Data job.
Write Data Job
The Write Data job performs three operations:
- Analysis Services sorts the data inside the process buffer to cluster records and prepare the data for compression and for building indexes.
- Analysis Services divides the data records into segments of 64K (65,536) records, analyzes
them, calculates the compression ratio, and then compresses the records. Analysis
Services can typically produce the greatest level of compression on integer values,
whereas floating-point values have the lowest compression ratio. We recommend
that you use integer or currency data types for measures to speed up partition
processing. In addition, compression of data that uses the double data type can
lead to a loss of precision when stored in Analysis Services. The compression algo-
rithm could possibly lose a single digit in the fifteenth place of a double-precision
floating-point number. Although you can turn off the DeepCompressValue server