back to the starting point is identified. The lesson learned the hard way has been that although calculations can be written to clear specific cube regions, on cube1 in particular it is much faster in some instances to reload the cube to that week's starting point on Saturday morning and reprocess it completely. The exports are also a key part of the disaster recovery (DR) plan if the primary backups fail for some reason. If the cube is of any size at all, dropping the aggregates seems to positively affect the export speed. The code used to complete the second Saturday task, exporting the data, is:
export database ${APP_NAME}.${DB_NAME} level0 data to data_file '${APP_NAME}.export.txt';
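For context, a minimal sketch of how that export might be wrapped in a shell script and fed to the MaxL shell (essmsh). The application name, database name, and the ESS_USER, ESS_PASS, and ESS_SERVER environment variables are placeholders, and the clear aggregates statement (which drops the aggregate views, reflecting the earlier note that doing so seems to speed up the export) should be verified against your release's MaxL reference:

#!/bin/sh
# Hypothetical wrapper script; all names and credentials are placeholders.
# Assumes ESS_USER, ESS_PASS, and ESS_SERVER are set in the environment.
APP_NAME=cube1app
DB_NAME=cube1
essmsh <<EOF
login ${ESS_USER} ${ESS_PASS} on ${ESS_SERVER};
/* Drop the aggregate views first; this seems to improve export speed. */
alter database ${APP_NAME}.${DB_NAME} clear aggregates;
export database ${APP_NAME}.${DB_NAME} level0 data to data_file '${APP_NAME}.export.txt';
logout;
EOF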
The third task on Saturday morning is to merge the data slices. While the export might be slightly more efficient after the merge, the gains in efficiency did not warrant the assumed risk. Essentially, the export needs to be completed before the cube is altered in any significant way, to avoid risk of any type. The code used to complete the third Saturday task, merging all data slices into the main data slice, is:
alter database ${APP_NAME}.${DB_NAME} merge all data;
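To make that export-before-merge ordering enforceable in an automated job, the MaxL shell's error-handling constructs (iferror and define label) can skip the merge when the export fails. A minimal sketch, assuming the application and database names are passed as positional parameters, for example essmsh saturday.msh cube1app cube1; the script and variable names are hypothetical:

/* saturday.msh -- hypothetical script; $1 = application, $2 = database */
login $ESS_USER $ESS_PASS on $ESS_SERVER;
export database $1.$2 level0 data to data_file '$1.export.txt';
iferror 'skip_merge';  /* if the export failed, leave the cube untouched */
alter database $1.$2 merge all data;
define label 'skip_merge';
logout;
exit;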
It is good to note here that there are options with regard to how data is merged. There is another variant of the command that merges all data slices into the main slice and removes zero values:
alter database ${APP_NAME}.${DB_NAME} merge all data remove_zero_values;
In this use case, that variation of the command can never be used because meaningful zeros are loaded into the database (for example, a zero that represents a confirmed value of zero rather than missing data). It would be detrimental, to say the least, to remove those zeros programmatically.
There is also a variant of the command to merge the incremental data slices into a
single slice:
alter database ${APP_NAME}.${DB_NAME} merge incremental data;
While this might be useful in other strategies, it is not in this particular use case because the merge was happening every Saturday. To merge and create smaller incremental data slices that would later have to be merged again was too many steps, and difficult to script in an automated fashion. Do not forget that merging slices requires the same security privileges that are needed for loading data. This makes sense because the structure of the data is being altered in a significant way.
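As an illustration of that requirement, granting write access on the database covers both loading and merging; this is a sketch only, and the user name is hypothetical:

grant write on database ${APP_NAME}.${DB_NAME} to user batch_loader;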
Statistics regarding the slices and incremental data are provided in EAS:
1. Open EAS.
2. Drill down to the database level of the cube.
3. Right-click on the database and select Edit > Properties.
4. Select the Statistics tab.
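For scripted environments, comparable information can also be pulled through MaxL rather than EAS; to the best of our knowledge, the following statement returns cube size details, including incremental (slice) data, for an aggregate storage database, though the exact output varies by release and should be checked against your version's MaxL reference:

query database ${APP_NAME}.${DB_NAME} get cube_size_info;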
Figure 5.11 shows a sampling of the statistics for cube1 after the weekly processing is completed and right before the Saturday morning tasks have started.
This is an excellent place to point out a small curiosity that was discovered while working with data slices. One of the things that caused extreme confusion in the