syntax. Aggregations are adjusted automatically, so this is not an issue that needs to be
addressed.
That is a good solution when all six slices need to be replaced. What can be done to fix
just one Data Slice? Is there anything built in for that situation? Yes and no. One solution
that has been implemented with success is to load the original file with a subtract values
statement. This requires that the original data still be available for the reload (that is, the
staging table must not already have been fixed):
import database ${APP_NAME}.${DB_NAME} data from load_buffer with
buffer_id 1 subtract values create slice;
While this process does create an additional slice, it reverses out the values, and then
the reload of the new, corrected data creates yet another slice. When the merge is done,
there could conceivably be a few extra zeros in the database, but in this use case there
are already so many zeros that this is probably not an issue.
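Put together, the correction is two buffer loads: one that reverses out the original values and one that loads the corrected data. A minimal sketch in MaxL, assuming the corrected file has been staged into a second buffer (the buffer IDs here are illustrative):

import database ${APP_NAME}.${DB_NAME} data from load_buffer with
buffer_id 1 subtract values create slice;
import database ${APP_NAME}.${DB_NAME} data from load_buffer with
buffer_id 2 create slice;

The first statement negates the bad values in a new slice; the second loads the corrected values in yet another slice, and a later merge collapses them all.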
What about fixing data once the bad data has been merged into the main database?
The solution to this problem is no doubt the second greatest addition to ASO cube
capabilities in the last few releases. In the documentation, this feature is described
as “Clearing Data from Specific Regions of Aggregate Storage Databases.” This was
a remarkable advancement for ASO cubes, and the strength of this feature cannot be
touted highly enough. Again, understanding what was originally done may help prove
this point. Up until the first release offering this feature, if existing data in a cube needed
to be changed, a process similar to this would have to be implemented:
1. Extract the “bad data” using a Report Script.
2. Clear the “bad data” intersections by taking the extract file and reloading it
(to get all the proper intersections where the “bad data” lived), replacing the
existing data values with #MISSING.
3. Load the new “good data.”
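For reference, the three steps above might be sketched in MaxL roughly as follows; the data-file and rules-file names are purely illustrative, and step 2 assumes a rules file that replaces the data column with #MISSING:

export database ${APP_NAME}.${DB_NAME} using server report_file 'BadData'
to data_file 'baddata.txt';
import database ${APP_NAME}.${DB_NAME} data from server data_file 'baddata.txt'
using server rules_file 'ClrMiss' on error abort;
import database ${APP_NAME}.${DB_NAME} data from server data_file 'gooddata.txt'
on error abort;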
I actually ran this process for years because it was the only option available.
It works well, but on a large cube it could take as much as six hours to fully process.
Fast-forward to the present day and watch the number of steps go down. Another great
thing about this new process for clearing data is that the data is physically removed from
the database. Using the old method, intersections were not removed; their values were
simply replaced with #MISSING values. In light of what has already been revealed with
regard to Data Slices, there are a number of things to keep in mind when working with
the clearing process.
To use the clear data region statement, the specific region of the database where the
clear is to occur must be identified as an MDX set expression. It is a good idea to
write an MDX query using that set expression first, to confirm that the records it retrieves
are the records to be deleted. Once the set expression works in a query, it can be
used in the clear data region statement.
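As a sketch, assuming the bad data lives at the hypothetical member [Jan], the region could first be verified with an MDX query and then cleared with the corresponding MaxL statement:

SELECT {[Jan]} ON COLUMNS FROM ${APP_NAME}.${DB_NAME};

alter database ${APP_NAME}.${DB_NAME} clear data in region '{[Jan]}';

The same set expression appears in both statements, so whatever the query retrieves is exactly what the clear will remove.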
The second thing to keep in mind is that a number of things will automatically happen
if there are multiple slices in the cube when this command is issued. The system will:
•  Merge the slices
•  Clear the specified region
•  Rematerialize all aggregate views