•  Sunday Afternoon: Add Current Week Data Slices to cube1 and cube2.
•  Monday Morning: Add Data Slice to cube2.
•  Thursday Evening: Add Data Slice to cube2.
•  Saturday Morning: Start the Process All over Again.
This is the final process, though in its current state it is not exactly how it existed when
it was originally put into production. A few steps had to be altered as discoveries were
made about what does and does not work well with slices in real life at production vol-
umes. A review of this information and discovery process is included in the next few
pages. In addition, as each component of the process is discussed, the code relevant to
that part of the process is also provided.
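For reference, adding a weekly Data Slice to one of the cubes follows the standard ASO load buffer pattern. The sketch below is illustrative only; the buffer id, data file path, and rules file name are placeholders, not the actual production objects:
/* stage the incremental data in a load buffer */
alter database ${APP_NAME}.${DB_NAME} initialize load_buffer with buffer_id 1;
import database ${APP_NAME}.${DB_NAME} data
    from data_file '/data/current_week.txt'
    using server rules_file 'WeekLoad'
    to load_buffer with buffer_id 1
    on error abort;
/* commit the buffer as its own incremental Data Slice */
import database ${APP_NAME}.${DB_NAME} data from load_buffer with buffer_id 1 create slice;
The create slice clause is what keeps each incremental load in its own Data Slice instead of folding it into the main database as part of the load.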
5.5.2.2.1 The Saturday Morning Process In the original deployment, the starting
concept was how very cool it would be to keep the aggregations each week and not have
to rebuild them. After all, this is an advantage of using Data Slices, and aggregations
take up the most time in the ASO cube-building process. In real life, there were two
complications with keeping the aggregations. The first complication was that the system
required you to drop the aggregations to merge the slices. Because the team had no
idea how long they could legitimately go without merging slices before performance was
affected, because the process needed to be automated, and because there was an entire
weekend that could be used for processing, proceeding with caution and merging weekly seemed to
be the best solution. The second complication was more practical: dimension builds take
forever and a day if you perform them on a large aggregated cube. It is really not the
dimension build that takes so long, but the restructure. This was an initial mistake that
was made out of ignorance. In the original process, the reaggregation of all the cubes
was done on Saturday nights, and the dimension builds were completed on Sunday
mornings when the files were available. On cube1 the dimension builds were excessive
even when the cube had only one or two weeks of data, and they got worse as data was
added. Each week that was added caused the build and restructure process to extend one
to two hours more. It was very quickly recognized that it would not take many weeks to
blow the timeline right out of the water, and by year's end this step in the weekly update
would be taking until Wednesday to finish. Hence, a change was made to the process
to accommodate these real-life lessons learned. The small cubes are still aggregated on
Saturday because the system can handle the time these take from a resource perspective.
The servers are much busier on Sundays and getting the small cubes out of the way is
one less task that has to be done. The code used to complete the first Saturday task,
dropping the aggregations, is:
alter database ${APP_NAME}.${DB_NAME} clear aggregates;
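With the aggregations gone, the week's accumulated slices can also be merged back into the main database. A minimal sketch of that statement, assuming the intent is to fold every incremental slice into the main slice, is:
/* merge all incremental data slices into the main database slice */
alter database ${APP_NAME}.${DB_NAME} merge all data;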
The second task on Saturday mornings is to export the data. The export is a critical
part of the disaster recovery (DR) plan, and it has already been implemented several
times. Essentially, if the ETL (extract, transform, and load) folks provide the wrong data
and too many steps are processed, there may come a moment when there is a need to revert to the last good export.
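In MaxL, the export itself is a single statement. The sketch below is illustrative only, and the target file name is a placeholder rather than the real backup location:
/* export the database's data for disaster recovery purposes */
export database ${APP_NAME}.${DB_NAME} data to data_file '/backup/weekly_export.txt';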