In the calculation script header, Bill inserted the following:
SET LOCKBLOCK HIGH;
SET AGGMISSG ON;
SET CACHE HIGH;
SET CALCPARALLEL 4;
SET UPDATECALC OFF;
Bill did not ask anything about the server memory or processors, the number of concurrent users, or what the calculation was supposed to do. For Bill, testing was not a priority. In any event, the customer just stared. They were mesmerized. You would have thought Bill was Rain Man. Did it decrease the calculation time? Of course not. Sadly, this story defines the state of BSO tuning today. Somehow the bigger-is-better philosophy became the rule. The simple truth is that there is no one-size-fits-all approach to BSO tuning.
To illustrate my point, let us start the tuning discussion by describing the caches that can make a difference. Consider this a quick discussion of setting caches for those who do not have three weeks to dedicate to benchmarking. The object here is to make the system as fast as possible in a reasonable time frame, rather than make it worse. Keep in mind that improving calculation time may slow user reporting and vice versa; the optimum configuration is likely to be a compromise. Finally, when the operating system is 32-bit, there is a hard limit of 4 GB of memory. Exceed this limit and bad things will happen. Whether the server is 64- or 32-bit, never allocate more resources than are available on the server.
Most often, caches are tuned in a vacuum rather than under real-world conditions. Unfortunately, the data, data file, index, and operating system caches are shared by everyone using the database. Each request competes for resources. Testing processes in isolation might pinpoint best-case performance, but probably not real-world results. For expediency, I suggest that initial tests be run individually.
For final testing, it is important that the test simulate the production environment, complete with activity. Are calculation scripts and reports both to be tested? Is the purpose of the test to determine best-, average-, or worst-case performance? Are the calculations batch or interactive? How many concurrent calculations and reports will be tested? If no stress-testing software is available, execute multiple MaxL processes from a command file, as sketched below.
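A simple command file can stand in for stress-testing software by launching several MaxL shells at once. The sketch below assumes Windows, a MaxL script named stress_calc.msh, a stored calculation script named aggtest, and placeholder credentials; all of these names are hypothetical and should be adjusted to the environment under test.

:: stress_test.cmd -- crude concurrency test (all names and credentials are placeholders)
:: each START launches an independent MaxL shell running the same script
start "calc1" essmsh stress_calc.msh admin password localhost
start "calc2" essmsh stress_calc.msh admin password localhost
start "calc3" essmsh stress_calc.msh admin password localhost
start "calc4" essmsh stress_calc.msh admin password localhost

/* stress_calc.msh -- logs in, runs the calculation, and exits */
/* positional parameters $1, $2, $3 are supplied on the command line above */
login $1 $2 on $3;
execute calculation Sample.Basic.aggtest;
logout;
exit;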
4.6.1 Index Cache
This is the easiest cache to set, particularly in 64-bit: place the entire index in memory. For those on 32-bit, or in a tight memory situation where the index is really large, test with 50 or 75% of the index in memory. For testing, aggregate the sparse dimensions, increasing the index cache size with each test. Start the testing with the upper blocks calculated or not, but be consistent. If the upper blocks are cleared prior to the test, I have seen situations where a dense restructure will shrink the PAG file and speed the aggregation. Stopping and starting the database will flush the caches. Under no circumstances set the cache larger than the index, because Essbase will allocate everything that is specified. Figure 4.5 captures the physical index size for Sample.Basic, 8,216,576 bytes, as shown in EAS. Figure 4.6 shows that the index cache setting is 1,024,000 and the current value is also 1,024,000. As an experiment, I increased the index cache from 1,024 KB to double the physical index size, 16,433 KB. Be certain to stop/start the database after changing the cache settings and then execute a calculation. Figure 4.7 shows the results.
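For repeatable cache tests, the change, the restart, and the calculation can all be scripted in MaxL so that each run is identical. A minimal sketch follows, assuming a stored calculation script named aggtest and placeholder credentials; the cache size mirrors the doubled figure used in the experiment above.

/* index_cache_test.msh -- one possible test sequence; names and credentials are placeholders */
login admin password on localhost;
/* set the index cache to the test size */
alter database Sample.Basic set index_cache_size 16433KB;
/* stop and start the database so the new setting takes effect and the caches are flushed */
alter application Sample unload database Basic;
alter application Sample load database Basic;
/* run the aggregation being timed */
execute calculation Sample.Basic.aggtest;
logout;
exit;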