it begins using virtual memory, such as the Windows page file or the UNIX swap file, which is stored on hard disk drives. Memory speed is measured in nanoseconds and disk speed in milliseconds; virtual memory is roughly a million times slower than RAM.
Essbase server memory rule of thumb: 1 GB per block storage option (BSO) application puts you in a safe zone; you will note the sizing table is more conservative at 1.5 GB per application. For aggregate storage option (ASO) applications, use 2 GB per application as the rule of thumb. Note that once RAM is allocated to an active application, Essbase does not release it back to the OS until that application is stopped. A highly complex application may require up to 2 GB on its own. For instance, one of my clients has 90 applications and 700 users, and their Essbase server was sized at 128 GB of RAM and 24 cores. I have observed older Essbase installations with 10 applications and 2 GB of total RAM; today's typical applications are more complex, hold more data, and would not function on a total of 2 GB.
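The rules of thumb above reduce to simple arithmetic. The following sketch rolls them into one function; the function name and structure are my own illustration, not an Oracle-supplied tool.

```python
def estimate_ram_gb(bso_apps, aso_apps, conservative=False):
    """Rough Essbase server RAM estimate in GB.

    bso_apps: number of block storage (BSO) applications
    aso_apps: number of aggregate storage (ASO) applications
    conservative: use the sizing table's 1.5 GB per BSO app
                  instead of the 1 GB rule of thumb
    """
    per_bso = 1.5 if conservative else 1.0  # GB per BSO application
    per_aso = 2.0                           # GB per ASO application
    return bso_apps * per_bso + aso_apps * per_aso

# Example: 90 applications, treated here as all BSO at the conservative
# rate, gives a floor estimate before adding OS and user overhead.
print(estimate_ram_gb(bso_apps=90, aso_apps=0, conservative=True))  # 135.0
```

Treat the result as a starting floor: the 128 GB, 24-core server mentioned above also had to cover 700 users and the operating system itself.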
1.4.6 Storage
As with memory, disk space is cheap. Running out of disk space will, at a minimum, cause a system outage, and can sometimes corrupt cubes and/or the security file. Keep at least 25% of the disk space on your Essbase data drive free, and expand the drive and/or archive old data as needed to retain that percentage of free space.
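A check for the 25% rule is easy to automate. This is a minimal sketch using the Python standard library; the path and the warning text are illustrative, and in practice you would point it at your actual Essbase data mount and wire it into your monitoring.

```python
import shutil

def free_space_ok(path, min_free_fraction=0.25):
    """Return True if at least min_free_fraction of the volume is free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_fraction

# Hypothetical data-drive path; substitute your Essbase data volume.
if not free_space_ok("/"):
    print("Warning: less than 25% free on the Essbase data drive")
```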
Your storage options are wide: internal disks, direct attached disks, network attached storage (NAS), and storage area networks (SAN). When using internal or direct attached disks, use RAID 10 (RAID: redundant array of inexpensive disks). RAID provides both data integrity, in the form of redundancy, and speed, in the form of multiple disks serving read and write operations. Never use software RAID; it relies on server CPU resources and will deliver suboptimal performance.
When using external storage systems such as NAS or SAN, you want as many spindles (disk drives) as possible, combined with a RAID mechanism that supports reads and writes equally well. Solid state disks (NAND memory) are becoming fashionable; however, the limited number of write cycles the drives can sustain means they will eventually fail. My belief is that the amount of data churn in an Essbase application, and/or the way some applications have been implemented, could cause a solid state drive to fail more quickly than it would under other types of applications. My test setup, a direct attached disk array with a hardware RAID controller, provides over 1 GB/sec of throughput in an eight-disk RAID 10 configuration. I have also performed testing with memory file systems and found no discernible speed difference between my direct attached storage and RAM; this means a fast physical disk subsystem will provide sufficient bandwidth and will not be a bottleneck. It also means that solid state disks and/or memory file systems provide no performance boost over a fast physical disk through Essbase version 11.1.2.1; a future Essbase version may change that.
A rule of thumb for the number of drives in the RAID array or SAN supporting Essbase: I generally start with at least four drives (spindles) in the array that holds the Essbase data stores, and for every 50 users above 100, I add another drive. For some specific Essbase applications, it may be worth dedicating a set of drives to that application alone.
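The spindle rule of thumb can be sketched in a few lines. The function name and integer-step interpretation (one extra drive per *full* 50 users above 100) are my own reading of the rule, not a formula from the text.

```python
def spindle_count(users, base_drives=4):
    """Rule-of-thumb drive count for the Essbase data array.

    Start with base_drives spindles, then add one drive for every
    full 50 users above the first 100.
    """
    extra = max(0, users - 100) // 50
    return base_drives + extra

print(spindle_count(100))  # 4
print(spindle_count(250))  # 7
```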
Be aware of the complexities involved in NAS, SAN, and/or storage for virtual machines. These are shared storage models, where the underlying design of the storage and/or of systems outside the scope of your Essbase server can affect your performance. SAN topologies