If your query performance degrades as the number of concurrent users running
queries increases, consider scaling out by implementing what's known as an OLAP
farm. This architecture is widely used in large implementations: it involves installing
multiple Analysis Services instances on different servers and using network load
balancing to distribute user queries between those servers. Each instance needs to
host the same database, and each copy of the database must contain exactly the same
data for queries to be answered consistently. As the number of concurrent users
increases, you can easily add new servers to handle the increased query load. This
approach also has the added advantage of removing a single point of failure: if one
Analysis Services server fails, the others take on its load automatically.
Making sure that data is the same across all servers is a complex operation, and you
have several options for doing it: the Analysis Services database synchronization
functionality, backup and restore, copying the data files from one location to another
with a tool such as Robocopy, or the Analysis Services shared scalable database
functionality. The following white paper describes how some of these options can be
used to implement a network load balanced OLAP farm for Analysis Services:
http://tinyurl.com/nlbssas
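As a rough illustration of the first option, database synchronization can be scripted as an XMLA Synchronize command executed against each user-facing server; the server name and database ID below are placeholders that you would replace with your own:

```xml
<Synchronize xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <!-- The processing server that holds the master copy of the database -->
    <ConnectionString>Provider=MSOLAP;Data Source=ProcessingServer</ConnectionString>
    <Object>
      <DatabaseID>SalesCube</DatabaseID>
    </Object>
  </Source>
  <!-- Don't overwrite role memberships defined on the query server -->
  <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
  <!-- Compress data sent over the network during synchronization -->
  <ApplyCompression>true</ApplyCompression>
</Synchronize>
```

A script like this can be run from SQL Server Management Studio or scheduled with SQL Server Agent against each query server in the farm after processing completes on the processing server.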
Shared scalable databases have a significant advantage over synchronization and
file-copying in that they don't involve moving any files at all. They can be
implemented using the same approach described in the white paper mentioned
earlier, but instead of copying the databases between instances you process a
database (attached in ReadWrite mode) on one server, detach it from there, and then
attach it in ReadOnly mode to one or more user-facing servers for querying, while
the files themselves stay in one place. You do, however, have to ensure that your
disk subsystem does not become a bottleneck as a result.
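The detach-and-attach steps just described can also be scripted in XMLA; as a sketch, the database ID and the UNC path to the database folder below are placeholders for your own environment. First, detach the processed database from the processing server:

```xml
<Detach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SalesCube</DatabaseID>
  </Object>
</Detach>
```

Then attach it in ReadOnly mode on each user-facing server, pointing at the shared storage location so that the files themselves never move:

```xml
<Attach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Folder on shared storage containing the detached database files -->
  <Folder>\\fileserver\ssasdata\SalesCube.0.db</Folder>
  <!-- Attach read-only so multiple instances can share the same files -->
  <ReadWriteMode
    xmlns="http://schemas.microsoft.com/analysisservices/2008/engine/100">
    readOnly</ReadWriteMode>
</Attach>
```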
Summary
In this chapter we've learned how to optimize the performance of our cube so that
our users' queries execute as quickly as possible. We've looked at how Analysis
Services processes queries and what roles the Storage Engine and the Formula
Engine play in this; we've seen how building partitions and aggregations can
improve the performance of the Storage Engine, and we've also seen how to write
MDX that ensures the Formula Engine works as efficiently as possible. Last of all,
we've seen how important caching is to overall query performance and what we
need to do to ensure that queries can be answered from cache as often as possible,
and we've discussed how to scale out Analysis Services using network load
balancing to handle large numbers of concurrent users. In the next chapter, we'll
take a look at how to apply security to the data in a cube.
 