For these reasons, a shared server is only appropriate for an OLTP system characterized by short, frequent
transactions. In an OLTP system, transactions are executed in milliseconds; nothing ever takes more than a fraction
of a second. A shared server is highly inappropriate for a data warehouse. Here, you might execute a query that takes
one, two, five, or more minutes. Under a shared server, this would be deadly. If you have a system that is 90 percent
OLTP and 10 percent “not quite OLTP,” then you can mix and match dedicated servers and a shared server on the same
instance. In this fashion, you can dramatically reduce the number of server processes on the machine for the OLTP
users, and ensure that the “not quite OLTP” users do not monopolize the shared servers. In addition, the DBA
can use the built-in Resource Manager to further control resource utilization.
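If you do mix dedicated and shared server connections on one instance, you can verify how each session actually arrived by looking at the SERVER column of V$SESSION. A minimal sketch, assuming you have the privileges to query V$SESSION:

-- Report how current sessions are connected. SERVER is typically
-- DEDICATED or SHARED; a shared server session that is momentarily
-- idle (no user call in progress) shows up as NONE.
select username, server, count(*) as sessions
  from v$session
 where username is not null
 group by username, server
 order by username, server;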
Of course, a big reason to use a shared server is when you have no choice. Many advanced connection features
require the use of a shared server. If you want to use Oracle Net connection pooling, you must use a shared server.
If you want to use database link concentration between databases, then you must use a shared server for those
connections.
Note: If you are already using a connection pooling feature in your application (e.g., you are using the J2EE
connection pool), and you have sized your connection pool appropriately, using a shared server will only be a performance
inhibitor. You already sized your connection pool to cater for the number of concurrent connections that you will get at any
point in time; you want each of those connections to be a direct dedicated server connection. Otherwise, you just have a
connection pooling feature connecting to yet another connection pooling feature.
Potential Benefits of a Shared Server
What are the benefits of a shared server, bearing in mind that you have to be somewhat careful about the transaction
types you let use it? A shared server does three things: it reduces the number of operating system processes/threads,
it artificially limits the degree of concurrency, and it reduces the memory needed on the system. Let's discuss these
points in more detail.
Reduces the Number of Operating System Processes/Threads
On a system with thousands of users, the operating system may quickly become overwhelmed in trying to manage
thousands of processes. In a typical system, only a fraction of the thousands of users are concurrently active at any
point in time. For example, I've worked on systems with 5,000 concurrent users. At any one point in time, at most
50 were active. This system would work effectively with 50 shared server processes, reducing the number of processes
the operating system has to manage by two orders of magnitude (100 times). The operating system can now, to a large
degree, avoid context switching.
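The number of shared server processes is controlled by the SHARED_SERVERS initialization parameter (with MAX_SHARED_SERVERS as an upper bound the instance may grow to under load), and their activity is visible in V$SHARED_SERVER. A minimal sketch for the 50-active-users example above; the values 50 and 100 are purely illustrative:

-- Illustrative sizing only: start with 50 shared servers and let the
-- instance add more under load, up to 100.
alter system set shared_servers = 50;
alter system set max_shared_servers = 100;

-- See the shared server processes and how busy they have been
-- (BUSY and IDLE are cumulative times in hundredths of a second).
select name, status, requests, busy, idle
  from v$shared_server;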
Artificially Limits the Degree of Concurrency
Speaking as a person who has been involved in many benchmarks, the benefits of this seem obvious. When running
benchmarks, people frequently ask to run as many users as possible until the system breaks. One of the outputs of
these benchmarks is always a chart that shows the number of concurrent users versus the number of transactions
(see Figure 5-3).
 
 