Also, remember that when you use a shared server, the UGA is located in the SGA. This means that when switching
over to a shared server, you must be able to accurately determine your expected UGA memory needs and allocate
appropriately in the SGA via the LARGE_POOL_SIZE parameter. The SGA requirements for the shared server
configuration are typically very large. This memory must typically be preallocated and thus can only be used by the
database instance.
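As a rough sketch of that sizing exercise (the 600MB figure here is purely illustrative, not a recommendation), you might set LARGE_POOL_SIZE and then watch how much of the large pool the shared server sessions actually consume:

SQL> alter system set large_pool_size = 600M scope=both;
SQL> select pool, name, bytes
  2    from v$sgastat
  3   where pool = 'large pool'
  4   order by bytes desc;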
It is true that with a resizable SGA, you may grow and shrink this memory over time, but for the most part,
it will be owned by the database instance and will not be usable by other processes.
Note
Contrast this with a dedicated server, where anyone can use any memory not allocated to the SGA.
If the SGA is much larger due to the UGA being located in it, where does the memory savings come from? It comes
from having that many fewer PGAs allocated. Each dedicated/shared server has a PGA containing process information:
sort areas, hash areas, and other process-related structures. It is this memory need that you remove from the system by
using a shared server. If you go from using 5,000 dedicated servers to 100 shared servers, it is the cumulative size of
the 4,900 PGAs (excluding their UGAs) you no longer need that you are saving with a shared server.
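If you want a feel for how much PGA memory your existing dedicated servers are holding, a quick (hypothetical, ad hoc) query against V$PROCESS and V$SESSION sums the per-process allocations:

SQL> select count(*) sessions,
  2         round( sum(p.pga_alloc_mem)/1024/1024 ) total_pga_mb
  3    from v$process p, v$session s
  4   where p.addr = s.paddr
  5     and s.type = 'USER';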
DRCP
So, what about DRCP, the Database Resident Connection Pooling feature (available with Oracle 11g and above)? It has
many of the benefits of a shared server, such as reduced processes (we are pooling) and possible memory savings,
without the drawbacks. There is no chance of artificial deadlock; for example, the session that holds the lock on the
resource in the earlier example would have its own dedicated server from the pool, and that session would be able to
release the lock eventually. DRCP doesn't have the multithreading capability of a shared server; when a client process
gets a dedicated server from the pool, it owns that process until the client releases it. Therefore, DRCP is best suited
for client applications that frequently connect, do some relatively short operation, and disconnect, over and over and
over again; in short, for client processes whose API does not provide an efficient connection pool of its own.
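As a sketch of how DRCP is put to use (the host, service name, and credentials below are placeholders), the DBA starts the default pool with the DBMS_CONNECTION_POOL package, and clients then ask for a pooled server in their connect string:

SQL> exec dbms_connection_pool.start_pool;

-- clients request a pooled server via (SERVER=POOLED) in the TNS entry,
-- or with the EZConnect syntax:
--   sqlplus username/password@//host:1521/ORCL:POOLED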
Dedicated/Shared Server Wrap-up
Unless your system is overloaded, or you need to use a shared server for a specific feature, a dedicated server will
probably serve you best. A dedicated server is simple to set up (in fact, there is no setup) and makes tuning easier.
With shared server connections, a session's trace information (SQL_TRACE=TRUE output) may be spread across
many individual trace files, making it more difficult to reconstruct what that session has done. With the advent of the
DBMS_MONITOR package in Oracle 10g and above, much of the difficulty has been removed, but it still complicates
matters. Also, if you have multiple related trace files generated by a session, you can use the TRCSESS utility to combine
them.
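For example (the session id and serial# here are placeholders), you could enable tracing for the session of interest via DBMS_MONITOR and then stitch its shared server trace files back together with trcsess:

SQL> exec dbms_monitor.session_trace_enable( session_id => 123, serial_num => 4567, waits => true, binds => false );

$ trcsess output=combined.trc session=123.4567 *.trc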
Note
If you have a very large user community and know that you will be deploying with a shared server, I would
urge you to develop and test with a shared server. Developing under just a dedicated server and never testing on a
shared server increases your likelihood of failure. Stress the system, benchmark it, and make sure that your
application is well behaved under a shared server; that is, make sure it does not monopolize shared servers for too
long. If you find that it does so during development, it is much easier to fix at that stage than during deployment.