We can observe that even though we allowed the SORT_AREA_SIZE to be up to 1GB, we really only
used about 10MB. This shows that the SORT_AREA_SIZE setting is an upper bound, not the default and only allocation
size. Also, we performed just one sort again, but this time it was entirely in memory; no temporary space on
disk was used, as evidenced by the lack of physical I/O.
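You can watch this behavior in your own session by querying its PGA statistics before and after a large sort. A minimal sketch, assuming you have SELECT access to the V$ views; the statistic names 'session pga memory' and 'session pga memory max' are the standard ones in V$STATNAME:

```sql
-- Report the current session's PGA memory usage.
-- V$MYSTAT holds statistics for the current session only;
-- joining to V$STATNAME translates statistic# into a name.
select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name like '%pga memory%';
```

Running this query before and after a large ORDER BY makes the "as needed" allocation visible: the session PGA memory climbs while the sort runs rather than being reserved up front.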
If you run this same test on various versions of Oracle, or perhaps even on different operating systems, you might
see different behavior, and I would expect that your numbers in all cases would be a little different from mine. But the
general behavior should be the same. In other words, as you increase the permitted sort area size and perform large
sorts, the amount of memory used by your session will increase. You might notice the PGA memory going up and
down, or it might remain constant over time, as just shown. For example, if you were to execute the previous test in
Oracle8i, I'm sure you'd notice that PGA memory does not shrink back in size (i.e., the SESSION PGA MEMORY equals
the SESSION PGA MEMORY MAX in all cases). This is to be expected, as the PGA is managed as a heap in 8i releases and
is created via malloc()-ed memory. In 9i and above, new methods attach and release work areas as needed using
operating system-specific memory allocation calls.
Here are the important things to remember about using the *_AREA_SIZE parameters:
These parameters control the maximum amount of memory used by a SORT, HASH, or BITMAP MERGE operation.
A single query may have many operations taking place that use this memory, and multiple
sort/hash areas could be created. Remember that you may have many cursors opened
simultaneously, each with its own SORT_AREA_RETAINED_SIZE needs. So, if you set the sort area
size to 10MB, you could use 10, 100, 1,000 or more megabytes of RAM in your session. These
settings are not session limits; rather, they are limits on a single operation, and your session
could have many sorts in a single query or many queries open that require a sort.
The memory for these areas is allocated on an “as needed” basis. If you set the sort area size to
1GB as we did, it doesn't mean you'll allocate 1GB of RAM. It only means that you've given the
Oracle process the permission to allocate that much memory for a sort/hash operation.
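To make the *_AREA_SIZE parameters take effect at all, the session must be using manual workarea management. A minimal sketch, assuming you have the ALTER SESSION privilege; the 10MB value is purely illustrative:

```sql
-- Switch this session to manual workarea sizing so the
-- *_AREA_SIZE parameters are honored, then permit each
-- individual sort operation up to 10MB of PGA memory.
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 10485760;
```

Note that, as the list above stresses, the 10MB bounds each sort operation, not the session as a whole.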
Automatic PGA Memory Management
Starting with Oracle9i Release 1, a new way to manage PGA memory was introduced that avoids using the
SORT_AREA_SIZE, BITMAP_MERGE_AREA_SIZE, and HASH_AREA_SIZE parameters. It was introduced to address a few
issues:
Ease of use: Much confusion surrounded how to set the proper *_AREA_SIZE parameters.
There was also much confusion over how those parameters actually worked and how memory
was allocated.
Manual allocation was a “one-size-fits-all” method: Typically, as the number of users running
similar applications against a database went up, the amount of memory used for sorting and
hashing went up linearly as well. If 10 concurrent users with a sort area size of 1MB used
10MB of memory, 100 concurrent users would probably use 100MB, 1,000 would probably
use 1,000MB, and so on. Unless the DBA was sitting at the console continually adjusting the
sort/hash area size settings, everyone would pretty much use the same values all day long.
Consider the previous example, where you saw for yourself how the physical I/O to temp
decreased as the amount of RAM we allowed ourselves to use went up. If you run that example
for yourself, you will almost certainly see a decrease in response time as the amount of RAM
available for sorting increases. Manual allocation fixes the amount of memory to be used
for sorting at a more or less constant number, regardless of how much memory is actually
available. Automatic memory management allows us to use the memory when it is available; it
dynamically adjusts the amount of memory we use based on the workload.
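Enabling automatic PGA memory management comes down to two instance parameters. A minimal sketch, assuming ALTER SYSTEM privileges; the 512MB target is purely illustrative and should be sized for your system:

```sql
-- Give the instance an aggregate PGA target and let Oracle
-- apportion it among sessions; with AUTO, workareas ignore
-- the manual *_AREA_SIZE parameters entirely.
alter system set pga_aggregate_target = 512M;
alter system set workarea_size_policy = auto;
```

With these set, Oracle grows and shrinks each session's sort and hash areas dynamically based on the total workload, which is exactly the adaptive behavior the fixed manual settings could not provide.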