In this example, I will set the DB_16K_CACHE_SIZE and restart, since I'm using automatic shared memory
management and don't wish to set any of the other caches manually:
EODA@ORA12CR1> alter system set sga_target=300m scope=spfile;
System altered.
EODA@ORA12CR1> alter system set db_16k_cache_size = 16m scope=spfile;
System altered.
EODA@ORA12CR1> connect / as sysdba
Connected.
SYS@ORA12CR1> startup force
ORACLE instance started.
Total System Global Area  313159680 bytes
Fixed Size                  2287864 bytes
Variable Size             180356872 bytes
Database Buffers          125829120 bytes
Redo Buffers                4685824 bytes
Database mounted.
Database opened.
SYS@ORA12CR1> show parameter 16k

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_16k_cache_size                    big integer 16M
So, now I have another buffer cache set up: one to cache any blocks that are 16KB in size. The default pool will
consume the rest of the buffer cache space, as you can see by querying V$SGASTAT. These two buffer caches are
mutually exclusive; if one “fills up,” it can't use space in the other. This gives the DBA a very fine degree of control over
memory use, but it comes at a price. That price is complexity and management.
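For example, a query along these lines shows how the buffer cache space is divided (the LIKE filter is my own illustration; the exact NAME values in V$SGASTAT vary by release, so you may need to loosen or adjust it):

-- Illustrative only: list the buffer cache components of the SGA
select name, bytes
  from v$sgastat
 where name like '%buffer_cache%'
 order by name;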
These multiple block sizes were not intended as a performance or tuning feature (if you need multiple caches,
you have the default, keep, and recycle pools already), but rather came about in support of transportable
tablespaces—the ability to take formatted data files from one database and transport or attach them to another
database. They were implemented in order to take data files from a transactional system that was using an 8KB block
size and transport that information to a data warehouse using a 16KB or 32KB block size.
The multiple block sizes do serve a good purpose, however, in testing theories. If you want to see how your
database would operate with a different block size—how much space, for example, a certain table would consume if
you used a 4KB block instead of an 8KB block—you can now test that easily without having to create an entirely new
database.
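As a quick sketch of such a test (the tablespace name, table name, and datafile path here are hypothetical, and the matching cache—here DB_16K_CACHE_SIZE, as set above—must already be configured), you could create a tablespace with a nondefault block size and see how much space a copy of the data consumes in it:

-- Tablespace with a nondefault 16KB block size (hypothetical name and path)
create tablespace ts_16k
  datafile '/tmp/ts_16k.dbf' size 100m
  blocksize 16k;

-- Copy some data into it
create table t_16k tablespace ts_16k
as select * from all_objects;

-- Compare this against the same data loaded into a default 8KB tablespace
select segment_name, blocks, bytes
  from user_segments
 where segment_name = 'T_16K';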
You may also be able to use multiple block sizes as a very finely focused tuning tool for a specific set of segments,
by giving them their own private buffer pools. Or, in a hybrid system, transactional users could use one set of data
and reporting/warehouse users could query a separate set of data. The transactional data would benefit from the
smaller block sizes due to less contention on the blocks (fewer rows per block means fewer people, in general,
going after the same block at the same time) as well as better buffer cache utilization (users read into the cache
only the data they are interested in: the single row or small set of rows). The reporting/warehouse data, which might
be based on the transactional data, would benefit from the larger block sizes due in part to less block overhead
(it takes less storage overall) and larger logical I/O sizes, perhaps. And since reporting/warehouse data does not
see the same update contention as the transactional data, packing more rows into each block is not a concern there.
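A minimal sketch of that hybrid layout (all object names and datafile paths here are hypothetical) might keep the transactional table in a default 8KB tablespace and build the derived reporting copy in a 16KB one:

-- Transactional data stays in a default (8KB) block size tablespace
create tablespace ts_oltp
  datafile '/tmp/ts_oltp.dbf' size 100m;

-- Reporting copy lives in 16KB blocks (requires DB_16K_CACHE_SIZE, as set above)
create tablespace ts_report
  datafile '/tmp/ts_report.dbf' size 100m
  blocksize 16k;

create table orders
( order_id   number primary key,
  cust_id    number,
  amount     number,
  order_date date
) tablespace ts_oltp;

-- Periodically rebuilt summary for the reporting/warehouse users
create table orders_by_day
tablespace ts_report
as
select trunc(order_date) order_day,
       count(*)          order_count,
       sum(amount)       order_total
  from orders
 group by trunc(order_date);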
 