The application issues database calls through the JDBC driver. The JDBC replay driver remembers each call in
a request and submits the calls to the database. In the case of a failure, the JDBC replay driver uses the Transaction
Guard feature to determine the global transaction state. If the transaction has not committed, the JDBC replay
driver replays the captured calls. After executing all database calls, the replay driver issues a commit and terminates
replay mode.
To support Application Continuity, the service must be created with a few service attributes set, namely failovertype,
failoverretry, failoverdelay, replay_init_time, etc. The following command shows an example of service creation.
Notice that failovertype is set to TRANSACTION; this is an additional failovertype value introduced in version 12c.
$ srvctl add service -db orcl12 -service po -preferred oel6vm1 -available oel6vm2 \
-commit_outcome TRUE -retention 86400 -failovertype TRANSACTION \
-failoverretry 10 -failoverdelay 5 -replay_init_time 1200
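After adding the service, its attributes can be verified and the service started with srvctl. This is a minimal sketch using the database and service names from the example above; adjust them for your environment:

```shell
# Display the configured attributes of the service created above,
# including failovertype, failoverretry, and commit_outcome.
srvctl config service -db orcl12 -service po

# Start the service on its preferred instance before clients connect.
srvctl start service -db orcl12 -service po
```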
Not all application workloads can be safely replayed. For example, if the application uses autonomous
transactions, then replaying the calls can lead to duplicate execution of autonomous transactions. Applications must
be carefully designed to support Application Continuity.
Policy-Managed Databases
Traditionally, Clusterware resources (databases, services, listeners, etc.) have been managed by database administrators,
and this type of management is known as administrator-managed databases. Version 11.2 introduced policy-managed
databases. This feature is useful in a cluster with numerous nodes (12 or more) supporting many different databases and
applications.
With a policy-managed database, you create server pools, assign servers to the server pools, and define policies for the
server pools. Depending upon the policies defined, servers are moved into and out of server pools. For example, you
can define a policy such that the online server pool has higher priority during the daytime and the batch server pool
has higher priority at night. Clusterware manages the servers in the server pools so that resources are allocated to
match the workload definitions.
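As a sketch of how such pools might be created, the following srvctl commands define two pools; the pool names, sizes, and importance values here are illustrative, not taken from the text:

```shell
# Create two server pools with illustrative sizes; -importance ranks the
# pools when there are not enough servers to satisfy every pool's -min.
srvctl add serverpool -serverpool online -min 2 -max 4 -importance 10
srvctl add serverpool -serverpool batch  -min 1 -max 3 -importance 5

# List the pools and their current server assignments.
srvctl status serverpool -detail
```

Clusterware reassigns servers between these pools as servers join or leave the cluster, honoring each pool's minimum size and importance.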
By default, two server pools are created: free and generic. The free server pool is a placeholder for
all new servers; as new server pools are created, servers are automatically reassigned from the free server pool to
the new server pools. The generic server pool hosts pre-11.2 databases and administrator-managed databases.
You can associate applications with server pools, and a database is also an application from the Clusterware perspective.
An application can be defined as singleton or uniform. If an application is defined as singleton, then that application
runs on only one server in the server pool; if defined as uniform, then that application runs on all servers in the server pool.
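For example, a policy-managed service can be declared singleton or uniform when it is created. The service and pool names below are illustrative:

```shell
# A singleton service runs on exactly one server of its pool at a time.
srvctl add service -db orcl12 -service rpt -serverpool batch -cardinality SINGLETON

# A uniform service runs on every server currently in its pool.
srvctl add service -db orcl12 -service oltp -serverpool online -cardinality UNIFORM
```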
Temporary Tablespaces
Temporary tablespaces in RAC require special attention, as they are shared between the instances. Temporary
tablespaces are divided into extents, and each instance caches a subset of those extents in its SGA. When a process
allocates space in the temporary tablespace, it allocates space from the extents cached by the current instance.
The dynamic performance view gv$temp_extent_pool shows how temporary tablespace extents are cached.
Instances try to cache extents equally from all files of a temporary tablespace. For example, in one query of this
view, approximately 4,000 extents from each of files 6, 7, 8, and 9 were cached by every instance.5 So, you should
create temporary tablespaces with as many temp files as there are instances. In a nutshell, extents are cached from
all temporary files, thereby spreading the workload among the temporary files of a temporary tablespace.
5 Only partial output is shown. Files 1 through 5 also exhibit similar caching behavior, but extents in use were 0 for those files when
the view was queried. Thus, the output of those files is not shown.
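The caching behavior described above can be observed by querying gv$temp_extent_pool, for example from sqlplus; the connect string below is a placeholder and assumes a privileged local connection:

```shell
# Summarize cached and in-use temporary extents per instance and temp file.
# "/ as sysdba" is a placeholder connection; adjust for your environment.
sqlplus -s / as sysdba <<'EOF'
SELECT inst_id, file_id, extents_cached, extents_used
FROM   gv$temp_extent_pool
ORDER  BY inst_id, file_id;
EOF
```

Roughly equal extents_cached values across instances for each file indicate that caching is balanced, as the text describes.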