Using the services concept discussed in the previous sections, all applications in Table 15-6 are defined as services in the clustered database SCDB; that is, each application has its own service definition in the database.
1. In Table 15-6, note that TAPS is a high-priority service and is set up to start on instances SCDB_1, SCDB_2, and SCDB_3. If any of these instances fails, the service running on that instance migrates to instance SCDB_4; if all three preferred instances become unavailable, the service remains available on SCDB_4. In that situation SCDB_4 will be busy, with all services executing off this one instance. However, because the priority of TAPS is HIGH, it will receive a higher percentage of the resources compared to the other services running on the same node, except when TICKS is running (TICKS is discussed in Step 5 following). SCDB_4 will be shared by both TAPS and FIPS.
2. FIPS is a standard service and is set up to run on instance SCDB_4; if SCDB_4 fails, it will run on either SCDB_2 or SCDB_3, based on the current workload conditions. After failover, this service will not affect the existing services, especially TAPS, because TAPS runs at a higher priority.
3. SSKY is a standard scheduled batch job that runs during the night and on weekends. Because this application does not run continuously, it is configured to run on SCDB_4. From the previous step, FIPS is also configured on instance SCDB_4. Like FIPS, when instance SCDB_4 fails, SSKY will fail over to either SCDB_3 or SCDB_1, depending on the current workload conditions. As an alternative solution, FIPS could be set to fail over to SCDB_2, and SSKY could be set to fail over to SCDB_1.
4. GRUD is a low-priority triggered reporting job spawned from both the TAPS and FIPS services. Because of this architecture, it is set up to run across all three instances: SCDB_1, SCDB_2, and SCDB_3. If any of these nodes/instances fails, the surviving instances will continue to execute the service; in other words, no failover has been configured.
5. TICKS is a high-priority, seasonal application; it is executed twice a month. TICKS is configured to run on SCDB_3 and SCDB_4. If there are not sufficient resources to allow TICKS to complete on time, or if one of the preferred instances fails, it has two other spare instances: SCDB_2 and SCDB_1. A sample srvctl layout for all five services is sketched after this list.
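As a minimal sketch of how this layout could be registered, the following srvctl commands assume an administrator-managed database named SCDB with instances SCDB_1 through SCDB_4, using the preferred (-r) and available (-a) instance lists from Table 15-6; the option syntax may vary between Oracle releases, and the HIGH/STANDARD/LOW priorities themselves are enforced through Resource Manager consumer-group mappings rather than by srvctl:

# TAPS: high priority; preferred on SCDB_1, SCDB_2, SCDB_3; fails over to SCDB_4
srvctl add service -d SCDB -s TAPS -r SCDB_1,SCDB_2,SCDB_3 -a SCDB_4

# FIPS: standard priority; preferred on SCDB_4; fails over to SCDB_2 or SCDB_3
srvctl add service -d SCDB -s FIPS -r SCDB_4 -a SCDB_2,SCDB_3

# SSKY: night/weekend batch; preferred on SCDB_4; fails over to SCDB_1 or SCDB_3
srvctl add service -d SCDB -s SSKY -r SCDB_4 -a SCDB_1,SCDB_3

# GRUD: low priority; runs on SCDB_1, SCDB_2, and SCDB_3; no failover instance
srvctl add service -d SCDB -s GRUD -r SCDB_1,SCDB_2,SCDB_3

# TICKS: high priority, seasonal; preferred on SCDB_3 and SCDB_4; spares SCDB_1 and SCDB_2
srvctl add service -d SCDB -s TICKS -r SCDB_3,SCDB_4 -a SCDB_1,SCDB_2

# Start a service once it has been defined
srvctl start service -d SCDB -s TAPS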
Once the configuration and layout architecture have been defined, the RAC environment must be updated to reflect these settings. Whereas most of the network interface definitions and their mapping to the respective nodes are completed during the Oracle Clusterware configuration, the service-to-instance mapping is done using one of the three methods listed in the service framework section earlier.
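For example, once a service is registered, its instance mapping and current placement can be verified from the command line, and a service can be moved back to a preferred instance after a failover; the exact options and output vary by release:

# Show the preferred and available instances defined for the service
srvctl config service -d SCDB -s TAPS

# Show the instances on which the service is currently running
srvctl status service -d SCDB -s TAPS

# Relocate FIPS from SCDB_2 back to its preferred instance SCDB_4 after a failover
srvctl relocate service -d SCDB -s FIPS -i SCDB_2 -t SCDB_4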
Note: In the workshop, the example implements a distributed workload system using the requirements listed in Table 15-6, which uses server pools. However, it should be noted that it is not a requirement to have a policy-managed (server pools) database to implement Resource Manager.
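As an illustration of the server pool variant mentioned in the note, a policy-managed configuration maps services to server pools rather than to fixed instance lists; the pool name, sizes, and importance below are hypothetical, and because a given database is either administrator-managed or policy-managed, this form would replace the -r/-a definitions shown earlier rather than add to them:

# Create a server pool with a minimum of 2 servers, a maximum of 3, and an importance of 10
srvctl add srvpool -g taps_pool -l 2 -u 3 -i 10

# Attach a service to the pool; UNIFORM runs the service on every instance in the pool
srvctl add service -d SCDB -s TAPS -g taps_pool -c UNIFORM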