In this example all of the archived redo logs have just been backed up, and there is a redundant backup piece as
well, leading to some reclaimable space. If needed, that space is freed automatically.
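Should you want to verify how much space is actually reclaimable, the view v$recovery_area_usage breaks Fast Recovery Area usage down by file type. A simple query, assuming a connection to the root container, could look like this:
SYS@CDB1> select file_type, percent_space_used, percent_space_reclaimable from v$recovery_area_usage;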
Flashback logs, which cannot be managed outside the Fast Recovery Area, are always managed by Oracle. If
space pressure occurs, these logs are among the first to be reused or deleted, even if that means you cannot meet
your flashback retention target. If you rely on flashback logs to rewind the database to a point in time, you need
to set a guaranteed restore point. In that case you should ensure that there is actually enough space in the FRA to
accommodate all the required archived logs and flashback logs; otherwise you will run out of space and your
database will pause until more space is provided.
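Creating a guaranteed restore point is a single statement; the restore point name before_upgrade below is just an example:
SYS@CDB1> create restore point before_upgrade guarantee flashback database;
Restore point created.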
When provisioning new databases you should strongly consider the use of the Fast Recovery Area for the reasons
just outlined. The extra work to enable it is marginal, essentially setting two initialization parameters, and the
maintenance benefit is enormous. As an added bonus you standardize your on-disk backup location.
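For reference, the two parameters are db_recovery_file_dest_size and db_recovery_file_dest, and the size limit must be set before the destination. The 50 GB limit and the /u01/fra location in this sketch are placeholders for values suitable to your environment:
SYS@CDB1> alter system set db_recovery_file_dest_size = 50G scope=both;
System altered.
SYS@CDB1> alter system set db_recovery_file_dest = '/u01/fra' scope=both;
System altered.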
Logical backups
In addition to using RMAN it is possible to take logical backups. These backups are taken using the expdp (Export
Data Pump) utility. The biggest downside to logical backups is that you cannot perform a complete restore
with them. In other words, restoring a Data Pump export file requires a database that is up and running. The
effort to create a shell database and then load the data into it is usually too time consuming for real-life recovery
situations. Data Pump exports are, however, ideally suited for developers to back up changes to their own schemas
in a development environment, keeping a reference of code before it was changed. Unlike its predecessor, Export
Data Pump creates the dump file on the database server, not on the client. Once a Data Pump export has been
taken you should ensure that it is subsequently backed up to disk as well. It is advisable to choose unique file names,
possibly with a time stamp in the name.
Taking a logical backup is quite simple, but needs a little preparation during the database build. First, you
need access to a directory object. The easiest way to create one is to do so during the build. If a directory has
not been created yet, you can do so at any time as shown in the code example below.
SYSTEM@PDB1> create directory EXP_DIR as '/u01/oradata/CDB1/pdb1';
Directory created.
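Should the export be performed by a user other than the owner of the directory object, that user additionally needs read and write privileges on it. The account name DEVUSER below is purely illustrative:
SYSTEM@PDB1> grant read, write on directory EXP_DIR to devuser;
Grant succeeded.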
Note how the directory is created in the PDB. In the next step you can define which elements of the PDB you want
to export. A convenient way to do so is to use a parameter file. This file contains name=value pairs and can be used
to store the configuration for specific jobs. An example parameter file could have the following contents to export the
complete metadata of a PDB:
content=metadata_only
directory=EXP_DIR
full=y
job_name=daily_metadata_export
The log file name and dump file name have deliberately been left out of the parameter file: they can be passed
dynamically at run time to avoid overwriting existing files. A purge job needs to be included in the daily export task
to prevent the disk from filling up. The find command is a great tool for identifying files older than a given number
of days, and it can at the same time be instructed to remove the files it finds.
[oracle@server1]$ expdp /@pdb1 parfile=exp_pdb1.par \
> dumpfile=exp_pdb1_$(date +%F).dmp logfile=exp_pdb1_$(date +%F).log
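The corresponding purge could be scripted with find as just described; the seven-day retention and the directory path below mirror the earlier examples and are assumptions to adapt to your environment:
[oracle@server1]$ find /u01/oradata/CDB1/pdb1 \( -name 'exp_pdb1_*.dmp' -o -name 'exp_pdb1_*.log' \) \
> -mtime +7 -exec rm {} \;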
 