All directory objects are owned by the SYS user. If you're using a user account that has the DBA role granted to it,
then you have the requisite read/write privileges on any directory object. I usually perform Data Pump jobs with a
user that has the DBA role granted to it (so that I don't need to bother with granting access explicitly).
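If you want to confirm which directory objects exist and where they point, you can query the DBA_DIRECTORIES view; a minimal query (run as a DBA-privileged user) might look like this:
 
SQL> select owner, directory_name, directory_path from dba_directories;
 
Every row reports SYS as the owner, regardless of which user created the directory object.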
SECURITY ISSUES WITH THE OLD EXP UTILITY
The idea behind creating directory objects and then granting specific I/O access to the physical storage location
is that you can more securely control which users can read from and write to locations where they normally
wouldn't have permissions. With the legacy exp utility, any user with access to the tool can, by default, read or
write any file that the owner of the Oracle binaries (usually oracle) can access. It's conceivable that a malicious
non-oracle OS user could run the exp utility to purposely overwrite a critical database file. For example, the
following command can be run by any non-oracle OS user with execute access to the exp utility:
$ exp heera/foo file=/oradata04/SCRKDV12/users01.dbf
The exp process runs as the oracle OS user and therefore has read and write OS privileges on any oracle-owned
data files. In this exp example, if the users01.dbf file is a live database data file, it's overwritten and rendered
worthless. This can cause catastrophic damage to your database.
To prevent such issues, with Oracle Data Pump you first have to create a directory object that maps to a specific
OS directory and then additionally assign read and write privileges on that directory object per user, as sketched
in the example that follows. Thus, Data Pump doesn't have the security problems that exist with the old exp utility.
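A minimal sketch of those two steps, using the same directory name, path, and schema that appear in the export example later in this section, might look like this (run as a privileged user such as SYS):
 
SQL> create or replace directory dp_dir as '/oradump';
SQL> grant read, write on directory dp_dir to mv_maint;
 
Note that the OS path isn't validated when the directory object is created; it must exist and be writable by the oracle OS user when the Data Pump job actually runs.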
Step 3. Taking an Export
When the directory object and grants are in place, you can use Data Pump to export information from a database. The
simple example in this section shows how to export a table. Later sections in this chapter describe in detail the various
ways in which you can export data. The point here is to work through an example that will provide a foundation for
understanding more complex topics that follow.
As a non-SYS user, create a table and populate it with some data:
SQL> create table inv(inv_id number);
SQL> insert into inv values (123);
SQL> commit;
Next, as a non-SYS user, export the table. This example uses the previously created directory, named DP_DIR. Data
Pump uses the directory path specified by the directory object as the location on disk to which to write the dump file
and log file:
$ expdp mv_maint/foo directory=dp_dir tables=inv dumpfile=exp.dmp logfile=exp.log
The expdp utility creates a file named exp.dmp in the /oradump directory, containing the information required
to recreate the INV table and populate it with data as it was at the time the export was taken. Additionally, a log file
named exp.log is created in the /oradump directory, containing logging information associated with this export job.
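If you want to verify that the files were written, you can list them from the OS, using whatever path DP_DIR maps to (here, /oradump):
 
$ ls -l /oradump/exp.dmp /oradump/exp.log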
If you don't specify a dump file name, Data Pump creates a file named expdat.dmp. If a file named expdat.dmp
already exists in the directory, then Data Pump throws an error. If you don't specify a log file name, then Data Pump
creates one named export.log. If a log file named export.log already exists, then Data Pump overwrites it.
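For instance, a run that omits both parameters, assuming the same schema and directory object as above, would write expdat.dmp and export.log to /oradump:
 
$ expdp mv_maint/foo directory=dp_dir tables=inv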
 