Data Pump is a high-speed utility for transferring data and metadata from one database to another, even across different operating systems. Data Pump uses direct-path loading and direct-path unloading technologies, and it runs on the server (the earlier, similar tools exp and imp operated on the client side of a database session). You can use it to copy data from one schema to another, either between two databases or within a single database; to extract a logical copy of the entire database, a list of schemas, a list of tables, or a list of tablespaces to portable operating system files; and to transfer or extract the metadata (DDL statements) for a database, schema, or table.
It is invoked from the command-line programs expdp and impdp, or programmatically through the DBMS_DATAPUMP PL/SQL package.
EM Database Control provides a graphical user interface for Data Pump.
Initiate a Data Pump export by running the expdp program or by running a PL/SQL program that calls DBMS_DATAPUMP procedures.
The EXP_FULL_DATABASE role is required to export data from a schema other than your own.
Executing Data Pump Exports from the Command Line
The export program can operate in several modes:
| Mode | Description |
| --- | --- |
| Database mode | The entire database is exported to operating system files, including user accounts, public synonyms, roles, and profiles. |
| Schema mode | All data and metadata for a list of schemas are exported. |
| Table mode | Data and metadata for a list of tables are exported. |
| Tablespace mode | Both data and metadata for all objects in a tablespace list are extracted, as well as any objects dependent on those in the specified tablespace list. |
Therefore, if a table resides in your specified tablespace list, all its indexes are included whether or not they also reside in the specified tablespace list. In each of these modes, you can further specify that only data or only metadata be exported. The default is to export both data and metadata.
With some objects, such as indexes, only the metadata is exported; the actual internal structures contain physical addresses and are always rebuilt on import.
The files created by a Data Pump export are called dump files, and one or more of these files can be created during a single Data Pump export job. Multiple files are created if your Data Pump job has a parallel degree greater than one or if a single dump file exceeds the filesize parameter. All the export dump files from a single Data Pump export job are called a dump file set.
In the following sections, we will discuss how to execute the different modes of Data Pump exports.
A full database export requires the database user to have the EXP_FULL_DATABASE role. For example, to perform a full database export to files in the chap7a and chap7b directories, execute the following:
expdp system/secret full=y parallel=2
   dumpfile=chap7a:fulla%U.dmp,chap7b:fullb%U.dmp
   filesize=2G logfile=chap7:full.log
These parameters tell the Data Pump program to connect to the database as user SYSTEM with a password of SECRET. The full=y parameter tells Data Pump to perform a database mode export. Data Pump writes a series of dump files alternating between the chap7a and chap7b directories. (These are database directory objects.) The %U in the filename tells Data Pump to insert a sequential number (00, 01, 02, 03, and so on, up to 99) into each filename. Each file is filled to 2GB in size. Data Pump spawns up to two slave processes to parallelize the load. Finally, the log file is placed in the chap7 directory (also a database directory object) and has the filename full.log.
To perform a schema mode export, omit the full=y parameter and instead include a schemas=schema_list parameter. The schema list is a comma-delimited list of schemas. Alternatively, to export your own schema, omit both the full=y and schemas= parameters; Data Pump defaults to a schema mode export of your own schema. To export another user’s schema, you must have the EXP_FULL_DATABASE role.
For example, to Data Pump export the HR schema, run a command such as this (the dump and log file names are illustrative):

expdp system/secret schemas=hr
   dumpfile=chap7:hr.dmp logfile=chap7:hr.log
The dumpfile and logfile parameters work the same in all export modes—the database directory object is followed by a colon and then the filename. If your operating system is case sensitive, these names will be case sensitive as well.
To perform a table mode export, include a tables=table_list parameter and omit any full or schemas parameters. The table_list is a comma-delimited list of tables to export. These tables may be qualified with the owner using dot notation, such as HR.EMPLOYEES.
For example, to export the metadata only for the HR-owned jobs and job_history tables, run this:
expdp hr/hr dumpfile=chap7:job_tabs.dmp
   nologfile=y content=metadata_only
   tables=jobs,job_history
The parameter nologfile=y tells Data Pump not to write a log file to disk, although the log is still displayed on-screen. The content=metadata_only parameter tells Data Pump to export only the metadata, not the data in the tables. To export data only (no metadata), replace the metadata_only value with data_only.
A tablespace mode export requires that the database user have the EXP_FULL_DATABASE role. To export in tablespace mode, use the tablespaces=tablespace_list parameter to specify the tablespaces to be exported. The tablespace list is a comma-delimited list of tablespaces.
For example, to export all the objects, including their dependent objects, from the tablespace USERS, run a command such as this (the dump and log file names are illustrative):

expdp system/secret tablespaces=users
   dumpfile=chap7:users.dmp logfile=chap7:users.log
You can ask expdp for a full list of the supported parameters by specifying the parameter help=y. If your command line starts getting long, place some or all of the parameters in a file, and then direct expdp to read the file for additional parameters. Use parfile=directory:filename to tell expdp where to find the additional parameters. The directory parameter is a database directory object, and filename is the name of the parameter file.
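As a sketch of how a parameter file shortens a long command line, the earlier full database export could be driven from a file (the file name full.par is illustrative; the parameter values are those described for the full export example):

```
expdp system/secret parfile=chap7:full.par
```

where the file full.par contains:

```
full=y
parallel=2
dumpfile=chap7a:fulla%U.dmp,chap7b:fullb%U.dmp
filesize=2G
logfile=chap7:full.log
```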
| Parameter | Description |
| --- | --- |
| full=y | Specifies a database mode export. |
| schemas=schema_list | Specifies a schema mode export, where schema_list is a comma-delimited list of schemas to export. |
| tables=table_list | Specifies a table mode export, where table_list is a comma-delimited list of tables to export. |
| tablespaces=tablespace_list | Specifies a tablespace mode export, where tablespace_list is a comma-delimited list of tablespaces to export. |
| content=content_option | Specifies whether data, metadata, or both are exported. Valid values are DATA_ONLY (data only), METADATA_ONLY (metadata only), or the default ALL (both). |
| network_link=db_link | Specifies that a remote database accessed via the database link db_link should be used as the export source. |
| dumpfile=dir:file | Specifies the dump file location and name. dir is a database directory object; file is the filename. If the filename includes a %U substitution variable, the database substitutes a two-digit file sequence number starting with 00. |
| filesize=size_limit | Specifies the maximum size of each dump file. Can be specified in bytes, kilobytes, megabytes, or gigabytes; the default is bytes. |
| logfile=dir:file | Specifies the log file location and name. dir is a database directory object; file is the filename. |
| directory=dir | Specifies the file location to use for both the dump file and log file. dir is a database directory object. |
| nologfile=y | Specifies that no log file should be written. |
| job_name=identifier | Specifies a name for the export job. This name is visible in data dictionary views. The default is system generated. |
| parallel=degree | Specifies the maximum number of active threads/processes operating on behalf of the export. The default is 1. |
| parfile=dir:file | Specifies the location and name of the parameter file that Data Pump export should use. dir is a database directory object; file is the filename. |
The DBMS_DATAPUMP PL/SQL package provides a programmatic interface to Data Pump. This mechanism is ideal for scheduling export jobs from the database scheduler. Setting up a Data Pump export using the DBMS_DATAPUMP PL/SQL package is a little more verbose than the standalone program, but provides greater functionality and control.
The basic flow of a PL/SQL-initiated Data Pump session is the following:
1. Obtain a handle to a Data Pump session.
2. Define the dump and log files.
3. Define any filtering conditions, such as a list of schemas or tables to include or exclude.
4. Launch the Data Pump session.
5. Disconnect from the session.
The following example exports a schema mode of the user HR:
DECLARE
   h1 NUMBER;  -- handle for the Data Pump session
BEGIN
   -- Obtain a handle to an export Data Pump session
   h1 := dbms_datapump.open
         (operation => 'EXPORT'         -- export not import
         ,job_mode  => 'SCHEMA');       -- schema mode
--       ,job_mode  => 'FULL');         -- database mode
--       ,job_mode  => 'TABLE');        -- table mode
--       ,job_mode  => 'TABLESPACE');   -- tablespace mode

   -- define the log file
   dbms_datapump.add_file(
       handle    => h1        -- from the OPEN call
      ,filename  => 'hr.out'  -- file name
      ,directory => 'CHAP7'   -- database directory object
      ,filetype  => dbms_datapump.ku$_file_type_log_file);

   -- define the dump file
   dbms_datapump.add_file(
       handle    => h1        -- from the OPEN call
      ,filename  => 'hr.dmp'  -- file name
      ,directory => 'CHAP7'   -- database directory object
      ,filetype  => dbms_datapump.ku$_file_type_dump_file);

   -- define the schemas to export
   dbms_datapump.metadata_filter(
       handle => h1                     -- from the OPEN call
      ,name   => 'SCHEMA_EXPR'         -- schema name filter
--    ,name   => 'INCLUDE_NAME_EXPR'   -- table name filter
--    ,name   => 'EXCLUDE_NAME_EXPR'   -- table name filter
--    ,name   => 'TABLESPACE_EXPR'     -- tablespace name filter
      ,value  => 'IN(''HR'')');        -- name list

   -- invoke Data Pump
   dbms_datapump.start_job(handle => h1);  -- from the OPEN call

   -- run the job in the background
   dbms_datapump.detach(handle => h1);
END;
/
The code in this PL/SQL example exports a schema. If you want to run an export in a different mode, change the comments on the JOB_MODE parameter in the OPEN procedure.
Use 'FULL' for a database mode export, 'TABLE' for a table mode export, or 'TABLESPACE' for a tablespace mode export. A full database export does not require the METADATA_FILTER procedure call. When you execute this PL/SQL block, the job scheduler initiates the Data Pump job. You can monitor it through the data dictionary views DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS or by examining the log file.
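For example, a quick status check from SQL*Plus might look like this (only a few of the view's columns are shown):

```
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;
```

A running job shows a STATE of EXECUTING; once the job completes, its row disappears from the view.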
You can use the Enterprise Manager (EM) Database Control as a menu-driven interface to Data Pump export jobs. This program steps you through several options and then shows you the PL/SQL code that it will execute. Therefore, you can also use the EM Database Control to learn more about using the PL/SQL interface.
When you submit the job, it is scheduled to run from the job scheduler, so you won’t see the results immediately.
Import is the counterpart to export. Export extracts data and metadata from your database, and import loads this extracted data into the same database or into a different database, optionally transforming metadata along the way. These transformations let you, for example, copy tables from one schema to another or remap a tablespace from one database to another.
Data Pump imports, like exports, can be initiated from a standalone program, from a PL/SQL program using DBMS_DATAPUMP, or from the EM Database Control. In this section, you will learn how to use these Data Pump interfaces to import data or metadata into your database.
The Data Pump import program, impdp, has several modes of operation, including full, schema, table, and tablespace. In full mode, the entire content of an export file set is loaded. In a schema mode import, all content for a list of schemas in the specified file set is loaded; the file set for a schema mode import can come from either a database mode or a schema mode export. With a table mode import, only the specified tables and their dependent objects are loaded from the export file set. With a tablespace mode import, all objects in the export file set that were in the specified tablespace list are loaded.
With all these modes, the source can be a live database instead of a set of export files.
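For example, to load the HR schema directly from a remote source database using the network_link parameter, an invocation along these lines can be used (the database link name source_db is illustrative):

```
impdp system/secret schemas=hr network_link=source_db
```

In this network mode, the data moves directly between the two databases and no intermediate dump file set is written.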
Export to Import Modes