This section describes new features of the Oracle Database 10g utilities and provides pointers to additional information. For information about features that were introduced in earlier releases of Oracle Database, refer to the documentation for those releases.
Data Pump Export and Data Pump Import
The following features have been added for Oracle Database 10g Release 2 (10.2):
The ability to perform database subsetting. This is done by using the SAMPLE parameter on an export operation or by using the TRANSFORM=PCTSPACE parameter on an import operation (example commands appear after this list).
The ability to compress metadata before it is written to a dump file set.
See COMPRESSION.
The ability to encrypt column data on an export operation and then to access that data on an import operation.
See ENCRYPTION_PASSWORD in the Data Pump Export chapter for information about encrypting data on an export operation, and ENCRYPTION_PASSWORD in the Data Pump Import chapter for information about accessing that data during an import operation.
The ability to downgrade a database through use of the VERSION parameter.
See VERSION.
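The following command lines are a minimal sketch of how these parameters are specified. The hr connection, the dpump_dir1 directory object, the dump file names, the sampling and space percentages, and the password are illustrative assumptions, not required values.

expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_subset.dmp SAMPLE=30 COMPRESSION=METADATA_ONLY
expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp ENCRYPTION_PASSWORD=encpwd
expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_v101.dmp VERSION=10.1
impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_subset.dmp TRANSFORM=PCTSPACE:30
impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp ENCRYPTION_PASSWORD=encpwd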
Automatic Storage Management Command-Line Utility (ASMCMD)
ASMCMD is a command-line utility that you can use to easily view and manipulate files and directories within Automatic Storage Management (ASM) disk groups. It can list the contents of disk groups, perform searches, create and remove directories and aliases, display space utilization, and more.
See Chapter 20, "ASM Command-Line Utility (ASMCMD)" for detailed information about this utility and how to use it.
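As a brief illustration (the data disk group and the paths shown are assumptions for this sketch), an interactive ASMCMD session might look like the following:

asmcmd
ASMCMD> ls +data/orcl/datafile
ASMCMD> du +data/orcl
ASMCMD> mkdir +data/orcl/backup
ASMCMD> exit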
Data Pump Technology
Oracle Database 10g introduces the new Oracle Data Pump technology, which enables very high-speed movement of data and metadata from one database to another. This technology is the basis for Oracle's new data movement utilities, Data Pump Export and Data Pump Import.
See Chapter 1, "Overview of Oracle Data Pump" for more information.
Data Pump Export
Data Pump Export is a utility that makes use of Oracle Data Pump technology to unload data and metadata at high speeds into a set of operating system files called a dump file set. The dump file set can be moved to another system and loaded by the Data Pump Import utility.
Although the functionality of Data Pump Export (invoked with the expdp command) is similar to that of the original Export utility (exp), they are completely separate utilities.
See Chapter 2, "Data Pump Export" for more information.
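As an illustration only (the hr schema and the dpump_dir1 directory object are assumptions for this sketch), a schema-mode export could be invoked as follows:

expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_schema.dmp LOGFILE=hr_schema.log SCHEMAS=hr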
Data Pump Import
Data Pump Import is a utility for loading a Data Pump Export dump file set into a target system.
Although the functionality of Data Pump Import (invoked with the impdp command) is similar to that of the original Import utility (imp), they are completely separate utilities.
See Chapter 3, "Data Pump Import" for more information.
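Continuing the illustrative export shown earlier, the resulting dump file set could be loaded into a target system as follows; the system credentials and the REMAP_SCHEMA mapping are assumptions for this sketch:

impdp system/password DIRECTORY=dpump_dir1 DUMPFILE=hr_schema.dmp LOGFILE=hr_imp.log REMAP_SCHEMA=hr:hr_test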
Data Pump API
The Data Pump API provides a high-speed mechanism to move all or part of the data and metadata from one database to another. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.
The Data Pump API is implemented through a PL/SQL package, DBMS_DATAPUMP, that provides programmatic access to Data Pump data and metadata movement capabilities.
See Chapter 5, "The Data Pump API" for more information.
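For example, a minimal schema-mode export through the API might look like the following anonymous PL/SQL block. The DPUMP_DIR1 directory object, the dump file name, and the HR schema are assumptions made only for this sketch.

DECLARE
  h         NUMBER;          -- Data Pump job handle
  job_state VARCHAR2(30);    -- final state reported by WAIT_FOR_JOB
BEGIN
  -- Create a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- Name the dump file and the directory object it is written to
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_api.dmp', directory => 'DPUMP_DIR1');
  -- Restrict the job to the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''HR'')');
  -- Start the job and wait for it to reach a terminal state
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/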
Metadata API
The following features have been added or updated for Oracle Database 10g.
You can now use remap parameters, which enable you to modify an object by changing specific old attribute values to new values. For example, when you are importing data into a database, you can use the REMAP_SCHEMA parameter to change occurrences of schema name scott in a dump file set to schema name blake (a sketch appears at the end of this section).
All dictionary objects needed for a full export are supported.
You can request that a heterogeneous collection of objects be returned in creation order.
In addition to retrieving metadata as XML and creation DDL, you can now submit the XML to re-create the object.
See Chapter 18, "Using the Metadata API" for full descriptions of these features.
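The following anonymous block sketches the remap capability; the SCOTT schema, the EMP table, and the target schema name BLAKE are assumptions used only for illustration.

DECLARE
  h   NUMBER;   -- metadata retrieval handle
  th  NUMBER;   -- transform handle
  ddl CLOB;
BEGIN
  h := DBMS_METADATA.OPEN('TABLE');
  DBMS_METADATA.SET_FILTER(h, 'SCHEMA', 'SCOTT');
  DBMS_METADATA.SET_FILTER(h, 'NAME', 'EMP');
  -- Remap parameters such as REMAP_SCHEMA apply to the MODIFY transform
  th := DBMS_METADATA.ADD_TRANSFORM(h, 'MODIFY');
  DBMS_METADATA.SET_REMAP_PARAM(th, 'REMAP_SCHEMA', 'SCOTT', 'BLAKE');
  -- The DDL transform converts the remapped XML into creation DDL
  th := DBMS_METADATA.ADD_TRANSFORM(h, 'DDL');
  ddl := DBMS_METADATA.FETCH_CLOB(h);
  DBMS_METADATA.CLOSE(h);
END;
/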
External Tables
A new access driver, ORACLE_DATAPUMP, is now available. See Chapter 14, "The ORACLE_DATAPUMP Access Driver" for more information.
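For example (the def_dir1 directory object and the hr.employees table are assumptions for this sketch), the driver can unload query results into a dump file that can later be read as an external table:

CREATE TABLE emp_unload
  ORGANIZATION EXTERNAL
  (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY def_dir1
    LOCATION ('emp_unload.dmp')
  )
  AS SELECT employee_id, last_name, salary FROM hr.employees;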
LogMiner
The LogMiner utility, previously documented in the Oracle9i Database Administrator's Guide, is now documented in this guide. The new and changed LogMiner features for Oracle Database 10g are as follows:
The new DBMS_LOGMNR.REMOVE_LOGFILE() procedure removes log files from the list of those being analyzed. This subprogram replaces the REMOVEFILE option to the DBMS_LOGMNR.ADD_LOGFILE() procedure.
The new NO_ROWID_IN_STMT option for the DBMS_LOGMNR.START_LOGMNR procedure lets you filter out the ROWID clause from reconstructed SQL_REDO and SQL_UNDO statements.
Supplemental logging is enhanced as follows:
At the database level, there are two new options for identification key logging:
FOREIGN KEY - Supplementally logs all other columns of a row's foreign key if any column in the foreign key is modified.
ALL - Supplementally logs all the columns in a row (except for LOBs, LONGs, and ADTs) if any column value is modified.
At the table level, there are these new features:
Identification key logging is now supported (PRIMARY KEY, FOREIGN KEY, UNIQUE INDEX, and ALL).
The NO LOG option provides a way to prevent a column in a user-defined log group from being supplementally logged.
See Chapter 17, "Using LogMiner to Analyze Redo Log Files" for more information.
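The following SQL*Plus session sketches the new procedures and options; the redo log file names, the HR schema, and the table and column names are assumptions made only for illustration.

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);
-- Drop a file from the list without rebuilding it
EXECUTE DBMS_LOGMNR.REMOVE_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f');
-- Start LogMiner and omit the ROWID clause from reconstructed statements
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.NO_ROWID_IN_STMT);
SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'HR';
EXECUTE DBMS_LOGMNR.END_LOGMNR();

The supplemental logging enhancements correspond to statements such as the following:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG GROUP emp_grp (employee_id, salary NO LOG) ALWAYS;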