Oracle® Streams Replication Administrator's Guide 10g Release 2 (10.2) Part Number B14228-02
This chapter contains instructions for performing instantiations in a Streams replication environment. Database objects must be instantiated at a destination database before changes to these objects can be replicated.
If you use the DBMS_STREAMS_ADM
package to create rules for a capture process, then any objects referenced in the system-created rules are prepared for instantiation automatically. If you use the DBMS_RULE_ADM
package to create rules for a capture process, then you must prepare the database objects referenced in these rules for instantiation manually. In this case, you should prepare a database object for instantiation after a capture process has been configured to capture changes to the database object.
The following procedures in the DBMS_CAPTURE_ADM
package prepare database objects for instantiation:
PREPARE_TABLE_INSTANTIATION
prepares a single table for instantiation.
PREPARE_SCHEMA_INSTANTIATION
prepares for instantiation all of the database objects in a schema and all database objects added to the schema in the future.
PREPARE_GLOBAL_INSTANTIATION
prepares for instantiation all of the objects in a database and all objects added to the database in the future.
If you run one of these procedures while a long-running transaction is modifying one or more database objects being prepared for instantiation, then the procedure waits until the long-running transaction is complete before it records the ignore SCN for the objects, which is the SCN below which changes to an object cannot be applied at destination databases. Query the V$STREAMS_TRANSACTION
dynamic performance view to monitor long-running transactions being processed by a capture process or apply process.
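For example, a query such as the following lists the open transactions that each capture process and apply process is currently working through. This is a minimal sketch; the column list shown is an assumption, so adjust it to the columns available in your release:
SELECT STREAMS_NAME,
       STREAMS_TYPE,
       XIDUSN || '.' || XIDSLT || '.' || XIDSQN AS TRANSACTION_ID
  FROM V$STREAMS_TRANSACTION;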
In addition, these procedures can enable supplemental logging for any primary key, unique key, bitmap index, and foreign key columns, or for all columns, in the tables that are being prepared for instantiation. Use the supplemental_logging
parameter in each of these procedures to specify the columns for which supplemental logging is enabled.
The following sections contain examples that prepare tables for instantiation and specify different options for supplemental logging:
Preparing the Database Objects in a Schema for Instantiation
Preparing All of the Database Objects in a Database for Instantiation
To prepare the hr.regions
table for instantiation and enable supplemental logging for any primary key, unique key, bitmap index, and foreign key columns in the table, run the following procedure:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name           => 'hr.regions',
    supplemental_logging => 'keys');
END;
/
The default value for the supplemental_logging
parameter is keys
. Therefore, if this parameter is not specified, then supplemental logging is enabled for any primary key, unique key, bitmap index, and foreign key columns in the table that is being prepared for instantiation.
To prepare the database objects in the hr
schema for instantiation and enable supplemental logging for all of the columns in the tables in the hr
schema, run the following procedure:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name          => 'hr',
    supplemental_logging => 'all');
END;
/
After running this procedure, supplemental logging is enabled for all of the columns in the tables in the hr
schema and for all of the columns in the tables added to the hr
schema in the future.
To prepare all of the database objects in a database for instantiation, run the following procedure:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION(
    supplemental_logging => 'none');
END;
/
Because none
is specified for the supplemental_logging
parameter, this procedure does not enable supplemental logging for any columns. However, you can specify supplemental logging manually using an ALTER
TABLE
or ALTER
DATABASE
statement.
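For example, statements such as the following enable supplemental logging manually. The use of the hr.regions table here is only an illustration:
-- Table-level: log all columns of hr.regions when a row in the table changes.
ALTER TABLE hr.regions ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Database-level: log primary key and unique key columns for every table.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;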
The following procedures in the DBMS_CAPTURE_ADM
package abort preparation for instantiation:
ABORT_TABLE_INSTANTIATION
reverses the effects of PREPARE_TABLE_INSTANTIATION
and removes any supplemental logging enabled by the PREPARE_TABLE_INSTANTIATION
procedure.
ABORT_SCHEMA_INSTANTIATION
reverses the effects of PREPARE_SCHEMA_INSTANTIATION
and removes any supplemental logging enabled by the PREPARE_SCHEMA_INSTANTIATION
and PREPARE_TABLE_INSTANTIATION
procedures.
ABORT_GLOBAL_INSTANTIATION
reverses the effects of PREPARE_GLOBAL_INSTANTIATION
and removes any supplemental logging enabled by the PREPARE_GLOBAL_INSTANTIATION
, PREPARE_SCHEMA_INSTANTIATION
, and PREPARE_TABLE_INSTANTIATION
procedures.
These procedures remove data dictionary information related to the potential instantiation of the relevant database objects.
For example, to abort the preparation for instantiation of the hr.regions
table, run the following procedure:
BEGIN
  DBMS_CAPTURE_ADM.ABORT_TABLE_INSTANTIATION(
    table_name => 'hr.regions');
END;
/
You can instantiate database objects in a Streams environment in the following ways:
You can use Oracle Data Pump, transportable tablespaces, and the original Export/Import utilities to instantiate individual database objects, schemas, or an entire database. You can use RMAN to instantiate the database objects in a tablespace or tablespace set or to instantiate an entire database.
Note:
You can use the following procedures in the DBMS_STREAMS_ADM
package to configure Streams replication: MAINTAIN_GLOBAL
, MAINTAIN_SCHEMAS
, MAINTAIN_SIMPLE_TTS
, MAINTAIN_TABLES
, and MAINTAIN_TTS
. If you use one of these procedures, then instantiation is performed automatically for the appropriate database objects. Oracle recommends using one of these procedures to configure replication.
See Also:
"Preparing Database Objects for Instantiation at a Source Database"
Chapter 6, "Simple Streams Replication Configuration" for information about the procedures for configuring replication
The example in this section describes the steps required to instantiate objects in a Streams environment using Oracle Data Pump export/import. This example makes the following assumptions:
You want to capture changes to all of the database objects in the hr
schema at a source database and apply these changes at a separate destination database.
The hr
schema exists at a source database but does not exist at a destination database. For the purposes of this example, you can drop the hr
user at the destination database using the following SQL statement:
DROP USER hr CASCADE;
The Data Pump import re-creates the user and the user's database objects at the destination database.
You have configured a Streams administrator named strmadmin
at both the source database and the destination database. At each database, the Streams administrator is granted the DBA
role.
Note:
The example in this section uses the command line Data Pump utility. You can also use the DBMS_DATAPUMP
package for Streams instantiations.
See Also:
Oracle Streams Concepts and Administration for information about configuring a Streams administrator
Oracle Database Utilities for more information about Data Pump
Part V, "Sample Replication Environments" for examples that use the DBMS_DATAPUMP
package for Streams instantiations
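As a rough sketch of the DBMS_DATAPUMP alternative to the command line steps that follow, a schema-mode export of the hr schema might look similar to the following PL/SQL block. The job name, dump file name, directory object, and SCN value are assumptions taken from this example:
DECLARE
  handle    NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job; the job name is hypothetical.
  handle := DBMS_DATAPUMP.OPEN(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => 'HR_INST_EXPORT');
  -- Write the dump file to the directory object created in Step 1.
  DBMS_DATAPUMP.ADD_FILE(
    handle    => handle,
    filename  => 'hr_schema_dp.dmp',
    directory => 'DPUMP_DIR');
  -- Export only the hr schema.
  DBMS_DATAPUMP.METADATA_FILTER(
    handle => handle,
    name   => 'SCHEMA_EXPR',
    value  => 'IN (''HR'')');
  -- Make the export consistent as of the SCN noted in Step 3.
  DBMS_DATAPUMP.SET_PARAMETER(
    handle => handle,
    name   => 'FLASHBACK_SCN',
    value  => 876606);
  DBMS_DATAPUMP.START_JOB(handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle, job_state);
  DBMS_OUTPUT.PUT_LINE('Export job state: ' || job_state);
END;
/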
Given these assumptions, complete the following steps to instantiate the hr
schema using Data Pump export/import:
While connected in SQL*Plus to the source database as the Streams administrator, create a directory object to hold the export dump file and export log file:
CREATE DIRECTORY DPUMP_DIR AS '/usr/dpump_dir';
While connected as the Streams administrator strmadmin
at the source database, prepare the database objects in the hr
schema for instantiation. You can complete this step in one of the following ways:
Add rules for the hr
schema to the positive rule set for a capture process using a procedure in the DBMS_STREAMS_ADM
package. If the capture process is a local capture process or a downstream capture process with a database link to the source database, then the procedure that you run prepares the objects in the hr
schema for instantiation automatically.
For example, the following procedure adds rules to the positive rule set of a capture process named strm01_capture
and prepares the hr
schema, and all of its objects, for instantiation:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name    => 'hr',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',
    queue_name     => 'strm01_queue',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
END;
/
If the specified capture process does not exist, then this procedure creates it.
Add rules for the hr
schema to the positive rule set for a capture process using a procedure in the DBMS_RULE_ADM
package, and then prepare the objects for instantiation manually by specifying the hr
schema when you run the PREPARE_SCHEMA_INSTANTIATION
procedure in the DBMS_CAPTURE_ADM
package:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name => 'hr');
END;
/
Make sure you add the rules to the positive rule set for the capture process before you prepare the database objects for instantiation.
While still connected to the source database as the Streams administrator, determine the current system change number (SCN) of the source database:
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
The SCN value returned by this query is specified for the FLASHBACK_SCN
Data Pump export parameter in Step 4. Because the hr
schema includes foreign key constraints between tables, the FLASHBACK_SCN
export parameter, or a similar export parameter, must be specified during export. In this example, assume that the query returned 876606
.
After you perform this query, make sure no DDL changes are made to the objects being exported until after the export is complete.
On a command line, use Data Pump to export the hr
schema at the source database.
Perform the export by connecting as an administrative user who is granted EXP_FULL_DATABASE
role. This user also must have READ
and WRITE
privilege on the directory object created in Step 1. This example connects as the Streams administrator strmadmin
.
The following is an example Data Pump export command:
expdp strmadmin/strmadminpw SCHEMAS=hr DIRECTORY=DPUMP_DIR DUMPFILE=hr_schema_dp.dmp FLASHBACK_SCN=876606
See Also:
Oracle Database Utilities for information about performing a Data Pump export
While connected in SQL*Plus to the destination database as the Streams administrator, create a directory object to hold the import dump file and import log file:
CREATE DIRECTORY DPUMP_DIR AS '/usr/dpump_dir';
Transfer the Data Pump export dump file hr_schema_dp.dmp
to the destination database. You can use the DBMS_FILE_TRANSFER
package, binary FTP, or some other method to transfer the file to the destination database. After the file transfer, the export dump file should reside in the directory that corresponds to the directory object created in Step 5.
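For example, if a database link from the source database to the destination database exists (the link name dest.net used here is an assumption), then a DBMS_FILE_TRANSFER call similar to the following copies the dump file between the two directory objects:
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DPUMP_DIR',
    source_file_name             => 'hr_schema_dp.dmp',
    destination_directory_object => 'DPUMP_DIR',
    destination_file_name        => 'hr_schema_dp.dmp',
    destination_database         => 'dest.net');  -- assumed database link name
END;
/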
On a command line at the destination database, use Data Pump to import the export dump file hr_schema_dp.dmp
. Make sure no changes are made to the tables in the schema being imported at the destination database until the import is complete. Performing the import automatically sets the instantiation SCN for the hr
schema and all of its objects at the destination database.
Perform the import by connecting as an administrative user who is granted IMP_FULL_DATABASE
role. This user also must have READ
and WRITE
privilege on the directory object created in Step 5. This example connects as the Streams administrator strmadmin
.
The following is an example import command:
impdp strmadmin/strmadminpw SCHEMAS=hr DIRECTORY=DPUMP_DIR DUMPFILE=hr_schema_dp.dmp
Note:
Any table supplemental log groups for the tables exported from the export database are retained when the tables are imported at the import database. You can drop these supplemental log groups if necessary.
See Also:
Oracle Database Utilities for information about performing a Data Pump import
The examples in this section describe the steps required to instantiate the database objects in a tablespace using transportable tablespace or RMAN. These instantiation options usually are faster than export/import. The following examples instantiate the database objects in a tablespace:
"Instantiating Objects Using Transportable Tablespace" uses the transportable tablespace feature to complete the instantiation. Data Pump exports the tablespace at the source database, and imports the tablespace at the destination database. The tablespace is read-only during the export.
"Instantiating Objects Using Transportable Tablespace from Backup with RMAN" uses the RMAN TRANSPORT
TABLESPACE
command to generate a Data Pump export dump file and datafiles for a tablespace or set of tablespaces at the source database while the tablespace or tablespaces remain online. Either Data Pump import or the ATTACH_TABLESPACES
procedure in the DBMS_STREAMS_TABLESPACE_ADM
package can add the tablespace or tablespaces to the destination database.
These examples instantiate a tablespace set that includes a tablespace called jobs_tbs
, and a tablespace called regions_tbs
. To run the examples, connect to the source database as an administrative user and create the new tablespaces:
CREATE TABLESPACE jobs_tbs DATAFILE '/usr/oracle/dbs/jobs_tbs.dbf' SIZE 5 M;

CREATE TABLESPACE regions_tbs DATAFILE '/usr/oracle/dbs/regions_tbs.dbf' SIZE 5 M;
Place the new table hr.jobs_transport
in the jobs_tbs
tablespace:
CREATE TABLE hr.jobs_transport TABLESPACE jobs_tbs AS SELECT * FROM hr.jobs;
Place the new table hr.regions_transport
in the regions_tbs
tablespace:
CREATE TABLE hr.regions_transport TABLESPACE regions_tbs AS SELECT * FROM hr.regions;
Both of the examples make the following assumptions:
You want to capture all of the changes to the hr.jobs_transport
and hr.regions_transport
tables at a source database and apply these changes at a separate destination database.
The hr.jobs_transport
table exists at a source database, and a single self-contained tablespace named jobs_tbs
contains the table. The jobs_tbs
tablespace is stored in a single datafile named jobs_tbs.dbf
.
The hr.regions_transport
table exists at a source database, and a single self-contained tablespace named regions_tbs
contains the table. The regions_tbs
tablespace is stored in a single datafile named regions_tbs.dbf
.
The jobs_tbs
and regions_tbs
tablespaces do not contain data from any other schemas.
The hr.jobs_transport
table, the hr.regions_transport
table, the jobs_tbs
tablespace, and the regions_tbs
tablespace do not exist at the destination database.
You have configured a Streams administrator named strmadmin
at both the source database and the destination database, and you have granted this Streams administrator the DBA
role at both databases.
See Also:
Oracle Streams Concepts and Administration for information about configuring a Streams administrator
This example uses transportable tablespace to instantiate the database objects in a tablespace set. In addition to the assumptions listed in "Instantiating Objects in a Tablespace Using Transportable Tablespace or RMAN", this example makes the following assumptions:
The Streams administrator at the source database is granted the EXP_FULL_DATABASE
role to perform the transportable tablespaces export. The DBA
role is sufficient because it includes the EXP_FULL_DATABASE
role. In this example, the Streams administrator performs the transportable tablespaces export.
The Streams administrator at the destination database is granted the IMP_FULL_DATABASE
role to perform the transportable tablespaces import. The DBA
role is sufficient because it includes the IMP_FULL_DATABASE
role. In this example, the Streams administrator performs the transportable tablespaces import.
See Also:
Oracle Database Administrator's Guide for more information about using transportable tablespaces and for information about limitations that might apply
Complete the following steps to instantiate the database objects in the jobs_tbs and regions_tbs
tablespaces using transportable tablespace:
While connected in SQL*Plus to the source database as the Streams administrator strmadmin
, create a directory object to hold the export dump file and export log file:
CREATE DIRECTORY TRANS_DIR AS '/usr/trans_dir';
While connected as the Streams administrator strmadmin
at the source database, prepare the hr.jobs_transport
and hr.regions_transport
tables for instantiation. You can complete this step in one of the following ways:
Add rules for the hr.jobs_transport
and hr.regions_transport
tables to the positive rule set for a capture process using a procedure in the DBMS_STREAMS_ADM
package. If the capture process is a local capture process or a downstream capture process with a database link to the source database, then the procedures that you run prepare these tables for instantiation automatically.
For example, the following procedure adds rules to the positive rule set of a capture process named strm01_capture
and prepares the hr.jobs_transport
table:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.jobs_transport',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',
    queue_name     => 'strmadmin.strm01_queue',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
END;
/
The following procedure adds rules to the positive rule set of a capture process named strm01_capture
and prepares the hr.regions_transport
table:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.regions_transport',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',
    queue_name     => 'strmadmin.strm01_queue',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
END;
/
Add rules for the hr.jobs_transport
and hr.regions_transport
tables to the positive rule set for a capture process using a procedure in the DBMS_RULE_ADM
package, and then prepare these tables for instantiation manually by specifying each table when you run the PREPARE_TABLE_INSTANTIATION
procedure in the DBMS_CAPTURE_ADM
package:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.jobs_transport');
END;
/

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.regions_transport');
END;
/
Make sure you add the rules to the positive rule set for the capture process before you prepare the database objects for instantiation.
While connected as the Streams administrator at the source database, make the tablespaces that contain the objects you are instantiating read-only. In this example, the jobs_tbs
and regions_tbs
tablespaces contain the database objects.
ALTER TABLESPACE jobs_tbs READ ONLY;
ALTER TABLESPACE regions_tbs READ ONLY;
On a command line, use the Data Pump Export utility to export the jobs_tbs
and regions_tbs
tablespaces at the source database using transportable tablespaces export parameters. The following is an example export command that uses transportable tablespaces export parameters:
expdp strmadmin/strmadminpw TRANSPORT_TABLESPACES=jobs_tbs,regions_tbs DIRECTORY=TRANS_DIR DUMPFILE=tbs_ts.dmp
When you run the export command, make sure you connect as an administrative user who was granted EXP_FULL_DATABASE
role and has READ
and WRITE
privileges on the directory object.
You can also perform an instantiation using transportable tablespaces and the original Export/Import utilities.
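As a sketch of the original Export alternative, a command similar to the following exports the same tablespace set. The connection must be made AS SYSDBA, and the exact quoting depends on your shell; the password shown is an assumption:
exp \'sys/change_on_install AS SYSDBA\' TRANSPORT_TABLESPACE=y TABLESPACES=jobs_tbs,regions_tbs FILE=tbs_ts.dmp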
See Also:
Oracle Database Utilities for information about performing an export
While connected to the destination database as the Streams administrator strmadmin
, create a directory object to hold the import dump file and import log file:
CREATE DIRECTORY TRANS_DIR AS '/usr/trans_dir';
Transfer the datafiles for the tablespaces and the export dump file tbs_ts.dmp
to the destination database. You can use the DBMS_FILE_TRANSFER
package, binary FTP, or some other method to transfer these files to the destination database. After the file transfer, the export dump file should reside in the directory that corresponds to the directory object created in Step 5.
On a command line at the destination database, use the Data Pump Import utility to import the export dump file tbs_ts.dmp
using transportable tablespaces import parameters. Performing the import automatically sets the instantiation SCN for the hr.jobs_transport
and hr.regions_transport
tables at the destination database.
The following is an example import command:
impdp strmadmin/strmadminpw DIRECTORY=TRANS_DIR DUMPFILE=tbs_ts.dmp TRANSPORT_DATAFILES=/usr/orc/dbs/jobs_tbs.dbf,/usr/orc/dbs/regions_tbs.dbf
When you run the import command, make sure you connect as an administrative user who was granted IMP_FULL_DATABASE
role and has READ
and WRITE
privileges on the directory object.
See Also:
Oracle Database Utilities for information about performing an import
If necessary, at both the source database and the destination database, connect as the Streams administrator and put the tablespaces into read/write mode:
ALTER TABLESPACE jobs_tbs READ WRITE;
ALTER TABLESPACE regions_tbs READ WRITE;
Note:
Any table supplemental log groups for the tables exported from the export database are retained when tables are imported at the import database. You can drop these supplemental log groups if necessary.
The RMAN TRANSPORT
TABLESPACE
command uses Data Pump and an RMAN-managed auxiliary instance to export the database objects in a tablespace or tablespace set while the tablespace or tablespace set remains online in the source database. The RMAN TRANSPORT
TABLESPACE
command produces a Data Pump export dump file and datafiles, and these files can be used to perform a Data Pump import of the tablespace or tablespaces at the destination database. The ATTACH_TABLESPACES
procedure in the DBMS_STREAMS_TABLESPACE_ADM
package can also be used to attach the tablespace or tablespaces at the destination database.
In addition to the assumptions listed in "Instantiating Objects in a Tablespace Using Transportable Tablespace or RMAN", this example makes the following assumptions:
The source database is tts1.net
.
The destination database is tts2.net
.
See Also:
Oracle Database Backup and Recovery Advanced User's Guide for instructions on using the RMAN TRANSPORT
TABLESPACE
command
Complete the following steps to instantiate the database objects in the jobs_tbs
and regions_tbs
tablespaces using transportable tablespaces and RMAN:
Create a backup of the source database that includes the tablespaces being instantiated, if a backup does not exist. RMAN requires a valid backup for tablespace cloning. In this example, create a backup of the source database that includes the jobs_tbs
and regions_tbs
tablespaces if one does not exist.
Optionally, connect in SQL*Plus to the source database as the Streams administrator strmadmin
, and create a directory object to hold the export dump file and export log file:
CONNECT strmadmin/strmadminpw@tts1.net

CREATE DIRECTORY SOURCE_DIR AS '/usr/db_files';
This step is optional because the RMAN TRANSPORT
TABLESPACE
command creates a directory object named STREAMS_DIROBJ_DPDIR
on the auxiliary instance if the DATAPUMP
DIRECTORY
parameter is omitted when you run this command in Step 6.
While connected as the Streams administrator strmadmin
at the source database tts1.net
, prepare the hr.jobs_transport
and hr.regions_transport
tables for instantiation. You can complete this step in one of the following ways:
Add rules for the hr.jobs_transport
and hr.regions_transport
tables to the positive rule set for a capture process using a procedure in the DBMS_STREAMS_ADM
package. If the capture process is a local capture process or a downstream capture process with a database link to the source database, then the procedures that you run prepare these tables for instantiation automatically.
For example, the following procedure adds rules to the positive rule set of a capture process named strm01_capture
and prepares the hr.jobs_transport
table:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.jobs_transport',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',
    queue_name     => 'strmadmin.strm01_queue',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
END;
/
The following procedure adds rules to the positive rule set of a capture process named strm01_capture
and prepares the hr.regions_transport
table:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.regions_transport',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',
    queue_name     => 'strmadmin.strm01_queue',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
END;
/
Add rules for the hr.jobs_transport
and hr.regions_transport
tables to the positive rule set for a capture process using a procedure in the DBMS_RULE_ADM
package, and then prepare these tables for instantiation manually by specifying each table when you run the PREPARE_TABLE_INSTANTIATION
procedure in the DBMS_CAPTURE_ADM
package:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.jobs_transport');
END;
/

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.regions_transport');
END;
/
Make sure you add the rules to the positive rule set for the capture process before you prepare the database objects for instantiation.
Determine the until SCN for the RMAN TRANSPORT
TABLESPACE
command:
SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  until_scn NUMBER;
BEGIN
  until_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
  DBMS_OUTPUT.PUT_LINE('Until SCN: ' || until_scn);
END;
/
Make a note of the until SCN returned. You will use this number in Step 6. For this example, assume that the returned until SCN is 7661956
.
Optionally, you can skip this step. In this case, do not specify the until clause in the RMAN TRANSPORT
TABLESPACE
command in Step 6. When no until clause is specified, RMAN uses the last archived redo log file to determine the until SCN automatically.
Connect to the source database as a system administrator in SQL*Plus and archive the current online redo log:
ALTER SYSTEM ARCHIVE LOG CURRENT;
At the source database tts1.net
, use the RMAN TRANSPORT
TABLESPACE
command to generate the dump file for the tablespace set:
RMAN> CONNECT TARGET SYS/change_on_install@tts1.net
RMAN> RUN
      {
        TRANSPORT TABLESPACE 'jobs_tbs', 'regions_tbs'
          UNTIL SCN 7661956
          AUXILIARY DESTINATION '/usr/aux_files'
          DATAPUMP DIRECTORY SOURCE_DIR
          DUMP FILE 'jobs_regions_tbs.dmp'
          EXPORT LOG 'jobs_regions_tbs.log'
          IMPORT SCRIPT 'jobs_regions_tbs_imp.sql'
          TABLESPACE DESTINATION '/orc/dbs';
      }
The TRANSPORT
TABLESPACE
command places the files in the following directories on the computer system that runs the source database:
The directory that corresponds to the SOURCE_DIR
directory object (/usr/db_files
) contains the export dump file and export log file.
The /orc/dbs
directory contains the generated datafiles for the tablespaces and the import script. You use this script to complete the instantiation by attaching the tablespace at the destination database.
Modify the import script, if necessary. You might need to modify one or both of the following items in the script:
You might want to change the method used to make the exported tablespaces part of the destination database. The import script includes two ways to make the exported tablespaces part of the destination database: a Data Pump import command (impdp
), and a script for attaching the tablespaces using the ATTACH_TABLESPACES
procedure in the DBMS_STREAMS_TABLESPACE_ADM
package.
The default script uses the attach tablespaces method. The Data Pump import command is commented out. If you want to use Data Pump import, then remove the comment symbols (/*
and */
) surrounding the impdp command, and either surround the attach tablespaces script with comments or remove the attach tablespaces script. The attach tablespaces script starts with SET
SERVEROUTPUT
ON
and continues to the end of the file.
You might need to change the directory paths specified in the script. In Step 8, you will transfer the import script (jobs_regions_tbs_imp.sql
), the Data Pump export dump file (jobs_regions_tbs.dmp
), and the generated datafile for each tablespace (jobs_tbs.dbf
and regions_tbs.dbf
) to one or more directories on the computer system running the destination database. Make sure the directory paths specified in the script are the correct directory paths.
Transfer the import script (jobs_regions_tbs_imp.sql
), the Data Pump export dump file (jobs_regions_tbs.dmp
), and the generated datafile for each tablespace (jobs_tbs.dbf
and regions_tbs.dbf
) to the destination database. You can use the DBMS_FILE_TRANSFER
package, binary FTP, or some other method to transfer the file to the destination database. After the file transfer, these files should reside in the directories specified in the import script.
At the destination database, connect as the Streams administrator in SQL*Plus and run the import script:
CONNECT strmadmin/strmadminpw@tts2.net

SET ECHO ON
SPOOL jobs_tbs_imp.out
@jobs_regions_tbs_imp.sql
When the script completes, check the jobs_tbs_imp.out
spool file to ensure that all actions finished successfully.
The example in this section describes the steps required to instantiate objects in a Streams environment using original export/import. This example makes the following assumptions:
You want to capture changes to all of the tables in the hr
schema at a source database and apply these changes at a separate destination database.
The hr
schema exists at both the source database and the destination database. The hr
schema at the source database contains seven tables. The hr
schema at the destination database does not contain any tables. For the purposes of this example, you can drop the tables in the hr
schema at the destination database using the following SQL statements:
DROP TABLE hr.countries CASCADE CONSTRAINTS;
DROP TABLE hr.departments CASCADE CONSTRAINTS;
DROP TABLE hr.employees CASCADE CONSTRAINTS;
DROP TABLE hr.job_history CASCADE CONSTRAINTS;
DROP TABLE hr.jobs CASCADE CONSTRAINTS;
DROP TABLE hr.locations CASCADE CONSTRAINTS;
DROP TABLE hr.regions CASCADE CONSTRAINTS;
The import re-creates these tables at the destination database.
You have configured a Streams administrator at the source database named strmadmin
.
See Also:
Oracle Streams Concepts and Administration for information about configuring a Streams administrator
Given these assumptions, complete the following steps to instantiate the hr
schema using original export/import:
While connected in SQL*Plus as the Streams administrator strmadmin
at the source database, prepare the database objects in the hr
schema for instantiation. You can complete this step in one of the following ways:
Add rules for the hr
schema to the positive rule set for a capture process using a procedure in the DBMS_STREAMS_ADM
package. If the capture process is a local capture process or a downstream capture process with a database link to the source database, then the procedure that you run prepares the objects in the hr
schema for instantiation automatically.
For example, the following procedure adds rules to the positive rule set of a capture process named strm01_capture
and prepares the hr
schema, and all of its objects, for instantiation:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name    => 'hr',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',
    queue_name     => 'strm01_queue',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
END;
/
Add rules for the hr
schema to the positive rule set for a capture process using a procedure in the DBMS_RULE_ADM
package, and then prepare the objects for instantiation manually by specifying the hr
schema when you run the PREPARE_SCHEMA_INSTANTIATION
procedure in the DBMS_CAPTURE_ADM
package:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name => 'hr');
END;
/
Make sure you add the rules to the positive rule set for the capture process before you prepare the database objects for instantiation.
On the command line, use the original Export utility to export the tables in the hr
schema at the source database. Make sure no DDL changes are made to the tables during the export.
The following is an example export command:
exp hr/hr FILE=hr_schema.dmp CONSISTENT=y TABLES=countries,departments,employees,jobs,job_history,locations,regions
Because the hr
schema includes foreign key constraints between tables, the CONSISTENT
export parameter is set to y
to ensure consistency between all of the objects in the schema. The OBJECT_CONSISTENT
export parameter is not used because the CONSISTENT
export parameter provides a more stringent level of consistency.
See Also:
Oracle Database Utilities for information about performing an export using the original Export utility
Transfer the export dump file hr_schema.dmp
to the destination database. You can use the DBMS_FILE_TRANSFER
package, binary FTP, or some other method to transfer the file to the destination database.
At the destination database, use the original Import utility to import the export dump file hr_schema.dmp
. When you run the import command, make sure you set the STREAMS_INSTANTIATION
import parameter to y
. This parameter ensures that the import records instantiation SCN information for each object imported. Also, make sure no changes are made to the tables in the schema being imported at the destination database until the import is complete. Performing the import automatically sets the instantiation SCN for each table in the hr
schema at the destination database.
The following is an example import command:
imp hr/hr FILE=hr_schema.dmp FULL=y COMMIT=y STREAMS_INSTANTIATION=y LOG=import.log
Note:
Any table supplemental log groups for the tables exported from the export database are retained when the tables are imported at the import database. You can drop these supplemental log groups if necessary.
See Also:
Oracle Database Utilities for information about performing an import using the original Import utility
The examples in this section describe the steps required to instantiate an entire database using the Recovery Manager (RMAN) DUPLICATE
command or CONVERT
DATABASE
command. When you use one of these RMAN commands for full database instantiation, you perform the following general steps:
Copy the entire source database to the destination site using the RMAN command.
Remove the Streams configuration at the destination site using the REMOVE_STREAMS_CONFIGURATION
procedure in the DBMS_STREAMS_ADM
package.
Configure Streams at the destination site, including configuration of one or more apply processes to apply changes from the source database.
You can complete this process without stopping any running capture processes or propagations at the source database.
The RMAN DUPLICATE
command can be used for instantiation when the source and destination databases are running on the same platform. The RMAN CONVERT
DATABASE
command can be used for instantiation when the source and destination databases are running on different platforms. Follow the instructions in one of these sections:
Instantiating an Entire Database on the Same Platform Using RMAN
Instantiating an Entire Database on Different Platforms Using RMAN
Note:
If you want to configure a Streams replication environment that replicates all of the supported changes for an entire database, then the PRE_INSTANTIATION_SETUP
and POST_INSTANTIATION_SETUP
procedures in the DBMS_STREAMS_ADM
package can be used. See "Configuring Database Replication Using the DBMS_STREAMS_ADM Package" for instructions.
Oracle recommends that you do not use RMAN for instantiation in an environment where distributed transactions are possible. Doing so can cause in-doubt transactions that must be corrected manually. Use export/import or transportable tablespaces for instantiation instead.
See Also:
Oracle Streams Concepts and Administration for information about configuring a Streams administrator
The example in this section instantiates an entire database using the RMAN DUPLICATE
command. The example makes the following assumptions:
You want to capture all of the changes made to a source database named dpx1.net
, propagate these changes to a separate destination database named dpx2.net
, and apply these changes at the destination database.
You have configured a Streams administrator at the source database named strmadmin
.
The dpx1.net
and dpx2.net
databases run on the same platform.
See Also:
Oracle Database Backup and Recovery Advanced User's Guide for instructions on using the RMAN DUPLICATE
command
Complete the following steps to instantiate an entire database using RMAN when the source and destination databases run on the same platform:
Create a backup of the source database if one does not exist. RMAN requires a valid backup for duplication. In this example, create a backup of dpx1.net
if one does not exist.
While connected in SQL*Plus as the Streams administrator strmadmin
at the source database, create an ANYDATA
queue to stage the changes from the source database if such a queue does not already exist. This queue will stage changes that will be propagated to the destination database after it has been configured.
For example, the following procedure creates a queue named streams_queue
:
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
Remain connected as the Streams administrator in SQL*Plus at the source database through Step 8.
Create a database link from dpx1.net
to dpx2.net
:
CREATE DATABASE LINK dpx2.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'dpx2.net';
Create a propagation from the source queue at the source database to the destination queue at the destination database. The destination queue at the destination database does not exist yet, but creating this propagation ensures that LCRs enqueued into the source queue will remain staged there until propagation is possible. In addition to captured LCRs, the source queue will stage internal messages that will populate the Streams data dictionary at the destination database.
The following procedure creates the dpx1_to_dpx2
propagation:
BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
    streams_name           => 'dpx1_to_dpx2',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@dpx2.net',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'dpx1.net',
    inclusion_rule         => true,
    queue_to_queue         => true);
END;
/
Stop the propagation you created in Step 4.
BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'dpx1_to_dpx2');
END;
/
Prepare the entire source database for instantiation, if it has not been prepared for instantiation previously. If there is no capture process that captures all of the changes to the source database, then create this capture process using the ADD_GLOBAL_RULES
procedure in the DBMS_STREAMS_ADM
package. If the capture process is a local capture process or a downstream capture process with a database link to the source database, then running this procedure automatically prepares the entire source database for instantiation. If such a capture process already exists, then make sure the source database has been prepared for instantiation by querying the DBA_CAPTURE_PREPARED_DATABASE
data dictionary view.
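For example, a query such as the following returns a row if the entire source database has already been prepared for instantiation:
SELECT * FROM DBA_CAPTURE_PREPARED_DATABASE;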
If you need to create a capture process, then this example creates the capture_db
capture process if it does not already exist:
BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    streams_type   => 'capture',
    streams_name   => 'capture_db',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
END;
/
If the capture process already exists and you need to prepare the entire database for instantiation, then run the following procedure:
EXEC DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION();
If you created a capture process in Step 6, then start the capture process:
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_db');
END;
/
Determine the until SCN for the RMAN DUPLICATE
command:
SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  until_scn NUMBER;
BEGIN
  until_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
  DBMS_OUTPUT.PUT_LINE('Until SCN: ' || until_scn);
END;
/
Make a note of the until SCN returned. You will use this number in Step 11. For this example, assume that the returned until SCN is 3050191
.
Connect to the source database as a system administrator in SQL*Plus and archive the current online redo log:
ALTER SYSTEM ARCHIVE LOG CURRENT;
Prepare your environment for database duplication, which includes preparing the destination database as an auxiliary instance for duplication. See Oracle Database Backup and Recovery Advanced User's Guide for instructions.
Use the RMAN DUPLICATE
command with the OPEN
RESTRICTED
option to instantiate the source database at the destination database. The OPEN
RESTRICTED
option is required. This option enables a restricted session in the duplicate database by issuing the following SQL statement: ALTER
SYSTEM
ENABLE
RESTRICTED
SESSION
. RMAN issues this statement immediately before the duplicate database is opened.
You can use the UNTIL
SCN
clause to specify an SCN for the duplication. Use the until SCN determined in Step 8 for this clause. The until SCN specified for the RMAN DUPLICATE
command must be higher than the SCN when the database was prepared for instantiation in Step 6. Also, archived redo logs must be available for the until SCN specified and for higher SCN values. Therefore, Step 9 archived the redo log containing the until SCN.
Make sure you use TO
database_name
in the DUPLICATE
command to specify the name of the duplicate database. In this example, the duplicate database name is dpx2
. Therefore, the DUPLICATE
command for this example includes TO
dpx2
.
The following is an example of an RMAN DUPLICATE
command:
rman
RMAN> CONNECT TARGET SYS/change_on_install@dpx1.net
RMAN> CONNECT AUXILIARY SYS/change_on_install@dpx2.net
RMAN> RUN
      {
        SET UNTIL SCN 3050191;
        ALLOCATE AUXILIARY CHANNEL dpx2 DEVICE TYPE sbt;
        DUPLICATE TARGET DATABASE TO dpx2
          NOFILENAMECHECK
          OPEN RESTRICTED;
      }
At the destination database, connect as an administrative user in SQL*Plus and rename the database global name. After the RMAN DUPLICATE
command, the destination database has the same global name as the source database.
ALTER DATABASE RENAME GLOBAL_NAME TO DPX2.NET;
At the destination database, connect as an administrator with SYSDBA
privilege in SQL*Plus and run the following procedure:
Attention:
Make sure you are connected to the destination database, not the source database, when you run this procedure because it removes the local Streams configuration.
EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
Note:
Any supplemental log groups for the tables at the source database are retained at the destination database, and the REMOVE_STREAMS_CONFIGURATION
procedure does not drop them. You can drop these supplemental log groups if necessary.
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about the REMOVE_STREAMS_CONFIGURATION
procedure
At the destination database, use the ALTER
SYSTEM
statement to disable the RESTRICTED
SESSION
:
ALTER SYSTEM DISABLE RESTRICTED SESSION;
At the destination database, create the queue specified in Step 4.
For example, the following procedure creates a queue named streams_queue
:
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
At the destination database, connect as the Streams administrator and configure the Streams environment.
Attention:
Do not start any apply processes at the destination database until after you set the global instantiation SCN in Step 18.
See Also:
Oracle Streams Concepts and Administration for information about configuring a Streams administrator
At the destination database, create a database link from the destination database to the source database:
CREATE DATABASE LINK dpx1.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'dpx1.net';
This database link is required because the next step runs the SET_GLOBAL_INSTANTIATION_SCN
procedure with the recursive parameter set to true
.
At the destination database, set the global instantiation SCN for the source database. The RMAN DUPLICATE
command duplicates the database up to one less than the SCN value specified in the UNTIL
SCN
clause. Therefore, you should subtract one from the until SCN value that you specified when you ran the DUPLICATE
command in Step 11. In this example, the until SCN was set to 3050191
. Therefore, the instantiation SCN should be set to 3050191
-
1
, or 3050190
.
For example, to set the global instantiation SCN to 3050190
for the dpx1.net
source database, run the following procedure:
BEGIN
  DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN(
    source_database_name => 'dpx1.net',
    instantiation_scn    => 3050190,
    recursive            => true);
END;
/
Notice that the recursive
parameter is set to true
to set the instantiation SCN for all schemas and tables in the destination database.
At the destination database, you can start any apply processes that you configured.
At the source database, start the propagation you stopped in Step 5:
BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'dpx1_to_dpx2');
END;
/
The example in this section instantiates an entire database using the RMAN CONVERT
DATABASE
command. The example makes the following assumptions:
You want to capture all of the changes made to a source database named cvx1.net
, propagate these changes to a separate destination database named cvx2.net
, and apply these changes at the destination database.
You have configured a Streams administrator at the source database named strmadmin
.
The cvx1.net
and cvx2.net
databases run on different platforms, and the platform combination is supported by the RMAN CONVERT
DATABASE
command. You can use the DBMS_TDB
package to determine whether a platform combination is supported.
The RMAN CONVERT
DATABASE
command produces converted datafiles, an initialization parameter file (PFILE), and a SQL script. The converted datafiles and PFILE are for use with the destination database, and the SQL script creates the destination database on the destination platform.
See Also:
Oracle Database Backup and Recovery Advanced User's Guide for instructions on using the RMAN CONVERT
DATABASE
command
Complete the following steps to instantiate an entire database using RMAN when the source and destination databases run on different platforms:
Create a backup of the source database if one does not exist. RMAN requires a valid backup. In this example, create a backup of cvx1.net
if one does not exist.
While connected in SQL*Plus as the Streams administrator strmadmin
at the source database, create an ANYDATA
queue to stage the changes from the source database if such a queue does not already exist. This queue will stage changes that will be propagated to the destination database after it has been configured.
For example, the following procedure creates a queue named streams_queue
:
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
Remain connected as the Streams administrator in SQL*Plus at the source database through Step 7.
Create a database link from cvx1.net
to cvx2.net
:
CREATE DATABASE LINK cvx2.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'cvx2.net';
Create a propagation from the source queue at the source database to the destination queue at the destination database. The destination queue at the destination database does not exist yet, but creating this propagation ensures that LCRs enqueued into the source queue will remain staged there until propagation is possible. In addition to captured LCRs, the source queue will stage internal messages that will populate the Streams data dictionary at the destination database.
The following procedure creates the cvx1_to_cvx2
propagation:
BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
    streams_name           => 'cvx1_to_cvx2',
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@cvx2.net',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'cvx1.net',
    inclusion_rule         => true,
    queue_to_queue         => true);
END;
/
Stop the propagation you created in Step 4.
BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'cvx1_to_cvx2');
END;
/
Prepare the entire source database for instantiation, if it has not been prepared for instantiation previously. If there is no capture process that captures all of the changes to the source database, then create this capture process using the ADD_GLOBAL_RULES
procedure in the DBMS_STREAMS_ADM
package. If the capture process is a local capture process or a downstream capture process with a database link to the source database, then running this procedure automatically prepares the entire source database for instantiation. If such a capture process already exists, then make sure the source database has been prepared for instantiation by querying the DBA_CAPTURE_PREPARED_DATABASE
data dictionary view.
If you need to create a capture process, then this example creates the capture_db
capture process if it does not already exist:
BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    streams_type   => 'capture',
    streams_name   => 'capture_db',
    queue_name     => 'strmadmin.streams_queue',
    include_dml    => true,
    include_ddl    => true,
    inclusion_rule => true);
END;
/
If the capture process already exists, and you need to prepare the entire database for instantiation, then run the following procedure:
EXEC DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION();
If you created a capture process in Step 6, then start the capture process:
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_db');
END;
/
Connect to the source database as a system administrator in SQL*Plus and archive the current online redo log:
ALTER SYSTEM ARCHIVE LOG CURRENT;
Prepare your environment for database conversion, which includes opening the source database in read-only mode. Complete the following steps:
If the source database is open, then shut it down and start it in read-only mode.
Run the CHECK_DB
and CHECK_EXTERNAL
functions in the DBMS_TDB
package. Check the results to ensure that the conversion is supported by the RMAN CONVERT
DATABASE
command.
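As a minimal sketch of this check, assuming the target platform name used elsewhere in this example, a PL/SQL block similar to the following can be run while the source database is open read-only:
SET SERVEROUTPUT ON

DECLARE
  db_ready       BOOLEAN;
  external_found BOOLEAN;
BEGIN
  -- The target platform name and skip option are assumptions; adjust them
  -- to match your destination platform.
  db_ready := DBMS_TDB.CHECK_DB('Linux IA (64-bit)', DBMS_TDB.SKIP_READONLY);
  IF db_ready THEN
    DBMS_OUTPUT.PUT_LINE('The database can be transported to the target platform.');
  END IF;
  -- CHECK_EXTERNAL lists external tables, directories, and BFILEs that
  -- must be moved separately.
  external_found := DBMS_TDB.CHECK_EXTERNAL;
END;
/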
See Also:
Oracle Database Backup and Recovery Advanced User's Guide for more information about these steps
Determine the current SCN of the source database:
SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  current_scn NUMBER;
BEGIN
  current_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
  DBMS_OUTPUT.PUT_LINE('Current SCN: ' || current_scn);
END;
/
Make a note of the SCN value returned. You will use this number in Step 20. For this example, assume that the returned value is 46931285
.
Open RMAN and run the CONVERT
DATABASE
command.
Make sure you use NEW
DATABASE
database_name
in the CONVERT
DATABASE
command to specify the name of the destination database. In this example, the destination database name is cvx2
. Therefore, the CONVERT
DATABASE
command for this example includes NEW
DATABASE
cvx2
.
The following is an example of an RMAN CONVERT
DATABASE
command for a destination database that is running on the Linux
IA
(64-bit)
platform:
rman
RMAN> CONNECT TARGET SYS/change_on_install@cvx1.net
RMAN> CONVERT DATABASE
        NEW DATABASE 'cvx2'
        TRANSPORT SCRIPT '/tmp/convertdb/transportscript.sql'
        TO PLATFORM 'Linux IA (64-bit)'
        DB_FILE_NAME_CONVERT '/home/oracle/dbs','/tmp/convertdb';
Transfer the datafiles, PFILE, and SQL script produced by the RMAN CONVERT
DATABASE
command to the computer system that will run the destination database.
On the computer system that will run the destination database, modify the SQL script so that the destination database always opens with restricted session enabled.
The following is an example script with the necessary modifications marked by NOTE comments:
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
--  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT PFILE='init_00gd2lak_1_0.ora'
CREATE CONTROLFILE REUSE SET DATABASE "CVX2" RESETLOGS NOARCHIVELOG
  MAXLOGFILES 32
  MAXLOGMEMBERS 2
  MAXDATAFILES 32
  MAXINSTANCES 1
  MAXLOGHISTORY 226
LOGFILE
  GROUP 1 '/tmp/convertdb/archlog1'  SIZE 25M,
  GROUP 2 '/tmp/convertdb/archlog2'  SIZE 25M
DATAFILE
  '/tmp/convertdb/systemdf',
  '/tmp/convertdb/sysauxdf',
  '/tmp/convertdb/datafile1',
  '/tmp/convertdb/datafile2',
  '/tmp/convertdb/datafile3'
CHARACTER SET WE8DEC
;

-- NOTE: This ALTER SYSTEM statement is added to enable restricted session.
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;

-- No tempfile entries found to add.
--

set echo off
prompt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
prompt * Your database has been created successfully!
prompt * There are many things to think about for the new database. Here
prompt * is a checklist to help you stay on track:
prompt * 1. You may want to redefine the location of the directory objects.
prompt * 2. You may want to change the internal database identifier (DBID)
prompt *    or the global database name for this database. Use the
prompt *    NEWDBID Utility (nid).
prompt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SHUTDOWN IMMEDIATE
-- NOTE: This startup has the UPGRADE parameter.
-- It already has restricted session enabled, so no change is needed.
STARTUP UPGRADE PFILE='init_00gd2lak_1_0.ora'
@@ ?/rdbms/admin/utlirp.sql
SHUTDOWN IMMEDIATE
-- NOTE: The startup below is generated without the RESTRICT clause.
-- Add the RESTRICT clause.
STARTUP RESTRICT PFILE='init_00gd2lak_1_0.ora'
-- The following step will recompile all PL/SQL modules.
-- It may take several hours to complete.
@@ ?/rdbms/admin/utlrp.sql
set feedback 6;
Other changes to the script might be necessary. For example, the datafile locations and PFILE location might need to be changed to point to the correct locations on the destination database computer system.
At the destination database, connect as an administrator with SYSDBA
privilege in SQL*Plus and run the following procedure:
Attention:
Make sure you are connected to the destination database, not the source database, when you run this procedure because it removes the local Streams configuration.
EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
Note:
Any supplemental log groups for the tables at the source database are retained at the destination database, and the REMOVE_STREAMS_CONFIGURATION
procedure does not drop them. You can drop these supplemental log groups if necessary.
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about the REMOVE_STREAMS_CONFIGURATION
procedure
Connect to the destination database as the Streams administrator, and drop the database link from the source database to the destination database that was cloned from the source database:
CONNECT strmadmin/strmadminpw@cvx2.net

DROP DATABASE LINK cvx2.net;
At the destination database, use the ALTER
SYSTEM
statement to disable the RESTRICTED
SESSION
:
ALTER SYSTEM DISABLE RESTRICTED SESSION;
At the destination database, create the queue specified in Step 4.
For example, the following procedure creates a queue named streams_queue
:
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
At the destination database, connect as the Streams administrator and configure the Streams environment.
Attention:
Do not start any apply processes at the destination database until after you set the global instantiation SCN in Step 20.
See Also:
Oracle Streams Concepts and Administration for information about configuring a Streams administrator
At the destination database, create a database link to the source database:
CREATE DATABASE LINK cvx1.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'cvx1.net';
This database link is required because the next step runs the SET_GLOBAL_INSTANTIATION_SCN
procedure with the recursive parameter set to true
.
At the destination database, set the global instantiation SCN for the source database to the SCN value returned in Step 10.
For example, to set the global instantiation SCN to 46931285
for the cvx1.net
source database, run the following procedure:
BEGIN
  DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN(
    source_database_name => 'cvx1.net',
    instantiation_scn    => 46931285,
    recursive            => true);
END;
/
Notice that the recursive
parameter is set to true
to set the instantiation SCN for all schemas and tables in the destination database.
At the destination database, you can start any apply processes that you configured.
At the source database, start the propagation you stopped in Step 5:
BEGIN
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'cvx1_to_cvx2');
END;
/
An instantiation SCN instructs an apply process at a destination database to apply changes to a database object that committed after a specific SCN at a source database. You can set instantiation SCNs in one of the following ways:
Export the relevant database objects at the source database and import them into the destination database. In this case, the export/import creates the database objects at the destination database, populates them with the data from the source database, and sets the relevant instantiation SCNs. You can use Data Pump export/import or original export/import for instantiations. See "Setting Instantiation SCNs Using Export/Import" for information about the instantiation SCNs that are set for different types of export/import operations.
Perform a metadata-only export/import using Data Pump or original export/import. If you use Data Pump export/import, then set the CONTENT parameter to METADATA_ONLY during export at the source database or import at the destination database, or both. If you use original export/import, then set the ROWS parameter to n during export at the source database or import at the destination database, or both. In either case, instantiation SCNs are set for the database objects, but no data is imported (see the Data Pump sketch after this list). See "Setting Instantiation SCNs Using Export/Import" for information about the instantiation SCNs that are set for different types of export/import operations.
Use transportable tablespaces to copy the objects in one or more tablespaces from a source database to a destination database. An instantiation SCN is set for each schema in these tablespaces and for each database object in these tablespaces that was prepared for instantiation before the export. See "Instantiating Objects in a Tablespace Using Transportable Tablespace or RMAN".
Set the instantiation SCN using the SET_TABLE_INSTANTIATION_SCN, SET_SCHEMA_INSTANTIATION_SCN, and SET_GLOBAL_INSTANTIATION_SCN procedures in the DBMS_APPLY_ADM package. See "Setting Instantiation SCNs Using the DBMS_APPLY_ADM Package".
This section discusses setting instantiation SCNs by performing an export/import. The information in this section applies to both metadata export/import operations and to export/import operations that import rows. Also, you can use either Data Pump export/import or original export/import.
If you use the original Export utility, then set the OBJECT_CONSISTENT export parameter to y. Regardless of whether you use Data Pump export or original export, you can specify a more stringent degree of consistency by using an export parameter such as FLASHBACK_SCN or FLASHBACK_TIME. Also, if you use the original Import utility, then set the STREAMS_INSTANTIATION import parameter to y.
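For example, to pin the export to a specific point in time, you might first capture the current SCN at the source database and then supply that value to the FLASHBACK_SCN export parameter. The following block is a minimal sketch of obtaining the SCN; it only prints the value, and passing it to the export utility is left to your export procedure:

SET SERVEROUTPUT ON
DECLARE
  iscn NUMBER;   -- Variable to hold the current source SCN
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_OUTPUT.PUT_LINE('Specify FLASHBACK_SCN=' || iscn || ' for the export');
END;
/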
The following sections describe how the instantiation SCNs are set for different types of export/import operations. These sections refer to prepared tables. Prepared tables are tables that have been prepared for instantiation using the PREPARE_TABLE_INSTANTIATION, PREPARE_SCHEMA_INSTANTIATION, or PREPARE_GLOBAL_INSTANTIATION procedures in the DBMS_CAPTURE_ADM package. A table must be a prepared table before export in order for an instantiation SCN to be set for it during import. However, the database and schemas do not need to be prepared before the export in order for their instantiation SCNs to be set during import.
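Before the export, you can confirm which tables are prepared tables by querying the DBA_CAPTURE_PREPARED_TABLES data dictionary view at the source database. A minimal check, assuming you are interested in the hr schema, might look like this:

SELECT table_owner, table_name, scn, timestamp
  FROM DBA_CAPTURE_PREPARED_TABLES
  WHERE table_owner = 'HR';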
A full database export and full database import set the following instantiation SCNs at the import database:
The database, or global, instantiation SCN
The schema instantiation SCN for each imported user
The table instantiation SCN for each prepared table that is imported
A full database or user export and user import set the following instantiation SCNs at the import database:
The schema instantiation SCN for each imported user
The table instantiation SCN for each prepared table that is imported
Any export that includes one or more tables and a table import set the table instantiation SCN for each prepared table that is imported at the import database.
Note:
If a non-NULL instantiation SCN already exists for a database object at a destination database that performs an import, then the import updates the instantiation SCN for that database object.
During an export for a Streams instantiation, make sure no DDL changes are made to objects being exported.
Any table supplemental logging specifications for the tables exported from the export database are retained when the tables are imported at the import database.
See Also:
"Oracle Data Pump and Streams Instantiation", "Original Export/Import and Streams Instantiation", and Oracle Database Utilities for information about using export/import
Part II, "Configuring Streams Replication" for more information about performing export/import operations to set instantiation SCNs when configuring a Streams environment
"Preparing Database Objects for Instantiation at a Source Database"
You can set an instantiation SCN at a destination database for a specified table, a specified schema, or an entire database using one of the following procedures in the DBMS_APPLY_ADM package:

SET_TABLE_INSTANTIATION_SCN

SET_SCHEMA_INSTANTIATION_SCN

SET_GLOBAL_INSTANTIATION_SCN
If you set the instantiation SCN for a schema using SET_SCHEMA_INSTANTIATION_SCN, then you can set the recursive parameter to true when you run this procedure to set the instantiation SCN for each table in the schema. Similarly, if you set the instantiation SCN for a database using SET_GLOBAL_INSTANTIATION_SCN, then you can set the recursive parameter to true when you run this procedure to set the instantiation SCN for the schemas in the database and for each table owned by these schemas.
Note:
If you set the recursive parameter to true in the SET_SCHEMA_INSTANTIATION_SCN procedure or the SET_GLOBAL_INSTANTIATION_SCN procedure, then a database link from the destination database to the source database is required. This database link must have the same name as the global name of the source database and must be accessible to the user who executes the procedure.
If a relevant instantiation SCN is not present, then an error is raised during apply.
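To see which instantiation SCNs are already recorded at a destination database, you can query the DBA_APPLY_INSTANTIATED_OBJECTS, DBA_APPLY_INSTANTIATED_SCHEMAS, and DBA_APPLY_INSTANTIATED_GLOBAL data dictionary views. For example, the following query is a sketch that lists object-level instantiation SCNs for one source database; the source database name is an illustrative assumption:

SELECT source_object_owner, source_object_name, instantiation_scn
  FROM DBA_APPLY_INSTANTIATED_OBJECTS
  WHERE source_database = 'HRDB1.NET';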
Table 10-1 lists each procedure and the types of statements for which they set an instantiation SCN.
Table 10-1 Set Instantiation SCN Procedures and the Statements They Cover
Procedure | Sets Instantiation SCN for | Examples
---|---|---
SET_TABLE_INSTANTIATION_SCN | DML and DDL statements on tables, except DDL statements on table indexes and table triggers |
SET_SCHEMA_INSTANTIATION_SCN | DDL statements on users, except DDL statements on all database objects that have a non-PUBLIC owner |
SET_GLOBAL_INSTANTIATION_SCN | DDL statements on database objects other than users with no owner; DDL statements on database objects owned by PUBLIC |
The user who runs the examples in this section must have access to a database link from the source database to the destination database. In these examples, the database link is hrdb2.net. The following example sets the instantiation SCN for the hr.departments table at the hrdb2.net database to the current SCN by running the following procedure at the source database hrdb1.net:
DECLARE
  iscn NUMBER;   -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@HRDB2.NET(
    source_object_name   => 'hr.departments',
    source_database_name => 'hrdb1.net',
    instantiation_scn    => iscn);
END;
/
The following example sets the instantiation SCN for the oe schema and all of its objects at the hrdb2.net database to the current source database SCN by running the following procedure at the source database hrdb1.net:
DECLARE
  iscn NUMBER;   -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@HRDB2.NET(
    source_schema_name   => 'oe',
    source_database_name => 'hrdb1.net',
    instantiation_scn    => iscn,
    recursive            => true);
END;
/
Because the recursive parameter is set to true, running this procedure sets the instantiation SCN for each database object in the oe schema.
Note:
When you set the recursive parameter to true, a database link from the destination database to the source database is required, even if you run the procedure while you are connected to the source database. This database link must have the same name as the global name of the source database and must be accessible to the current user.

The user who runs the examples in this section must have access to a database link from the destination database to the source database. In these examples, the database link is hrdb1.net.
The following example sets the instantiation SCN for the hr.departments table at the hrdb2.net database to the current source database SCN at hrdb1.net by running the following procedure at the destination database hrdb2.net:
DECLARE
  iscn NUMBER;   -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@HRDB1.NET;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.departments',
    source_database_name => 'hrdb1.net',
    instantiation_scn    => iscn);
END;
/
The following example sets the instantiation SCN for the oe schema and all of its objects at the hrdb2.net database to the current source database SCN at hrdb1.net by running the following procedure at the destination database hrdb2.net:
DECLARE
  iscn NUMBER;   -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@HRDB1.NET;
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'oe',
    source_database_name => 'hrdb1.net',
    instantiation_scn    => iscn,
    recursive            => true);
END;
/
Because the recursive parameter is set to true, running this procedure sets the instantiation SCN for each database object in the oe schema.
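If you want to confirm the result at the destination database, you can query the DBA_APPLY_INSTANTIATED_SCHEMAS view for the schema-level SCN and the DBA_APPLY_INSTANTIATED_OBJECTS view for the table-level SCNs that the recursive call created. A minimal check for the oe schema might look like this:

SELECT source_database, source_schema, instantiation_scn
  FROM DBA_APPLY_INSTANTIATED_SCHEMAS
  WHERE source_schema = 'OE';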
Note:
If an apply process applies changes to a remote non-Oracle database, then set the apply_database_link parameter to the database link used for remote apply when you set the instantiation SCN.
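For example, a table-level instantiation SCN for changes that are applied through a hypothetical database link named mssql.net to a non-Oracle database might be set as follows; the link name and SCN value are illustrative assumptions:

BEGIN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.departments',
    source_database_name => 'hrdb1.net',
    instantiation_scn    => 46931285,
    apply_database_link  => 'mssql.net');
END;
/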
Part II, "Configuring Streams Replication" for more information when to set instantiation SCNs when you are configuring a Streams environment
Chapter 19, "Single-Source Heterogeneous Replication Example" and Chapter 20, "Multiple-Source Replication Example" for detailed examples that uses the SET_TABLE_INSTANTIATION_SCN
procedure
Oracle Database PL/SQL Packages and Types Reference for more information about the DBMS_APPLY_ADM package and about which instantiation SCN can be used for a DDL LCR