Oracle® Streams Replication Administrator's Guide 10g Release 2 (10.2) Part Number B14228-02
A Streams replication database is a database that participates in a Streams replication environment. A Streams replication environment uses Streams clients to replicate database changes from one database to another. Streams clients include capture processes, propagations, and apply processes. This chapter describes general best practices for Streams replication databases.
This chapter contains these topics:
For your Streams replication environment to run properly and efficiently, follow the best practices in this section when you are configuring the environment. This section contains these topics:
Certain initialization parameters are important in a Streams configuration. Make sure the initialization parameters are set properly at all databases before configuring a Streams replication environment.
See Also:
Oracle Streams Concepts and Administration for information about initialization parameters that are important in a Streams environment

The following sections describe best practices for database storage in a Streams database:
Configure a Separate Tablespace for the Streams Administrator
Use a Separate Queue for Each Capture Process and Apply Process
Typically, the username for the Streams administrator is strmadmin, but any user with the proper privileges can be a Streams administrator. The examples in this section use strmadmin for the Streams administrator username.
Create a separate tablespace for the Streams administrator at each participating Streams database. This tablespace stores any objects created in the Streams administrator schema, including any spillover of messages from the buffered queues owned by the schema.
For example, to create a tablespace named streams_tbs and assign it to the Streams administrator, log in as an administrative user, and run the following SQL statements:
CREATE TABLESPACE streams_tbs DATAFILE '/usr/oracle/dbs/streams_tbs.dbf'
  SIZE 25 M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

ALTER USER strmadmin DEFAULT TABLESPACE streams_tbs
  QUOTA UNLIMITED ON streams_tbs;
Specify a valid path on your file system for the datafile in the CREATE TABLESPACE statement.
See Also:
Oracle Streams Concepts and Administration for information about configuring a Streams administrator

Configure a separate queue for each capture process and for each apply process, and make sure each queue has its own queue table. Using separate queues is especially important when configuring bidirectional replication between two databases or when a single database receives messages from several other databases.
For example, suppose a database called db1 is capturing changes that will be sent to other databases and is receiving changes from a database named db2. The changes received from db2 are applied by an apply process running on db1. In this scenario, create a separate queue for the capture process and the apply process at db1, and make sure these queues use different queue tables.
The following example creates the queue for the capture process. The queue name is capture_queue, and this queue uses queue table qt_capture_queue:
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.qt_capture_queue',
    queue_name  => 'strmadmin.capture_queue');
END;
/
The following example creates the queue for the apply process. The queue name is apply_queue, and this queue uses queue table qt_apply_queue:
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.qt_apply_queue',
    queue_name  => 'strmadmin.apply_queue');
END;
/
Subsequently, specify the queue strmadmin.capture_queue when you configure the capture process at db1, and specify the queue strmadmin.apply_queue when you configure the apply process at db1. If necessary, the SET_UP_QUEUE procedure lets you specify a storage_clause parameter to configure separate tablespace and storage specifications for each queue table.
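For example, the following sketch uses the storage_clause parameter to place the capture queue table in its own tablespace. The tablespace name streams_q_tbs is an assumption for illustration only:

```sql
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table    => 'strmadmin.qt_capture_queue',
    queue_name     => 'strmadmin.capture_queue',
    storage_clause => 'TABLESPACE streams_q_tbs'); -- example tablespace name
END;
/
```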
To create capture and apply processes, the Streams administrator must have DBA privilege. An administrative user must explicitly grant DBA privilege to the Streams administrator. For example, the following statement grants DBA privilege to a Streams administrator named strmadmin:
GRANT DBA TO strmadmin;
In addition, other privileges can be granted to the Streams administrator on each participating Streams database. Use the GRANT_ADMIN_PRIVILEGE procedure in the DBMS_STREAMS_AUTH package to grant these privileges. For example, running the following procedure grants privileges to a Streams administrator named strmadmin:
exec DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
See Also:
Oracle Streams Concepts and Administration for information about configuring a Streams administrator

Use the following procedures in the DBMS_STREAMS_ADM package to create your Streams replication environment whenever possible:
MAINTAIN_GLOBAL configures a Streams environment that replicates changes at the database level between two databases.

MAINTAIN_SCHEMAS configures a Streams environment that replicates changes to specified schemas between two databases.

MAINTAIN_SIMPLE_TTS clones a simple tablespace from a source database at a destination database and uses Streams to maintain this tablespace at both databases.

MAINTAIN_TABLES configures a Streams environment that replicates changes to specified tables between two databases.

MAINTAIN_TTS clones a set of tablespaces from a source database at a destination database and uses Streams to maintain these tablespaces at both databases.

PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP configure a Streams environment that replicates changes either at the database level or to specified tablespaces between two databases. These procedures must be used together, and instantiation actions must be performed manually, to complete the Streams replication configuration.
These procedures automate the entire configuration of the Streams clients at multiple databases. Further, the configuration follows Streams best practices. For example, these procedures create queue-to-queue propagations whenever possible.
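For example, a minimal MAINTAIN_SCHEMAS call might look like the following sketch. The schema, directory objects, and database global names below are placeholders, and the procedure accepts many additional parameters that take their defaults here:

```sql
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',          -- placeholder schema
    source_directory_object      => 'SOURCE_DIR',  -- placeholder directory objects
    destination_directory_object => 'DEST_DIR',
    source_database              => 'db1.mycompany.com',
    destination_database         => 'db2.mycompany.com');
END;
/
```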
If these procedures are not suitable for your environment, then use the following procedures in the DBMS_STREAMS_ADM package to create Streams clients, rule sets, and rules:
These procedures minimize the number of steps required to configure Streams clients. It is also possible to create rules for nonexistent objects, so make sure you check the spelling of each object specified in a rule carefully.
Although it is typically not recommended, a propagation or apply process can be used without rule sets or rules if you always want to propagate or apply all of the messages in a queue. However, a capture process requires one or more rule sets with rules. You can use the ADD_GLOBAL_RULES procedure to capture DML changes to an entire database if a negative rule set is configured for the capture process to filter out changes to unsupported objects. You can also use the ADD_GLOBAL_RULES procedure to capture all DDL changes to the database.
The rules in the rule set for a propagation can differ from the rules specified for a capture process. For example, to specify that all captured changes are propagated to a destination database, you can run the ADD_GLOBAL_PROPAGATION_RULES procedure for the propagation even though multiple rules might have been configured using ADD_TABLE_RULES for the capture process. Similarly, the rules in the rule set for an apply process can differ from the rules specified for the capture process and propagations that capture and propagate messages to the apply process.
A Streams client can process changes for multiple tables or schemas. For the best performance, make sure the rules for these multiple tables or schemas are simple. Complex rules will impact the performance of Streams. For example, rules with conditions that include LIKE clauses are complex. When you use a procedure in the DBMS_STREAMS_ADM package to create rules, the rules are always simple.
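For example, ADD_TABLE_RULES creates simple rules for a single table. In this sketch, the table, capture process, and queue names are placeholders:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',            -- placeholder table
    streams_type => 'capture',
    streams_name => 'capture_stream',          -- placeholder capture process name
    queue_name   => 'strmadmin.capture_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/
```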
When you configure multiple source databases in a Streams replication environment, change cycling should be avoided. Change cycling means sending a change back to the database where it originated. You can use Streams tags to prevent change cycling.
See Also:
"Configuring Replication Using the DBMS_STREAMS_ADM Package"
"Streams Tags in a Replication Environment" for information about using Streams tags to avoid change cycling
Oracle Streams Concepts and Administration for more information about simple and complex rules
After the Streams replication environment is configured, follow the best practices in this section to keep it running properly and efficiently. This section contains these topics:
Follow the Best Practices for the Global Name of a Streams Database
Follow the Best Practices for Removing a Streams Configuration at a Database
Streams uses the global name of a database to identify changes from or to a particular database. For example, the system-generated rules for capture, propagation, and apply typically specify the global name of the source database. In addition, changes captured by a Streams capture process automatically include the current global name of the source database. If possible, do not modify the global name of a database that is participating in a Streams replication environment after the environment has been configured. The GLOBAL_NAMES initialization parameter must also be set to TRUE to guarantee that database link names match the global name of each destination database.
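To verify these settings, you can check the current global name and the GLOBAL_NAMES parameter, for example:

```sql
-- Current global name of the database
SELECT * FROM GLOBAL_NAME;

-- GLOBAL_NAMES should be TRUE in a Streams environment
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'global_names';
```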
If the global name of a Streams database must be modified, then do so at a time when no user changes are possible on the database, the queues are empty, and no outstanding changes must be applied by any apply process. When these requirements are met, you can modify the global name of a database and re-create the parts of the Streams configuration that reference the modified database. All queue subscribers, including propagations and apply processes, must be re-created if the source database global name is changed.
When replicating data definition language (DDL) changes, do not allow system-generated names for constraints or indexes. Modifications to these database objects will most likely fail at the destination database because the object names at the different databases will not match. Also, storage clauses may cause some issues if the destination databases are not identical. If you decide not to replicate DDL in your Streams environment, then any table structure changes must be performed manually at each database in the environment.
The number of messages in a queue used by a capture process can grow if the messages in the queue cannot be propagated to one or more destination queues. Source queue growth often indicates that there is a problem with the Streams replication environment. Common reasons why messages cannot be propagated include the following:
One of the destination databases is down for an extended period.
An apply process at a destination database is disabled for an extended period.
The queue is the source queue for a propagation that is unable to deliver the messages to a particular destination queue for an extended period due to network problems or propagation job problems.
When a capture process queue grows, the capture process pauses for flow control to minimize the number of messages that are spilled to disk. You can monitor the number of messages in a capture process queue by querying the V$BUFFERED_QUEUES dynamic performance view. This view shows the number of messages in memory and the number of messages spilled to disk. You should monitor the queues used by a capture process to check for queue growth.
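For example, a query along these lines shows the in-memory and spilled message counts for each buffered queue:

```sql
SELECT QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS, SPILL_MSGS
  FROM V$BUFFERED_QUEUES;
```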
Propagation is implemented using the DBMS_JOB package. If a job is unable to execute 16 successive times, the job is marked as "broken" and becomes disabled. Check propagation jobs periodically to make sure that they are running successfully to minimize source queue growth.
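As a quick check, you can look for broken or failing jobs in the DBA_JOBS view, which lists the jobs (including propagation jobs) with their failure counts:

```sql
SELECT JOB, WHAT, FAILURES, BROKEN
  FROM DBA_JOBS
 WHERE BROKEN = 'Y' OR FAILURES > 0;
```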
See Also:
"Restart Broken Propagations"

The following sections contain information about best practices for backing up source databases and destination databases in a Streams replication environment. A single database can be both a source database and a destination database.
A source database is a database where changes captured by a capture process are generated in a redo log. Follow these best practices for backups of a Streams source database:
Use a Streams tag in the session that runs the online backup SQL statements to ensure that the capture process that captures changes to the source database does not capture the backup statements. An online backup statement uses the BEGIN BACKUP and END BACKUP clauses in an ALTER TABLESPACE or ALTER DATABASE statement. To set a Streams session tag, use the DBMS_STREAMS.SET_TAG procedure.
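A sketch of an online backup session that sets a tag first. The tag value and tablespace name below are examples only, and the tag you choose must be one that your capture rules are configured to ignore:

```sql
-- Set a session tag so the capture process skips the backup statements
-- (the tag value '11' is only an example)
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('11'));
END;
/

ALTER TABLESPACE users BEGIN BACKUP;   -- example tablespace
-- ... copy the datafiles at the operating system level ...
ALTER TABLESPACE users END BACKUP;

-- Reset the tag afterward
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/
```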
Note:
Backups performed using Recovery Manager (RMAN) do not need to set a Streams session tag.

See Also:
Chapter 4, "Streams Tags"

Do not allow any automated backup of the archived logs that might remove archive logs required by a capture process. It is especially important in a Streams environment that all required archive log files remain available online and in the expected location until the capture process has finished processing them. If a log required by a capture process is unavailable, then the capture process will abort.
To list each required archive redo log file in a database, run the following query:
COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
COLUMN NAME HEADING 'Required|Archived Redo Log|File Name' FORMAT A40

SELECT r.CONSUMER_NAME, r.SOURCE_DATABASE, r.SEQUENCE#, r.NAME
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
 WHERE r.CONSUMER_NAME = c.CAPTURE_NAME
   AND r.NEXT_SCN >= c.REQUIRED_CHECKPOINT_SCN;
Ensure that all archive log files from all threads are available. Database recovery depends on the availability of these logs, and a missing log will result in incomplete recovery.
In situations that result in incomplete recovery (point-in-time recovery) at a source database, follow the instructions in "Performing Point-in-Time Recovery on the Source in a Single-Source Environment" or "Performing Point-in-Time Recovery in a Multiple-Source Environment".
In a Streams replication environment, a destination database is a database where an apply process applies changes. Follow these best practices for backups of a Streams destination database:
Ensure that the commit_serialization apply process parameter is set to the default value of full.
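You can confirm the current setting in the DBA_APPLY_PARAMETERS view, for example:

```sql
SELECT APPLY_NAME, PARAMETER, VALUE
  FROM DBA_APPLY_PARAMETERS
 WHERE PARAMETER = 'COMMIT_SERIALIZATION';
```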
In situations that result in incomplete recovery (point-in-time recovery) at a destination database, follow the instructions in "Performing Point-in-Time Recovery on a Destination Database".
Every night by default, the optimizer automatically collects statistics on tables whose statistics have become stale. For volatile tables, such as Streams queue tables, it is likely that the statistics collection job runs when these tables might not have data that is representative of their full load period.
You create these volatile queue tables using the DBMS_AQADM.CREATE_QUEUE_TABLE or DBMS_STREAMS_ADM.SET_UP_QUEUE procedure. You specify the queue table name when you run these procedures. In addition to the queue table, the following tables are created when the queue table is created and are also volatile:
AQ$_queue_table_name_I
AQ$_queue_table_name_H
AQ$_queue_table_name_T
AQ$_queue_table_name_P
AQ$_queue_table_name_D
AQ$_queue_table_name_C

Replace queue_table_name with the name of the queue table.
Oracle recommends that you collect statistics on volatile tables by completing the following steps:
Run the DBMS_STATS.GATHER_TABLE_STATS procedure manually on volatile tables when these tables are at their fullest.

Immediately after the statistics are collected on volatile tables, run the DBMS_STATS.LOCK_TABLE_STATS procedure on these tables.
Locking the statistics on volatile tables ensures that the automatic statistics collection job skips these tables, and the tables are not analyzed.
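A sketch of the two steps for one queue table; the owner and table names are placeholders:

```sql
BEGIN
  -- Step 1: gather statistics while the queue table is at its fullest
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'STRMADMIN',
    tabname => 'QT_CAPTURE_QUEUE');  -- placeholder queue table

  -- Step 2: lock the statistics so the automatic collection job skips the table
  DBMS_STATS.LOCK_TABLE_STATS(
    ownname => 'STRMADMIN',
    tabname => 'QT_CAPTURE_QUEUE');
END;
/
```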
See Also:
Oracle Database Performance Tuning Guide for more information about managing optimizer statistics

STRMMON is a monitoring tool for Oracle Streams. You can use this tool to obtain a quick overview of the Streams activity in a database. STRMMON reports information in a single-line display. You can configure the reporting interval and the number of iterations to display. STRMMON is available in the rdbms/demo directory in your Oracle home.
See Also:
Chapter 12, "Monitoring Streams Replication" and Oracle Streams Concepts and Administration for more information about monitoring a Streams environment

By default, the alert log contains information about why Streams capture and apply processes stopped. Also, Streams capture and apply processes report long-running and large transactions in the alert log.
Long-running transactions are open transactions with no activity (that is, no new change records, rollbacks, or commits) for an extended period (20 minutes). Large transactions are open transactions with a large number of change records. The alert log reports whether a long-running or large transaction has been seen every 20 minutes. Not all such transactions are reported, because only one transaction is reported for each 20 minute period. When the commit or rollback is received, this information is reported in the alert log as well.
You can use the following views for information about long-running transactions:
The V$STREAMS_TRANSACTION dynamic performance view enables monitoring of long-running transactions that are currently being processed by Streams capture processes and apply processes.

The DBA_APPLY_SPILL_TXN and V$STREAMS_APPLY_READER views enable you to monitor the number of transactions and messages spilled by an apply process.
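For example, a query along these lines (column names as documented for this release) lists spilled apply transactions and their message counts:

```sql
SELECT APPLY_NAME, XIDUSN, XIDSLT, XIDSQN, MESSAGE_COUNT
  FROM DBA_APPLY_SPILL_TXN;
```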
See Also:
Oracle Streams Concepts and Administration for more information about Streams information in the alert log

If you want to completely remove the Streams configuration at a database, then complete the following steps:

Connect to the database as an administrative user, and run the DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION procedure.
Drop the Streams administrator, if possible.
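For example, assuming the Streams administrator is strmadmin and the schema contains nothing else you need to keep:

```sql
EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;

-- Drop the Streams administrator only if the schema is no longer needed
DROP USER strmadmin CASCADE;
```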
The following best practices are for Real Application Clusters (RAC) databases in Streams replication environments:
Make Archive Log Files of All Threads Available to Capture Processes
Follow the Best Practices for the Global Name of a Streams RAC Database
Follow the Best Practices for Configuring and Managing Propagations
See Also:
Oracle Streams Concepts and Administration for more information about how Streams works with RAC

The archive log files of all threads from all instances must be available to any instance running a capture process. This requirement pertains to both local and downstream capture processes.
The general best practices described in "Follow the Best Practices for the Global Name of a Streams Database" also apply to RAC databases in a Streams environment. In addition, if the global name of a RAC destination database does not match the DB_NAME.DB_DOMAIN of the database, then include the global name for the database in the list of services for the database specified by the SERVICE_NAMES initialization parameter.
In the tnsnames.ora file, make sure the CONNECT_DATA clause in the connect descriptor specifies the global name of the destination database for the SERVICE_NAME. Also, make sure the CONNECT_DATA clause does not include the INSTANCE_NAME parameter.
If the global name of a RAC database that contains Streams propagations is changed, then drop and re-create all propagations. Make sure the new propagations are queue-to-queue propagations by setting the queue_to_queue parameter to true during creation.
If the global name of a RAC destination database must be changed, then ensure that the queue used by each apply process is empty and that there are no unapplied transactions before changing the global name. After the global name is changed, drop and re-create each apply process queue and each apply process.
See Also:
"Follow the Best Practices for Queue Ownership" for more information about the SERVICE_NAME parameter in the tnsnames.ora file

The general best practices described in "Restart Broken Propagations" also apply to RAC databases in a Streams environment. Use the START_PROPAGATION and STOP_PROPAGATION procedures in the DBMS_PROPAGATION_ADM package to start and stop propagations. These procedures automatically handle queue-to-queue propagation.
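For example, to restart a propagation named prop_db1_to_db2 (a placeholder name for illustration):

```sql
EXEC DBMS_PROPAGATION_ADM.STOP_PROPAGATION(propagation_name => 'prop_db1_to_db2');
EXEC DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'prop_db1_to_db2');
```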
Also, on a Real Application Clusters (RAC) database, a service is created for each buffered queue. This service always runs on the owner instance of the destination queue and follows the ownership of this queue upon queue ownership switches, which include instance startup, instance shutdown, and so on. This service is used by queue-to-queue propagations. You can query the NETWORK_NAME column of the DBA_SERVICES data dictionary view to determine the service name for a queue-to-queue propagation. If you are running RAC instances, and you have queues that were created before Oracle Database 10g Release 2, then drop and re-create these queues to take advantage of the automatic service generation and queue-to-queue propagation. Make sure you re-create these queues when they are empty and no new messages are being enqueued into them.
See Also:
"Use Queue-to-Queue Propagations"

All Streams processing is done at the owning instance of the queue used by the Streams client. To determine the owning instance of each ANYDATA queue in a database, run the following query:
SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, t.OWNER_INSTANCE
  FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
 WHERE t.OBJECT_TYPE = 'SYS.ANYDATA'
   AND q.QUEUE_TABLE = t.QUEUE_TABLE
   AND q.OWNER = t.OWNER;
When Streams is configured in a RAC environment, each queue table has an owning instance. Also, all queues within an individual queue table are owned by the same instance. The Streams clients all use the owning instance of the relevant queue to perform their work:
Each capture process is run at the owning instance of its queue.
Each propagation is run at the owning instance of the propagation's source queue.
Each propagation must connect to the owning instance of the propagation's destination queue.
Each apply process is run at the owning instance of its queue.
You can configure ownership of a queue to remain on a specific instance, as long as that instance is available, by running the DBMS_AQADM.ALTER_QUEUE_TABLE procedure and setting the primary_instance and secondary_instance parameters. When the primary instance of a queue table is set to a specific instance, the queue ownership will return to the specified instance whenever the instance is running.
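For example, to keep ownership of a queue table on instance 1, with instance 2 as the failover owner (the queue table name and instance numbers are examples):

```sql
BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'strmadmin.qt_capture_queue',  -- placeholder queue table
    primary_instance   => 1,
    secondary_instance => 2);
END;
/
```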
Capture processes and apply processes automatically follow the ownership of the queue. If the ownership changes while a process is running, then the process stops on the current instance and restarts on the new owner instance.
Queue-to-queue propagations send messages only to the specific queue identified as the destination queue. Also, the source database link for the destination database connect descriptor must specify the correct service to connect to the destination database. The CONNECT_DATA clause in the connect descriptor should specify the global name of the destination database for the SERVICE_NAME.
For example, consider the tnsnames.ora file for a database with the global name db.mycompany.com. Assume that the alias name for the first instance is db1 and that the alias for the second instance is db2. The tnsnames.ora file for this database might include the following entries:
db.mycompany.com=
 (description=
  (load_balance=on)
  (address=(protocol=tcp)(host=node1-vip)(port=1521))
  (address=(protocol=tcp)(host=node2-vip)(port=1521))
  (connect_data=
   (service_name=db.mycompany.com)))

db1.mycompany.com=
 (description=
  (address=(protocol=tcp)(host=node1-vip)(port=1521))
  (connect_data=
   (service_name=db.mycompany.com)
   (instance_name=db1)))

db2.mycompany.com=
 (description=
  (address=(protocol=tcp)(host=node2-vip)(port=1521))
  (connect_data=
   (service_name=db.mycompany.com)
   (instance_name=db2)))