Oracle® Data Guard Concepts and Administration 10g Release 2 (10.2), Part Number B14239-04
This chapter contains the following topics:
Views Related to Managing and Monitoring a Logical Standby Database
Managing Specific Workloads In the Context of a Logical Standby Database
SQL Apply uses a collection of parallel execution servers and background processes to apply changes from the primary database to the logical standby database.
Figure 9-1 shows the flow of information and the role that each process performs.
The different processes involved and their functions during log mining and apply processing are as follows:
During log mining:
The READER process reads redo records from the archived redo log files or standby redo log files.
The PREPARER process converts block changes contained in redo records into logical change records (LCRs). Multiple PREPARER processes can be active for a given archived redo log file or standby redo log file. The LCRs are staged in the shared pool of the system global area (SGA), known as the LCR cache.
The BUILDER process groups LCRs into transactions and performs other tasks, such as memory management in the LCR cache, checkpointing related to SQL Apply restart, and filtering out uninteresting changes.
During apply processing:
The ANALYZER process examines the transaction chunks containing a group of LCRs, possibly filtering out uninteresting transactions and identifying dependencies between different transactions.
The COORDINATOR process (LSP):
Assigns transactions
Monitors dependencies between transactions and coordinates scheduling
Authorizes the commitment of changes to the logical standby database
The APPLIER processes:
Applies the LCRs to the database
Asks the COORDINATOR process to approve transactions with unresolved dependencies
Commits the transactions
You can query the V$LOGSTDBY_PROCESS view to examine the activity of the SQL Apply processes. Another view that provides information about current activity is the V$LOGSTDBY_STATS view, which displays statistics, current state, and status information for the logical standby database during SQL Apply activities. These and other relevant views are discussed in more detail in Section 9.2, "Views Related to Managing and Monitoring a Logical Standby Database".
This section contains the following topics:
SQL Apply categorizes transactions into two classes: small and large:
Small transactions—SQL Apply starts applying LCRs belonging to a small transaction once it has encountered the commit record for the transaction in the redo log files.
Large transactions—SQL Apply breaks large transactions into smaller pieces called transaction chunks, and starts applying the chunks before the commit record for the large transaction is seen in the redo log files. This is done to reduce memory pressure on the LCR cache and to reduce the overall failover time.
For example, without breaking into smaller pieces, a SQL*Loader load of ten million rows, each 100 bytes in size, would use more than 1 GB of memory in the LCR cache. If the memory allocated to the LCR cache was less than 1 GB, it would result in pageouts from the LCR cache.
Apart from the memory considerations, if SQL Apply did not start applying the changes related to the ten million row SQL*Loader load until it encountered the COMMIT record for the transaction, it could stall a role transition. A switchover or failover initiated after the transaction commit cannot finish until SQL Apply has applied the transaction on the logical standby database.
All transactions start out categorized as small transactions. Depending on the amount of memory available for the LCR cache and the amount of memory consumed by LCRs belonging to a transaction, SQL Apply determines when to recategorize a transaction as a large transaction.
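You can check how much memory is currently allocated to the LCR cache by querying the DBA_LOGSTDBY_PARAMETERS view. This is a minimal sketch, assuming the MAX_SGA parameter has been set explicitly with DBMS_LOGSTDBY.APPLY_SET (if it has never been set, the query returns no rows and SQL Apply uses its internal default):
SQL> SELECT VALUE FROM DBA_LOGSTDBY_PARAMETERS WHERE NAME = 'MAX_SGA';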
Pageouts occur in the context of SQL Apply when memory in the LCR cache is exhausted and space needs to be released for SQL Apply to make progress.
For example, assume the memory allocated to the LCR cache is 100 MB and SQL Apply encounters an INSERT transaction to a table with a LONG column of size 300 MB. In this case, the log-mining component will page out the first part of the LONG data to read the later part of the column modification. In a well-tuned logical standby database, pageout activity will occur only occasionally and should not affect the overall throughput of the system.
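You can observe pageout activity directly in the V$LOGSTDBY_STATS view. A minimal sketch, assuming the pageout statistic names shown in the V$LOGSTDBY_STATS example later in this chapter:
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE '%page%';
Comparing the values across two snapshots taken a few minutes apart indicates whether pageouts are an ongoing concern; the tuning discussion later in this chapter shows this technique in detail.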
See Also:
See Section 9.4, "Customizing a Logical Standby Database" for more information about how to identify problematic pageouts and perform corrective actions.

Modifications made to the logical standby database do not become persistent until the commit record of the transaction is mined from the redo log files and applied to the logical standby database. Thus, every time SQL Apply is stopped, whether as a result of a user directive or because of a system failure, SQL Apply must go back and mine the earliest uncommitted transaction again.
In cases where a transaction does little work but remains open for a long period of time, restarting SQL Apply is prohibitively costly, because SQL Apply may have to mine a large number of archived redo log files again just to read the redo data for a few uncommitted transactions. To mitigate this, SQL Apply periodically checkpoints old uncommitted data. The SCN at which the checkpoint is taken is reflected in the RESTART_SCN column of the V$LOGSTDBY_PROGRESS view. Upon restarting, SQL Apply starts mining redo records generated at an SCN greater than the value shown in the RESTART_SCN column. Archived redo log files that are not needed for restart are automatically deleted by SQL Apply.
Certain workloads, such as large DDL transactions, parallel DML statements (PDML), and direct-path loads, will prevent the RESTART_SCN from advancing for the duration of the workload.
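To gauge how much redo SQL Apply would have to re-mine after a restart, you can compare the restart point with the applied point. A minimal sketch using the V$LOGSTDBY_PROGRESS view described later in this chapter (a large gap between the two values means many archived redo log files would be mined again):
SQL> SELECT APPLIED_SCN, RESTART_SCN FROM V$LOGSTDBY_PROGRESS;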
When applying DML transactions, SQL Apply has the following characteristics that affect throughput and latency on the logical standby database:
Batch updates or deletes done on the primary database, where a single statement results in multiple rows being modified, are applied as individual row modifications on the logical standby database. Thus, it is imperative for each maintained table to have a unique or a primary key. See Section 4.1.2, "Ensure Table Rows in the Primary Database Can Be Uniquely Identified" for more information.
Direct path inserts performed on the primary database are applied using a conventional INSERT statement on the logical standby database.
Parallel DML (PDML) transactions are not executed in parallel on the logical standby database.
When applying DDL transactions, SQL Apply has the following characteristics that affect throughput and latency on the logical standby database:
Parallel DDL (PDDL) transactions are not performed in parallel on a logical standby database.
DDL transactions are applied serially on the logical standby database. Thus, DDL transactions applied concurrently on the primary database are applied one at a time on the logical standby database.
CREATE TABLE AS SELECT (CTAS) statements are executed such that the DML activities (that are part of the CTAS statement) are suppressed on the logical standby database. The rows inserted in the newly created table as part of the CTAS statement are mined from the redo log files and applied to the logical standby database using INSERT statements.
The following sections describe the key performance views that can be used to monitor the behavior of SQL Apply maintaining a logical standby database:
The DBA_LOGSTDBY_EVENTS view records interesting events that occurred during the operation of SQL Apply. By default, the view records the most recent 100 events. However, you can change the number of recorded events by calling the DBMS_LOGSTDBY.APPLY_SET PL/SQL procedure. If SQL Apply stops unexpectedly, the reason for the problem is also recorded in this view.
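For example, the following is a minimal sketch of raising the limit to 200 events, using the MAX_EVENTS_RECORDED parameter described in "Customizing Logging of Events in the DBA_LOGSTDBY_EVENTS View" later in this chapter:
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_EVENTS_RECORDED', '200');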
Note:
Errors that cause SQL Apply to stop are recorded in the events view. These events are also put into the ALERT.LOG file, with the LOGSTDBY keyword included in the text. When querying the view, select the columns in order by EVENT_TIME, COMMIT_SCN, and CURRENT_SCN. This ordering ensures a shutdown failure appears last in the view.
The view also contains other information, such as which DDL transactions were applied and which were skipped. For example:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> COLUMN STATUS FORMAT A60
SQL> SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS
  2  ORDER BY EVENT_TIME, COMMIT_SCN;

EVENT_TIME         STATUS
------------------------------------------------------------------------------
EVENT
-------------------------------------------------------------------------------
23-JUL-02 18:20:12 ORA-16111: log mining and apply setting up
23-JUL-02 18:25:12 ORA-16128: User initiated shut down successfully completed
23-JUL-02 18:27:12 ORA-16112: log mining and apply stopping
23-JUL-02 18:55:12 ORA-16128: User initiated shut down successfully completed
23-JUL-02 18:57:09 ORA-16111: log mining and apply setting up
23-JUL-02 20:21:47 ORA-16204: DDL successfully applied
create table hr.test_emp (empno number, ename varchar2(64))
23-JUL-02 20:22:55 ORA-16205: DDL skipped due to skip setting
create database link link_to_boston connect to system identified by change_on_inst

7 rows selected.
This query shows that SQL Apply was started and stopped a few times. It also shows what DDL was applied and skipped. If SQL Apply had stopped, the last record in the query would have shown the cause of the problem.
The DBA_LOGSTDBY_LOG view provides dynamic information about archived logs being processed by SQL Apply.
For example:
SQL> COLUMN DICT_BEGIN FORMAT A10
SQL> SET NUMF 9999999
SQL> SELECT FILE_NAME, SEQUENCE# AS SEQ#, FIRST_CHANGE# AS F_SCN, -
> NEXT_CHANGE# AS N_SCN, TIMESTAMP, -
> DICT_BEGIN AS BEG, DICT_END AS END, -
> THREAD# AS THR#, APPLIED FROM DBA_LOGSTDBY_LOG -
> ORDER BY SEQUENCE#;

FILE_NAME                 SEQ#   F_SCN   N_SCN TIMESTAM BEG END THR# APPLIED
------------------------- ---- ------- ------- -------- --- --- ---- ---------
/oracle/dbs/hq_nyc_2.log     2  101579  101588 11:02:58 NO  NO     1 YES
/oracle/dbs/hq_nyc_3.log     3  101588  142065 11:02:02 NO  NO     1 YES
/oracle/dbs/hq_nyc_4.log     4  142065  142307 11:02:10 NO  NO     1 YES
/oracle/dbs/hq_nyc_5.log     5  142307  142739 11:02:48 YES YES    1 YES
/oracle/dbs/hq_nyc_6.log     6  142739  143973 12:02:10 NO  NO     1 YES
/oracle/dbs/hq_nyc_7.log     7  143973  144042 01:02:11 NO  NO     1 YES
/oracle/dbs/hq_nyc_8.log     8  144042  144051 01:02:01 NO  NO     1 YES
/oracle/dbs/hq_nyc_9.log     9  144051  144054 01:02:16 NO  NO     1 YES
/oracle/dbs/hq_nyc_10.log   10  144054  144057 01:02:21 NO  NO     1 YES
/oracle/dbs/hq_nyc_11.log   11  144057  144060 01:02:26 NO  NO     1 CURRENT
/oracle/dbs/hq_nyc_12.log   12  144060  144089 01:02:30 NO  NO     1 CURRENT
/oracle/dbs/hq_nyc_13.log   13  144089  144147 01:02:41 NO  NO     1 NO
The YES entries in the BEG and END columns indicate that a LogMiner dictionary build starts at log file sequence number 5. The most recent archived redo log file is sequence number 13, and it was received at the logical standby database at 01:02:41. The APPLIED column indicates that SQL Apply has applied all redo before SCN 144057. Because transactions can span multiple archived log files, multiple archived log files may show the value CURRENT in the APPLIED column.
This view provides information related to the failover characteristics of the logical standby database, including:
The time to failover (apply finish time)
How current the committed data in the logical standby database is (lag time)
What the potential data loss will be in the event of a disaster (potential data loss)
For example:
SQL> SELECT NAME, VALUE, TIME_COMPUTED FROM V$DATAGUARD_STATS;

NAME                 VALUE           TIME_COMPUTED
-------------------- --------------- ---------------------
apply finish time    +00 00:00:00.1  07-APR-2005 08:29:23
lag time             +00 00:00:00.1  07-APR-2005 08:29:23
potential data loss  +00 00:00:00    07-APR-2005 08:29:23
Each value is displayed as an interval of the form DAY(2) TO SECOND(1). The output identifies a logical standby database that is caught up to within 0.1 second of the primary database, and shows that no data loss will occur in the event of a primary failure.
This view provides information about the current state of the various processes involved with SQL Apply, including:
Identifying information (sid | serial# | spid)
SQL Apply process: COORDINATOR, READER, BUILDER, PREPARER, ANALYZER, or APPLIER (type)
Status of the process's current activity (status_code | status)
Highest redo record processed by this process (high_scn)
For example:
SQL> COLUMN LID FORMAT 9999
SQL> COLUMN SERIAL# FORMAT 9999
SQL> COLUMN SID FORMAT 9999
SQL> SELECT SID, SERIAL#, LOGSTDBY_ID AS LID, SPID, TYPE, HIGH_SCN
  2  FROM V$LOGSTDBY_PROCESS;

  SID SERIAL#   LID SPID         TYPE               HIGH_SCN
----- ------- ----- ------------ ---------------- ----------
   48       6    -1 11074        COORDINATOR      7178242899
   56      56     0 10858        READER           7178243497
   46       1     1 10860        BUILDER          7178242901
   45       1     2 10862        PREPARER         7178243295
   37       1     3 10864        ANALYZER         7178241034
   36       1     4 10866        APPLIER          7178239467
   35       3     5 10868        APPLIER          7178239463
   34       7     6 10870        APPLIER          7178239461
   33       1     7 10872        APPLIER          7178239472

9 rows selected.
The HIGH_SCN column shows that the READER process is ahead of all other processes, and the PREPARER and BUILDER processes are ahead of the rest. The following query shows the current activity of each process:
SQL> COLUMN STATUS FORMAT A40
SQL> SELECT TYPE, STATUS_CODE, STATUS FROM V$LOGSTDBY_PROCESS;

TYPE             STATUS_CODE STATUS
---------------- ----------- -----------------------------------------
COORDINATOR            16117 ORA-16117: processing
READER                 16127 ORA-16127: stalled waiting for additional transactions to be applied
BUILDER                16116 ORA-16116: no work available
PREPARER               16117 ORA-16117: processing
ANALYZER               16120 ORA-16120: dependencies being computed for transaction at SCN 0x0001.abdb440a
APPLIER                16124 ORA-16124: transaction 1 13 1427 is waiting on another transaction
APPLIER                16121 ORA-16121: applying transaction with commit SCN 0x0001.abdb4390
APPLIER                16123 ORA-16123: transaction 1 23 1231 is waiting for commit approval
APPLIER                16116 ORA-16116: no work available
The output shows a snapshot of SQL Apply running. On the mining side, the READER process is waiting for additional memory to become available before it can read more, the PREPARER process is processing redo records, and the BUILDER process has no work available. On the apply side, the COORDINATOR is assigning more transactions to APPLIER processes, the ANALYZER is computing dependencies at SCN 7178241034, one APPLIER has no work available, while two others have outstanding dependencies that are not yet satisfied.
See Also:
V$LOGSTDBY_PROCESS view in Oracle Database Reference for reference information and Section 9.3.1, "Monitoring SQL Apply Progress" for example output

This view provides detailed information regarding progress made by SQL Apply, including:
SCN or time at which all transactions that have been committed on the primary database have been applied to the logical standby database (applied_scn | applied_time)
SCN or time at which SQL Apply would begin reading redo records on restart (restart_scn | restart_time)
SCN or time of the latest redo record received on the logical standby database (latest_scn | latest_time)
SCN or time of the latest record processed by the BUILDER process (mining_scn | mining_time)
For example:
SQL> SELECT APPLIED_SCN, LATEST_SCN, MINING_SCN, RESTART_SCN
  2  FROM V$LOGSTDBY_PROGRESS;

APPLIED_SCN  LATEST_SCN MINING_SCN RESTART_SCN
----------- ----------- ---------- -----------
 7178240496  7178240507 7178240507  7178219805
According to the output:
SQL Apply has applied all transactions committed on or before SCN 7178240496
The latest redo record received at the logical standby database was generated at SCN 7178240507
The mining component has processed all redo records generated on or before SCN 7178240507
If SQL Apply stops and restarts for any reason, it will start mining redo records generated on or after SCN 7178219805
SQL> ALTER SESSION SET NLS_DATE_FORMAT='yy-mm-dd hh24:mi:ss';
Session altered.
SQL> SELECT APPLIED_TIME, LATEST_TIME, MINING_TIME, RESTART_TIME
  2  FROM V$LOGSTDBY_PROGRESS;

APPLIED_TIME      LATEST_TIME       MINING_TIME       RESTART_TIME
----------------- ----------------- ----------------- -----------------
05-05-12 10:38:21 05-05-12 10:41:53 05-05-12 10:41:21 05-05-12 10:09:30
According to the output:
SQL Apply has applied all transactions committed on or before the time 05-05-12 10:38:21 (APPLIED_TIME)
The last redo was generated at time 05-05-12 10:41:53 at the primary database (LATEST_TIME)
The mining engine has processed all redo records generated on or before the time 05-05-12 10:41:21 (MINING_TIME)
In the event of a restart, SQL Apply will start mining redo records generated after the time 05-05-12 10:09:30 (RESTART_TIME)
See Also:
V$LOGSTDBY_PROGRESS view in Oracle Database Reference for reference information and Section 9.3.1, "Monitoring SQL Apply Progress" for example output

This view provides a synopsis of the current state of SQL Apply, including:
The DBID of the primary database (primary_dbid)
The LogMiner session ID allocated to SQL Apply (session_id)
Whether or not SQL Apply is applying in real time (realtime_apply)
Where SQL Apply is currently with regard to loading the LogMiner multiversioned data dictionary (described in Section 4.2.3.2, "Build a Dictionary in the Redo Data"), receiving redo from the primary database, and applying redo data (state)
For example:
SQL> COLUMN REALTIME_APPLY FORMAT a15
SQL> COLUMN STATE FORMAT a16
SQL> SELECT * FROM V$LOGSTDBY_STATE;

PRIMARY_DBID SESSION_ID REALTIME_APPLY  STATE
------------ ---------- --------------- ----------------
  1562626987          1 Y               APPLYING
The output shows that SQL Apply is running in real-time apply mode and is currently applying redo data received from the primary database. The primary database's DBID is 1562626987, and the LogMiner session identifier associated with the SQL Apply session is 1.
See Also:
V$LOGSTDBY_STATE view in Oracle Database Reference for reference information and Section 9.3.1, "Monitoring SQL Apply Progress" for example output

This view provides SQL Apply statistics.
For example:
SQL> COLUMN NAME FORMAT a32
SQL> COLUMN VALUE FORMAT a32
SQL> SELECT * FROM V$LOGSTDBY_STATS;

NAME                             VALUE
-------------------------------- --------------------------------
number of preparers              1
number of appliers               4
maximum SGA for LCR cache        30
parallel servers in use          8
maximum events recorded          1000
preserve commit order            TRUE
record skip errors               Y
record skip DDL                  Y
record applied DDL               N
record unsupported operations    N
coordinator state                APPLYING
transactions ready               132412
transactions applied             132118
coordinator uptime               132102
realtime logmining               Y
apply delay                      0
Log Miner session ID             1
bytes of redo processed          130142100140
txns delivered to client         131515
DML txns delivered               128
DDL txns delivered               23
CTAS txns delivered              0
Recursive txns delivered         874
Rolled back txns seen            40
LCRs delivered to client         2246414
bytes paged out                  0
secs spent in pageout            0
bytes checkpointed               0
secs spent in checkpoint         0
bytes rolled back                0
secs spent in rollback           0
secs system is idle              2119

32 rows selected.
This section contains the following topics:
SQL Apply can be in any of six states of progress: initializing SQL Apply, waiting for dictionary logs, loading the LogMiner Multiversioned Data Dictionary, applying (redo data), waiting for an archive gap to be resolved, and idle. Figure 9-2 shows the flow of these states.
Figure 9-2 Progress States During SQL Apply Processing
The following subsections describe each state in more detail.
When you start SQL Apply by issuing the ALTER DATABASE START LOGICAL STANDBY APPLY statement, it goes into the initializing state.
To determine the current state of SQL Apply, query the V$LOGSTDBY_STATE view. For example:
SQL> SELECT SESSION_ID, STATE FROM V$LOGSTDBY_STATE;

SESSION_ID STATE
---------- -------------
         1 INITIALIZING
The SESSION_ID column identifies the persistent LogMiner session created by SQL Apply to mine the archived redo log files generated by the primary database.
Waiting for Dictionary Logs
The first time SQL Apply is started, it needs to load the LogMiner multiversioned data dictionary captured in the redo log files. SQL Apply will stay in the WAITING FOR DICTIONARY LOGS state until it has received all redo data required to load the LogMiner multiversioned data dictionary.
Loading Dictionary State
This loading dictionary state can persist for a while; loading the LogMiner multiversioned data dictionary on a large database can take a long time. Querying the V$LOGSTDBY_STATE view returns the following output when the dictionary is being loaded:
SQL> SELECT SESSION_ID, STATE FROM V$LOGSTDBY_STATE;

SESSION_ID STATE
---------- ------------------
         1 LOADING DICTIONARY
Only the COORDINATOR process and the mining processes are spawned until the LogMiner dictionary is fully loaded. Therefore, if you query the V$LOGSTDBY_PROCESS view at this point, you will not see any of the APPLIER processes. For example:
SQL> SELECT SID, SERIAL#, SPID, TYPE FROM V$LOGSTDBY_PROCESS;

SID    SERIAL#   SPID      TYPE
------ --------- --------- ---------------------
47     3         11438     COORDINATOR
50     7         11334     READER
45     1         11336     BUILDER
44     2         11338     PREPARER
43     2         11340     PREPARER
You can get more detailed information about the progress of the dictionary load by querying the V$LOGMNR_DICTIONARY_LOAD view. The dictionary load happens in three phases:
The relevant archived redo log files or standby redo log files are mined to gather the redo changes relevant to load the LogMiner multiversioned data dictionary.
The changes are processed and loaded in staging tables inside the database.
The LogMiner multiversioned data dictionary tables are loaded by issuing a series of DDL statements.
For example:
SQL> SELECT PERCENT_DONE, COMMAND FROM V$LOGMNR_DICTIONARY_LOAD
  2  WHERE SESSION_ID = (SELECT SESSION_ID FROM V$LOGSTDBY_STATE);

PERCENT_DONE COMMAND
------------ -------------------------------
          40 alter table SYSTEM.LOGMNR_CCOL$ exchange partition P101 with table SYS.LOGMNRLT_101_CCOL$ excluding indexes without validation
If the PERCENT_DONE or the COMMAND column does not change for a long time, query the V$SESSION_LONGOPS view to monitor the progress of the DDL transaction in question.
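A minimal sketch of such a monitoring query, using standard V$SESSION_LONGOPS columns (the rows you see depend on the DDL statement being executed):
SQL> SELECT SID, OPNAME, SOFAR, TOTALWORK, TIME_REMAINING
  2  FROM V$SESSION_LONGOPS
  3  WHERE SOFAR < TOTALWORK;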
In this state, SQL Apply has successfully loaded the initial snapshot of the LogMiner multiversioned data dictionary, and is currently applying redo data to the logical standby database.
For detailed information about the SQL Apply progress, query the V$LOGSTDBY_PROGRESS view:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
SQL> SELECT APPLIED_TIME, APPLIED_SCN, MINING_TIME, MINING_SCN
  2  FROM V$LOGSTDBY_PROGRESS;

APPLIED_TIME         APPLIED_SCN MINING_TIME          MINING_SCN
-------------------- ----------- -------------------- -----------
10-JAN-2005 12:00:05   346791023 10-JAN-2005 12:10:05  3468810134
All committed transactions seen at or before APPLIED_SCN (or APPLIED_TIME) on the primary database have been applied to the logical standby database. The mining engine has processed all redo records generated at or before MINING_SCN (and MINING_TIME) on the primary database. At steady state, the value of MINING_SCN (and MINING_TIME) will always be ahead of APPLIED_SCN (and APPLIED_TIME).
This state occurs when SQL Apply has mined and applied all available redo records, and is waiting for a new log file (or a missing log file) to be archived by the RFS process.
SQL> SELECT STATUS FROM V$LOGSTDBY_PROCESS WHERE TYPE = 'READER';

STATUS
------------------------------------------------------------------------
ORA-01291: Waiting for logfile
SQL Apply enters this state once it has applied all redo generated by the primary database.
SQL Apply automatically deletes archived redo log files when they are no longer needed.
This behavior can be overridden by executing the following PL/SQL procedure:
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', FALSE);
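To return to the default deletion policy later, unset the parameter. A minimal sketch using the DBMS_LOGSTDBY.APPLY_UNSET procedure:
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_UNSET('LOG_AUTO_DELETE');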
Note:
By default, SQL Apply will delete archived redo log files that it no longer needs. If you flash back the logical standby database, this may bring the database to a state where an archived redo log file is present in SQL Apply metadata (as reflected in the DBA_LOGSTDBY_LOG view) but absent from the file system. An attempt to restart SQL Apply following a Flashback Database operation may fail with the following error in the alert log:
Errors in file /home/oracle/DGR2/logical/stdl/bdump/stdl_lsp0_11310.trc:
ORA-00308: cannot open archived log '/home/oracle/DGR2/logical/stdl/stlog/1_15_559399019.dbf'
ORA-27037: unable to obtain file status
You need to copy the archived redo log files that have been deleted by the automatic deletion policy to the appropriate directory and restart SQL Apply.
Although SQL Apply automatically deletes archived redo log files when they are no longer needed on the logical standby database, there may be times when you want to manually remove them (for example, to reclaim disk space).
If you are overriding the default automatic log deletion capability, perform the following steps to identify and delete archived redo log files that are no longer needed by SQL Apply:
To purge the logical standby session of metadata that is no longer needed, enter the following PL/SQL statement:
SQL> EXECUTE DBMS_LOGSTDBY.PURGE_SESSION;
This statement also updates the DBA_LOGMNR_PURGED_LOG view, which displays the archived redo log files that are no longer needed.
Query the DBA_LOGMNR_PURGED_LOG view to list the archived redo log files that can be removed:
SQL> SELECT * FROM DBA_LOGMNR_PURGED_LOG;

FILE_NAME
------------------------------------
/boston/arc_dest/arc_1_40_509538672.log
/boston/arc_dest/arc_1_41_509538672.log
/boston/arc_dest/arc_1_42_509538672.log
/boston/arc_dest/arc_1_43_509538672.log
/boston/arc_dest/arc_1_44_509538672.log
/boston/arc_dest/arc_1_45_509538672.log
/boston/arc_dest/arc_1_46_509538672.log
/boston/arc_dest/arc_1_47_509538672.log
Use an operating system-specific command to delete the archived redo log files listed by the query.
This section contains the following topics:
Customizing Logging of Events in the DBA_LOGSTDBY_EVENTS View
Using DBMS_LOGSTDBY.SKIP to Prevent Changes to Specific Schema Objects
By default, Data Guard waits for the full archived redo log file to arrive on the standby database before applying it to the standby database. However, if you have configured a standby redo log on the standby database, you can optionally enable real-time apply. With real-time apply enabled, SQL Apply applies redo data from standby redo log files at the same time the log files are being written to, as opposed to applying from an archived redo log file after a log switch occurs. Immediately applying standby redo log files in this manner keeps the logical standby database closely caught up with the primary database, without requiring the standby redo log files to be archived at the standby site. This can result in quicker switchovers and failovers.
To start real-time apply on the logical standby database, issue the following statement:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Oracle recommends that you run SQL Apply in the real-time apply mode. See also Section 3.1.3, "Configure a Standby Redo Log" for more information about configuring a standby redo log.
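You can confirm that SQL Apply is running in real-time apply mode by checking the V$LOGSTDBY_STATE view, described in Section 9.2, "Views Related to Managing and Monitoring a Logical Standby Database". For example:
SQL> SELECT REALTIME_APPLY FROM V$LOGSTDBY_STATE;
A value of Y indicates that real-time apply is in effect.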
The DBA_LOGSTDBY_EVENTS view can be thought of as a circular log containing the most recent interesting events that occurred in the context of SQL Apply. By default, the last 100 events are remembered in the event view. You can change the number of events logged by invoking the DBMS_LOGSTDBY.APPLY_SET procedure. For example, to ensure that the last 10,000 events are recorded, you can issue the following statement:
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET ('MAX_EVENTS_RECORDED', '10000');
Note:
In Oracle Database 10g release 1 (10.1), the DBMS_LOGSTDBY.MAX_EVENTS constant was called DBMS_LOGSTDBY_PUBLIC.MAX_EVENTS. The effect of the two constants is the same, but in release 2 (10.2) the DBMS_LOGSTDBY_PUBLIC package has been eliminated and the definition of the constant moved to the DBMS_LOGSTDBY package.

Additionally, you can specify what types of events are recorded in the view. For example, to record applied DDL transactions to the DBA_LOGSTDBY_EVENTS view, issue the following statement:
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET ('RECORD_APPLIED_DDL', 'TRUE');
Errors that cause SQL Apply to stop are always recorded in the events view (unless there is insufficient space in the system tablespace). These events are always put into the ALERT.LOG file as well, with the keyword LOGSTDBY included in the text. When querying the view, select the columns in order by EVENT_TIME, COMMIT_SCN, and CURRENT_SCN. This ordering ensures a shutdown failure appears last in the view.
By default, all supported tables in the primary database are replicated in the logical standby database. You can change this default behavior by specifying rules to skip applying modifications to specific tables. For example, to omit changes to the HR.EMPLOYEES table, specify rules to prevent application of DML and DDL changes to that table, as follows:
Stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Register the SKIP rules:
SQL> EXECUTE DBMS_LOGSTDBY.SKIP (stmt => 'DML', -
> schema_name => 'HR', object_name => 'EMPLOYEES', proc_name => null);
SQL> EXECUTE DBMS_LOGSTDBY.SKIP (stmt => 'SCHEMA_DDL', -
> schema_name => 'HR', object_name => 'EMPLOYEES', proc_name => null);
Start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
You can create a procedure to intercept certain DDL statements and replace the original DDL statement with a different one. For example, if the file system organization in the logical standby database is different than that in the primary database, you can write a DBMS_LOGSTDBY.SKIP procedure to transparently handle DDL transactions with file specifications.
The following procedure can handle different file system organization between the primary database and standby database, as long as you use a specific naming convention for your file-specification string.
Create the skip procedure to handle tablespace DDL transactions:
CREATE OR REPLACE PROCEDURE SYS.HANDLE_TBS_DDL (
  OLD_STMT IN  VARCHAR2,
  STMT_TYP IN  VARCHAR2,
  SCHEMA   IN  VARCHAR2,
  NAME     IN  VARCHAR2,
  XIDUSN   IN  NUMBER,
  XIDSLT   IN  NUMBER,
  XIDSQN   IN  NUMBER,
  ACTION   OUT NUMBER,
  NEW_STMT OUT VARCHAR2
) AS
BEGIN
  -- All primary file specifications that contain the directory
  -- /usr/orcl/primary/dbs
  -- should go to the /usr/orcl/stdby directory specification
  NEW_STMT := REPLACE(OLD_STMT, '/usr/orcl/primary/dbs', '/usr/orcl/stdby');
  ACTION := DBMS_LOGSTDBY.SKIP_ACTION_REPLACE;
EXCEPTION
  WHEN OTHERS THEN
    ACTION := DBMS_LOGSTDBY.SKIP_ACTION_ERROR;
    NEW_STMT := NULL;
END HANDLE_TBS_DDL;
Stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Register the skip procedure with SQL Apply:
SQL> EXECUTE DBMS_LOGSTDBY.SKIP (stmt => 'TABLESPACE', - proc_name => 'sys.handle_tbs_ddl');
Start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
A logical standby database can be used for reporting activities, even while SQL statements are being applied. The database guard controls user access to tables in a logical standby database, and the ALTER SESSION DISABLE GUARD statement is used to bypass the database guard and allow modifications to the tables in the logical standby database.
By default, a logical standby database operates with the database guard set to ALL, which is its most restrictive setting and does not allow any user changes to be performed to the database. You can override the database guard to allow changes to the logical standby database by executing the ALTER SESSION DISABLE GUARD statement. Privileged users can issue this statement to turn the database guard off for the current session.
The following sections provide some examples. The discussions in these sections assume that the database guard is set to ALL or STANDBY.
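One way to check the current database guard setting is to query the GUARD_STATUS column of the V$DATABASE view, which reports ALL, STANDBY, or NONE:
SQL> SELECT GUARD_STATUS FROM V$DATABASE;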
This section describes how to add a constraint to a table maintained through SQL Apply.
By default, only accounts with SYS privileges can modify the database while the database guard is set to ALL or STANDBY. If you are logged in as SYSTEM or another privileged account, you will not be able to issue DDL statements on the logical standby database without first bypassing the database guard for the session.
The following example shows how to stop SQL Apply, bypass the database guard, execute SQL statements on the logical standby database, and then reenable the guard:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
SQL> ALTER SESSION DISABLE GUARD;
Session altered.
SQL> ALTER TABLE SCOTT.EMP ADD CONSTRAINT EMPID UNIQUE (EMPNO);
Table altered.
SQL> ALTER SESSION ENABLE GUARD;
Session altered.
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
Database altered.
Oracle recommends that you do not perform DML operations on tables maintained by SQL Apply while the database guard bypass is enabled. This will introduce deviations between the primary and standby databases that will make it impossible for the logical standby database to be maintained.
Sometimes, a reporting application must collect summary results and store them temporarily or track the number of times a report was run. Although the main purpose of the application is to perform reporting activities, the application might need to issue DML (insert, update, and delete) operations on a logical standby database. It might even need to create or drop tables.
You can set up the database guard to allow reporting operations to modify data as long as the data is not being maintained through SQL Apply. To do this, you must:
Specify the set of tables on the logical standby database to which an application can write data by executing the DBMS_LOGSTDBY.SKIP procedure. Skipped tables are not maintained through SQL Apply.
Set the database guard to protect only standby tables.
In the following example, it is assumed that the tables to which the report is writing are also on the primary database. The example stops SQL Apply, skips the tables, and then restarts SQL Apply so that changes can be applied to the logical standby database. The reporting application will be able to write to the HR.TESTEMP% tables, which will no longer be maintained through SQL Apply.
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
SQL> EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'SCHEMA_DDL', -
> schema_name => 'HR', object_name => 'TESTEMP%');
PL/SQL procedure successfully completed.
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML','HR','TESTEMP%');
PL/SQL procedure successfully completed.
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Database altered.
Once SQL Apply starts, it needs to update metadata on the standby database for the newly specified tables added to the skip rules. Attempts to modify a newly skipped table will fail until SQL Apply has updated the metadata. You can find out whether SQL Apply has successfully taken the new SKIP rule into account by issuing the following query:
SQL> SELECT VALUE FROM DBA_LOGSTDBY_PARAMETERS WHERE NAME = 'GUARD_STANDBY';

VALUE
---------------
Ready
Once the VALUE column displays "Ready", SQL Apply has successfully updated all relevant metadata for the skipped table, and it is safe to modify the table.
Typically, you use table instantiation to re-create a table after an unrecoverable operation. You can also use the procedure to enable SQL Apply on a table that was formerly skipped.
Before you re-create a table, it must meet the requirements described in Section 4.1.2, "Ensure Table Rows in the Primary Database Can Be Uniquely Identified". Then, you can use the following steps to re-create a table named HR.EMPLOYEES and resume SQL Apply. The directions assume that there is already a database link BOSTON defined to access the primary database.
The following list shows how to re-create a table and restart SQL Apply on that table:
Stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Ensure no operations are being skipped for the table in question by querying the DBA_LOGSTDBY_SKIP view:
SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;

ERROR STATEMENT_OPT       OWNER         NAME             PROC
----- ------------------- ------------- ---------------- -----
N     SCHEMA_DDL          HR            EMPLOYEES
N     DML                 HR            EMPLOYEES
N     SCHEMA_DDL          OE            TEST_ORDER
N     DML                 OE            TEST_ORDER
Because you already have skip rules associated with the table that you want to re-create on the logical standby database, you must first delete those rules. You can accomplish that by calling the DBMS_LOGSTDBY.UNSKIP procedure. For example:
SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', -
> schema_name => 'HR', -
> object_name => 'EMPLOYEES');
SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'SCHEMA_DDL', -
> schema_name => 'HR', -
> object_name => 'EMPLOYEES');
Re-create the table HR.EMPLOYEES with all its data in the logical standby database by using the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure. For example:
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name => 'HR', -
> object_name => 'EMPLOYEES', -
> dblink => 'BOSTON');
Start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
See Also:
Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_LOGSTDBY.UNSKIP and the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedures

To ensure a consistent view across the newly instantiated table and the rest of the database, wait for SQL Apply to catch up with the primary database before querying this table. You can do this by performing the following steps:
On the primary database, determine the current SCN by querying the V$DATABASE view:
SQL> SELECT CURRENT_SCN FROM V$DATABASE@BOSTON;

CURRENT_SCN
---------------------
  345162788
Make sure SQL Apply has applied all transactions committed before the CURRENT_SCN returned in the previous query:
SQL> SELECT APPLIED_SCN FROM V$LOGSTDBY_PROGRESS;

APPLIED_SCN
--------------------------
  345161345
When the APPLIED_SCN returned in this query is greater than the CURRENT_SCN returned in the first query, it is safe to query the newly re-created table.
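If you prefer to wait programmatically instead of re-issuing these queries by hand, the two steps can be combined into a small PL/SQL loop. This is only a sketch: it assumes the BOSTON database link used in the earlier steps and the DBMS_LOCK.SLEEP procedure (which requires EXECUTE privilege on the DBMS_LOCK package):
DECLARE
  primary_scn NUMBER;
  applied_scn NUMBER;
BEGIN
  -- Capture the primary database's current SCN once, over the database link
  SELECT CURRENT_SCN INTO primary_scn FROM V$DATABASE@BOSTON;
  LOOP
    -- Check how far SQL Apply has progressed on the standby
    SELECT APPLIED_SCN INTO applied_scn FROM V$LOGSTDBY_PROGRESS;
    EXIT WHEN applied_scn >= primary_scn;
    DBMS_LOCK.SLEEP(10);  -- wait 10 seconds between checks
  END LOOP;
END;
/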
This section contains the following topics:
Importing a Transportable Tablespace to the Primary Database
How Triggers and Constraints Are Handled on a Logical Standby Database
Perform the following steps to import a tablespace to the primary database.
Disable the guard setting so that you can modify the logical standby database:
SQL> ALTER SESSION DISABLE GUARD;
Import the tablespace at the logical standby database.
Enable the database guard setting (or disconnect from the session):
SQL> ALTER SESSION ENABLE GUARD;
Import the tablespace at the primary database.
SQL Apply does not support the following DDL statements related to materialized views:
CREATE, ALTER, or DROP MATERIALIZED VIEW
CREATE, ALTER, or DROP MATERIALIZED VIEW LOG
Thus, new materialized views that have been created, altered, or dropped on the primary database after the logical standby database was created are not reflected on the logical standby database. However, materialized views created on the primary database before the logical standby database was created are also present on the logical standby database.
For materialized views that exist on both the primary and logical standby databases, an ON-COMMIT materialized view is refreshed on the logical standby database when the transaction commit occurs.
An ON-DEMAND materialized view is not automatically refreshed by SQL Apply. You must execute the DBMS_MVIEW.REFRESH procedure to refresh it. For example, to refresh an ON-DEMAND materialized view named HR.DEPARTMENTS_MV on a logical standby database using the fast refresh method, issue the following command:
SQL> EXECUTE DBMS_MVIEW.REFRESH (-
> LIST => 'HR.DEPARTMENTS_MV', -
> METHOD => 'F');
Additional ON-COMMIT materialized views created on the logical standby database are automatically maintained. Additional ON-DEMAND materialized views created on the logical standby database are not maintained by SQL Apply; you must refresh these using the DBMS_MVIEW.REFRESH procedure.
By default, triggers and constraints are automatically enabled and handled on logical standby databases.
For triggers and constraints on tables maintained by SQL Apply:
Constraints — Check constraints are evaluated on the primary database and do not need to be re-evaluated on the logical standby database
Triggers — The effects of the triggers executed on the primary database are logged and applied on the standby database
For triggers and constraints on tables not maintained by SQL Apply:
Constraints are evaluated
Triggers are fired
When a logical standby database receives a new branch of redo data, SQL Apply automatically takes the new branch of redo data. For logical standby databases, no manual intervention is required if the standby database did not apply redo data past the new resetlogs SCN (past the start of the new branch of redo data). The following table describes how to resynchronize the standby database with the primary database branch.
If the standby database. . . | Then. . . | Perform these steps. . . |
---|---|---|
Has not applied redo data past the new resetlogs SCN (past the start of the new branch of redo data) | SQL Apply automatically takes the new branch of redo data. | No manual intervention is necessary. SQL Apply automatically resynchronizes the standby database with the new branch of redo data. |
Has applied redo data past the new resetlogs SCN (past the start of the new branch of redo data) and Flashback Database is enabled on the standby database | The standby database is recovered in the future of the new branch of redo data. | Flash back the logical standby database to a point at or before the new resetlogs SCN, and then restart SQL Apply. SQL Apply then resynchronizes the standby database with the new branch. |
Has applied redo data past the new resetlogs SCN (past the start of the new branch of redo data) and Flashback Database is not enabled on the standby database | The primary database has diverged from the standby on the indicated primary database branch. | Re-create the logical standby database following the procedures in Chapter 4, "Creating a Logical Standby Database". |
Is missing intervening archived redo log files from the new branch of redo data | SQL Apply cannot continue until the missing log files are retrieved. | Locate and register missing archived redo log files from each branch. |
Is missing archived redo log files from the end of the previous branch of redo data | SQL Apply cannot continue until the missing log files are retrieved. | Locate and register missing archived redo log files from the previous branch. |
See Oracle Database Backup and Recovery Advanced User's Guide for more information about database incarnations, recovering through an OPEN RESETLOGS operation, and Flashback Database.
This section contains the following topics:
On the primary database, if a table does not have a primary key or a unique index and you are certain the rows are unique, then create a primary key RELY constraint. On the logical standby database, create an index on the columns that make up the primary key. The following query generates a list of tables with no index information that SQL Apply can use to uniquely identify rows. Creating an index on these tables can improve performance significantly.
SQL> SELECT OWNER, TABLE_NAME FROM DBA_TABLES
  2> WHERE OWNER NOT IN ('SYS','SYSTEM','OUTLN','DBSNMP')
  3> MINUS
  4> SELECT DISTINCT TABLE_OWNER, TABLE_NAME FROM DBA_INDEXES
  5> WHERE INDEX_TYPE NOT LIKE ('FUNCTION-BASED%')
  6> MINUS
  7> SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED;
You can add a rely primary key constraint to a table on the primary database, as follows:
Add the primary key RELY constraint at the primary database:
SQL> ALTER TABLE HR.TEST_EMPLOYEES ADD PRIMARY KEY (EMPNO) RELY DISABLE;
This ensures that the EMPNO column, which can be used to uniquely identify the rows in the HR.TEST_EMPLOYEES table, will be supplementally logged as part of any updates done on that table.
Note that the HR.TEST_EMPLOYEES table still does not have a unique index on the logical standby database. This may cause UPDATE statements to do full table scans on the logical standby database. You can remedy that by adding a unique index on the EMPNO column on the logical standby database, as the following steps show. See Section 4.1.2, "Ensure Table Rows in the Primary Database Can Be Uniquely Identified" and Oracle Database SQL Reference for more information about RELY constraints.
Stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Disable the guard so that you can modify a maintained table on the logical standby database:
SQL> ALTER SESSION DISABLE GUARD;
Add a unique index on the EMPNO column:
SQL> CREATE UNIQUE INDEX UI_TEST_EMP ON HR.TEST_EMPLOYEES (EMPNO);
Enable the guard:
SQL> ALTER SESSION ENABLE GUARD;
Start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Statistics should be gathered on the standby database because the cost-based optimizer (CBO) uses them to determine the optimal query execution path. New statistics should be gathered after the data or structure of a schema object is modified in ways that make the previous statistics inaccurate. For example, after inserting or deleting a significant number of rows into a table, collect new statistics on the number of rows.
Statistics should also be gathered on the standby database because DML and DDL operations on the primary database are executed as a function of the workload. While the standby database is logically equivalent to the primary database, SQL Apply might execute the workload in a different way. This is why using Statspack on the logical standby database and querying the V$SYSSTAT view can be useful in determining which tables are consuming the most resources and experiencing the most table scans.
See Also:
Section 4.1.2, "Ensure Table Rows in the Primary Database Can Be Uniquely Identified" and Oracle Database SQL Reference for more information about RELY constraints

The following sections describe:
Perform the following steps to find out whether adjusting the number of APPLIER processes will help you achieve greater throughput:
Determine if APPLIER processes are busy by issuing the following query:
SQL> SELECT COUNT(*) AS IDLE_APPLIER
  2> FROM V$LOGSTDBY_PROCESS
  3> WHERE TYPE = 'APPLIER' AND STATUS_CODE = 16116;

IDLE_APPLIER
-------------------------
           0
Once you are sure there are no idle APPLIER processes, issue the following query to ensure there is enough work available for additional APPLIER processes if you choose to increase their number:
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE 'transactions%';

NAME                  VALUE
--------------------- -------
transactions ready    27896
transactions applied  25671
These two statistics keep a cumulative total of transactions that are ready to be applied by the APPLIER processes and the number of transactions that have already been applied. If the difference (transactions ready - transactions applied) is higher than twice the number of available APPLIER processes, an improvement in throughput is possible if you increase the number of APPLIER processes; a combined check is sketched below.
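A single query can compute this backlog alongside the current number of APPLIER processes. This is a sketch that assumes the statistic names shown above; TO_NUMBER is used because the VALUE column is a string:
SQL> SELECT (SELECT TO_NUMBER(VALUE) FROM V$LOGSTDBY_STATS
  2          WHERE NAME = 'transactions ready')
  3       - (SELECT TO_NUMBER(VALUE) FROM V$LOGSTDBY_STATS
  4          WHERE NAME = 'transactions applied') AS BACKLOG,
  5         (SELECT COUNT(*) FROM V$LOGSTDBY_PROCESS
  6          WHERE TYPE = 'APPLIER') AS APPLIER_COUNT
  7  FROM DUAL;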
Note:
The number is a rough measure of ready work. The workload may be such that an interdependency between ready transactions will prevent additional available APPLIER processes from applying them. For instance, if the majority of the transactions that are ready to be applied are DDL transactions, adding more APPLIER processes will not result in a higher throughput.

To adjust the number of APPLIER processes to 20 from the default value of 5, perform the following steps:
Stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
Set the number of APPLY_SERVERS to 20:
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 20);
PL/SQL procedure successfully completed.
Start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Database altered.
In only rare cases do you need to adjust the number of PREPARER processes. Before you decide to increase the number of PREPARER processes, ensure the following conditions are true:
All PREPARER processes are busy
The number of transactions ready to be applied is less than the number of APPLIER processes available
There are idle APPLIER processes
The following steps show how to determine these conditions are true:
Ensure all PREPARER processes are busy:
SQL> SELECT COUNT(*) AS IDLE_PREPARER
  2> FROM V$LOGSTDBY_PROCESS
  3> WHERE TYPE = 'PREPARER' AND STATUS_CODE = 16116;

IDLE_PREPARER
-------------
            0
Ensure the number of transactions ready to be applied is less than the number of APPLIER processes:
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME LIKE 'transactions%';

NAME                  VALUE
--------------------- -------
transactions ready    27896
transactions applied  27892

SQL> SELECT COUNT(*) AS APPLIER_COUNT
  2> FROM V$LOGSTDBY_PROCESS WHERE TYPE = 'APPLIER';

APPLIER_COUNT
-------------
           20
Note: Issue this query several times to ensure this is not a transient event.
Ensure there are idle APPLIER processes:
SQL> SELECT COUNT(*) AS IDLE_APPLIER
  2> FROM V$LOGSTDBY_PROCESS
  3> WHERE TYPE = 'APPLIER' AND STATUS_CODE = 16116;

IDLE_APPLIER
-------------------------
          19
In the example, all conditions have been satisfied. Therefore, you can now increase the number of PREPARER processes to 4 (from the default value of 1) by performing the following steps:
Stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
Set the number of PREPARE_SERVERS to 4:
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('PREPARE_SERVERS', 4);
PL/SQL procedure successfully completed.
Start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Database altered.
For some workloads, SQL Apply may use a large number of pageout operations, thereby reducing the overall throughput of the system. To find out whether increasing memory allocated to LCR cache will be beneficial, perform the following steps:
Issue the following query to obtain a snapshot of pageout activity:
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS
  2> WHERE NAME LIKE '%page%' OR NAME LIKE '%uptime%' OR NAME LIKE '%idle%';

NAME                       VALUE
-------------------------- ---------------
coordinator uptime in secs 894856
bytes paged out            20000
seconds spent in pageout   2
system idle time in secs   1000
Issue the query again in 5 minutes:
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS
  2> WHERE NAME LIKE '%page%' OR NAME LIKE '%uptime%' OR NAME LIKE '%idle%';

NAME                       VALUE
-------------------------- ---------------
coordinator uptime in secs 895156
bytes paged out            1020000
seconds spent in pageout   100
system idle time in secs   1000
Compute the normalized pageout activity. For example:
Change in coordinator uptime (C) = (895156 – 894856) = 300 secs
Amount of additional idle time (I) = (1000 – 1000) = 0
Change in time spent in pageout (P) = (100 – 2) = 98 secs
Pageout time in comparison to uptime = P/(C-I) = 98/300 ~ 32.67%
Ideally, the pageout activity should not consume more than 5 percent of the total uptime. If you continue to take snapshots over an extended interval and you find the pageout activities continue to consume a significant portion of the apply time, increasing the memory size may provide some benefits. You can increase the memory allocated to SQL Apply by performing the following steps:
Stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
Set the memory allocated to the LCR cache (for this example, the LCR cache is set to 1 GB):
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 1024);
PL/SQL procedure successfully completed.
Because MAX_SGA is specified in megabytes (MB), increasing the memory to 1 GB is specified as 1024 (MB) in the example.
Start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Database altered.
By default, transactions are applied on the logical standby database in the exact order in which they were committed on the primary database. This default commit ordering allows any reporting application to run transparently on the logical standby database. However, there are times (such as after a prolonged outage of the logical standby database due to hardware failure or upgrade) when you want the logical standby database to catch up with the primary database and can tolerate not running the reporting applications for a while. In this case, you can change the default apply mode by performing the following steps:
Stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
Issue the following statement to allow transactions to be applied out of order from how they were committed on the primary database:
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('PRESERVE_COMMIT_ORDER', 'FALSE');
PL/SQL procedure successfully completed.
Start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Database altered.
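Before restoring the default mode, you may want to confirm that the standby database has in fact caught up. A minimal check, assuming the V$LOGSTDBY_PROGRESS columns shown earlier in this chapter (a gap at or near zero indicates the standby is caught up):
SQL> SELECT LATEST_SCN - APPLIED_SCN AS SCN_GAP FROM V$LOGSTDBY_PROGRESS;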
Once you have caught up with the primary database (verify this as shown above, or by querying the V$LOGSTDBY_STATS view) and you are ready to open the logical standby database for reporting applications, you can restore the default apply mode as follows:
Stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
Restore the default value for the PRESERVE_COMMIT_ORDER parameter:
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_UNSET('PRESERVE_COMMIT_ORDER');
PL/SQL procedure successfully completed.
Start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Database altered.
For a typical online transaction processing (OLTP) workload, the nondefault mode can provide a 50 percent or better throughput improvement over the default apply mode.