Oracle® Database Administrator's Guide 10g Release 2 (10.2) Part Number B14231-02 |
This chapter describes the various aspects of datafile and tempfile management, and contains the following topics:
See Also:
Part III, "Automated File and Storage Management" for information about creating datafiles and tempfiles that are both created and managed by the Oracle Database server

Datafiles are physical files of the operating system that store the data of all logical structures in the database. They must be explicitly created for each tablespace.
Note:
Tempfiles are a special class of datafiles that are associated only with temporary tablespaces. Information in this chapter applies to both datafiles and tempfiles except where differences are noted. Tempfiles are further described in "Creating a Locally Managed Temporary Tablespace".

Oracle Database assigns each datafile two associated file numbers, an absolute file number and a relative file number, that are used to uniquely identify it. These numbers are described in the following table:
Type of File Number | Description |
---|---|
Absolute | Uniquely identifies a datafile in the database. This file number can be used in many SQL statements that reference datafiles in place of using the file name. The absolute file number can be found in the FILE# column of the V$DATAFILE or V$TEMPFILE view, or in the FILE_ID column of the DBA_DATA_FILES or DBA_TEMP_FILES view. |
Relative | Uniquely identifies a datafile within a tablespace. For small and medium-sized databases, relative file numbers usually have the same value as the absolute file number. However, when the number of datafiles in a database exceeds a threshold (typically 1023), the relative file number differs from the absolute file number. In a bigfile tablespace, the relative file number is always 1024 (4096 on the OS/390 platform). |
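For example, both numbers can be displayed with a simple query (FILE#, RFILE#, and NAME are standard columns of the V$DATAFILE view):

```sql
-- Absolute (FILE#) and relative (RFILE#) file numbers for each datafile
SELECT FILE#, RFILE#, NAME
  FROM V$DATAFILE
 ORDER BY FILE#;
```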
This section describes aspects of managing datafiles, and contains the following topics:
At least one datafile is required for the SYSTEM and SYSAUX tablespaces of a database. Your database should contain several other tablespaces with their associated datafiles or tempfiles. The number of datafiles that you anticipate creating for your database can affect the settings of initialization parameters and the specification of CREATE DATABASE statement clauses.
Be aware that your operating system might impose limits on the number of datafiles contained in your Oracle Database. Also consider that the number of datafiles, and how and where they are allocated can affect the performance of your database.
Note:
One means of controlling the number of datafiles in your database and simplifying their management is to use bigfile tablespaces. Bigfile tablespaces comprise a single, very large datafile and are especially useful in ultra large databases and where a logical volume manager is used for managing operating system files. Bigfile tablespaces are discussed in "Bigfile Tablespaces".

Consider the following guidelines when determining the number of datafiles for your database.
When starting an Oracle Database instance, the DB_FILES initialization parameter indicates the amount of SGA space to reserve for datafile information and thus the maximum number of datafiles that can be created for the instance. This limit applies for the life of the instance. You can change the value of DB_FILES (by changing the initialization parameter setting), but the new value does not take effect until you shut down and restart the instance.

When determining a value for DB_FILES, take the following into consideration:

If the value of DB_FILES is too low, you cannot add datafiles beyond the DB_FILES limit without first shutting down the database.

If the value of DB_FILES is too high, memory is unnecessarily consumed.
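As a sketch, you might check the current setting and then raise it; the value 400 is illustrative, and the new value takes effect only after a restart:

```sql
-- Display the current limit
SHOW PARAMETER db_files

-- Raise the limit in the server parameter file; takes effect at next startup
ALTER SYSTEM SET DB_FILES = 400 SCOPE = SPFILE;

SHUTDOWN IMMEDIATE
STARTUP
```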
You can add datafiles to traditional smallfile tablespaces, subject to the following limitations:
Operating systems often impose a limit on the number of files a process can open simultaneously. More datafiles cannot be created when the operating system limit of open files is reached.
Operating systems impose limits on the number and size of datafiles.
The database imposes a maximum limit on the number of datafiles for any Oracle Database opened by any instance. This limit is operating system specific.
You cannot exceed the number of datafiles specified by the DB_FILES initialization parameter.

When you issue CREATE DATABASE or CREATE CONTROLFILE statements, the MAXDATAFILES parameter specifies the initial size of the datafile portion of the control file. However, if you attempt to add a new file whose number is greater than MAXDATAFILES, but less than or equal to DB_FILES, the control file expands automatically so that the datafiles section can accommodate more files.
The number of datafiles contained in a tablespace, and ultimately the database, can have an impact upon performance.
Oracle Database allows more datafiles in the database than the operating system defined limit. The database DBWn processes can open all online datafiles. Oracle Database is capable of treating open file descriptors as a cache, automatically closing files when the number of open file descriptors reaches the operating system-defined limit. This can have a negative performance impact. When possible, adjust the operating system limit on open file descriptors so that it is larger than the number of online datafiles in the database.
See Also:
Your operating system specific Oracle documentation for more information on operating system limits
Oracle Database SQL Reference for more information about the MAXDATAFILES parameter of the CREATE DATABASE or CREATE CONTROLFILE statement
When creating a tablespace, you should estimate the potential size of database objects and create sufficient datafiles. Later, if needed, you can create additional datafiles and add them to a tablespace to increase the total amount of disk space allocated to it, and consequently the database. Preferably, place datafiles on multiple devices to ensure that data is spread evenly across all devices.
Tablespace location is determined by the physical location of the datafiles that constitute that tablespace. Use the hardware resources of your computer appropriately.
For example, if several disk drives are available to store the database, consider placing potentially contending datafiles on separate disks. This way, when users query information, both disk drives can work simultaneously, retrieving data at the same time.
See Also:
Oracle Database Performance Tuning Guide for information about I/O and the placement of datafiles

Datafiles should not be stored on the same disk drive that stores the database redo log files. If the datafiles and redo log files are stored on the same disk drive and that disk drive fails, the files cannot be used in your database recovery procedures.
If you multiplex your redo log files, then the likelihood of losing all of your redo log files is low, so you can store datafiles on the same drive as some redo log files.
You can create datafiles and associate them with a tablespace using any of the statements listed in the following table. In all cases, you can either specify the file specifications for the datafiles being created, or you can use the Oracle-managed files feature to have the database server create and manage the files. The table includes a brief description of each statement, as used to create datafiles, and references the section of this book where use of the statement is specifically described:
SQL Statement | Description | Additional Information |
---|---|---|
CREATE TABLESPACE | Creates a tablespace and the datafiles that comprise it | "Creating Tablespaces" |
CREATE TEMPORARY TABLESPACE | Creates a locally managed temporary tablespace and the tempfiles (tempfiles are a special kind of datafile) that comprise it | "Creating a Locally Managed Temporary Tablespace" |
ALTER TABLESPACE ... ADD DATAFILE | Creates and adds a datafile to a tablespace | "Altering a Locally Managed Temporary Tablespace" |
ALTER TABLESPACE ... ADD TEMPFILE | Creates and adds a tempfile to a temporary tablespace | "Creating a Locally Managed Temporary Tablespace" |
CREATE DATABASE | Creates a database and associated datafiles | "Manually Creating an Oracle Database" |
ALTER DATABASE ... CREATE DATAFILE | Creates a new empty datafile in place of an old one; useful to re-create a datafile that was lost with no backup | See Oracle Database Backup and Recovery Advanced User's Guide |
If you add new datafiles to a tablespace and do not fully specify the filenames, the database creates the datafiles in the default database directory or the current directory, depending upon your operating system. Oracle recommends you always specify a fully qualified name for a datafile. Unless you want to reuse existing files, make sure the new filenames do not conflict with other files. Old files that have been previously dropped will be overwritten.
If a statement that creates a datafile fails, the database removes any created operating system files. However, because of the large number of potential errors that can occur with file systems and storage subsystems, there can be situations where you must manually remove the files using operating system commands.
This section describes the various ways to alter the size of a datafile, and contains the following topics:
You can create datafiles or alter existing datafiles so that they automatically increase in size when more space is needed in the database. The file size increases in specified increments up to a specified maximum.
Setting your datafiles to extend automatically provides these advantages:
Reduces the need for immediate intervention when a tablespace runs out of space
Ensures applications will not halt or be suspended because of failures to allocate extents
To determine whether a datafile is auto-extensible, query the DBA_DATA_FILES view and examine the AUTOEXTENSIBLE column.
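For example (AUTOEXTENSIBLE, INCREMENT_BY, and MAXBYTES are standard columns of DBA_DATA_FILES; INCREMENT_BY is expressed in database blocks):

```sql
-- List datafiles and their automatic extension settings
SELECT FILE_NAME, AUTOEXTENSIBLE, INCREMENT_BY, MAXBYTES
  FROM DBA_DATA_FILES;
```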
You can specify automatic file extension by specifying an AUTOEXTEND ON clause when you create datafiles using the following SQL statements:

CREATE DATABASE

ALTER DATABASE

CREATE TABLESPACE

ALTER TABLESPACE

You can enable or disable automatic file extension for existing datafiles, or manually resize a datafile, using the ALTER DATABASE statement. For a bigfile tablespace, you can perform these operations using the ALTER TABLESPACE statement.
The following example enables automatic extension for a datafile added to the users
tablespace:
ALTER TABLESPACE users
    ADD DATAFILE '/u02/oracle/rbdb1/users03.dbf' SIZE 10M
      AUTOEXTEND ON
      NEXT 512K
      MAXSIZE 250M;
The value of NEXT is the minimum size of the increments added to the file when it extends. The value of MAXSIZE is the maximum size to which the file can automatically extend.
The next example disables the automatic extension for the datafile.
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf' AUTOEXTEND OFF;
See Also:
Oracle Database SQL Reference for more information about the SQL statements for creating or altering datafiles

You can manually increase or decrease the size of a datafile using the ALTER DATABASE statement. This enables you to add more space to your database without adding more datafiles. This is beneficial if you are concerned about reaching the maximum number of datafiles allowed in your database.
For a bigfile tablespace, you can use the ALTER TABLESPACE statement to resize a datafile. You cannot add a datafile to a bigfile tablespace.
Manually reducing the sizes of datafiles enables you to reclaim unused space in the database. This is useful for correcting errors in estimates of space requirements.
In the next example, assume that the datafile /u02/oracle/rbdb1/stuff01.dbf has extended up to 250M. However, because its tablespace now stores smaller objects, the datafile can be reduced in size.

The following statement decreases the size of datafile /u02/oracle/rbdb1/stuff01.dbf:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' RESIZE 100M;
Note:
It is not always possible to decrease the size of a file to a specific value. It could be that the file contains data beyond the specified decreased size, in which case the database will return an error.

You can alter the availability of individual datafiles or tempfiles by taking them offline or bringing them online. Offline datafiles are unavailable to the database and cannot be accessed until they are brought back online.
Reasons for altering datafile availability include the following:
You want to perform an offline backup of a datafile.
You want to rename or relocate a datafile. You must first take it offline or take the tablespace offline.
The database has problems writing to a datafile and automatically takes the datafile offline. Later, after resolving the problem, you can bring the datafile back online manually.
A datafile becomes missing or corrupted. You must take it offline before you can open the database.
The datafiles of a read-only tablespace can be taken offline or brought online, but bringing a file online does not affect the read-only status of the tablespace. You cannot write to the datafile until the tablespace is returned to the read/write state.
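To see the current availability of each datafile or tempfile, you can query the STATUS column of the corresponding dynamic performance view, for example:

```sql
-- Datafile status (e.g. ONLINE, OFFLINE, SYSTEM, RECOVER)
SELECT FILE#, STATUS, NAME FROM V$DATAFILE;

-- Tempfile status
SELECT FILE#, STATUS, NAME FROM V$TEMPFILE;
```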
Note:
You can make all datafiles of a tablespace temporarily unavailable by taking the tablespace itself offline. You must leave these files in the tablespace to bring the tablespace back online, although you can relocate or rename them following procedures similar to those shown in "Renaming and Relocating Datafiles". For more information, see "Taking Tablespaces Offline".
To take a datafile offline or bring it online, you must have the ALTER DATABASE system privilege. To take all datafiles or tempfiles offline using the ALTER TABLESPACE statement, you must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege. In an Oracle Real Application Clusters environment, the database must be open in exclusive mode.
This section describes ways to alter datafile availability, and contains the following topics:
Bringing Datafiles Online or Taking Offline in ARCHIVELOG Mode
Altering the Availability of All Datafiles or Tempfiles in a Tablespace
To bring an individual datafile online, issue the ALTER DATABASE statement and include the DATAFILE clause. The following statement brings the specified datafile online:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' ONLINE;
To take the same file offline, issue the following statement:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' OFFLINE;
Note:
To use this form of the ALTER DATABASE statement, the database must be in ARCHIVELOG mode. This requirement prevents you from accidentally losing the datafile, since taking the datafile offline while in NOARCHIVELOG mode is likely to result in losing the file.

To take a datafile offline when the database is in NOARCHIVELOG mode, use the ALTER DATABASE statement with both the DATAFILE and OFFLINE FOR DROP clauses.
The OFFLINE keyword causes the database to mark the datafile OFFLINE, whether or not it is corrupted, so that you can open the database.

The FOR DROP keywords mark the datafile for subsequent dropping. Such a datafile can no longer be brought back online.
Note:
This operation does not actually drop the datafile. It remains in the data dictionary, and you must drop it yourself using one of the following methods:

An ALTER TABLESPACE ... DROP DATAFILE statement.

After an OFFLINE FOR DROP, this method works for dictionary managed tablespaces only.

A DROP TABLESPACE ... INCLUDING CONTENTS AND DATAFILES statement

If the preceding methods fail, an operating system command to delete the datafile. This is the least desirable method, as it leaves references to the datafile in the data dictionary and control files.
The following statement takes the specified datafile offline and marks it to be dropped:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf' OFFLINE FOR DROP;
Clauses of the ALTER TABLESPACE
statement allow you to change the online or offline status of all of the datafiles or tempfiles within a tablespace. Specifically, the statements that affect online/offline status are:
ALTER TABLESPACE ... DATAFILE {ONLINE|OFFLINE}

ALTER TABLESPACE ... TEMPFILE {ONLINE|OFFLINE}
You need enter only the tablespace name, not the individual datafiles or tempfiles. All of the datafiles or tempfiles are affected, but the online/offline status of the tablespace itself is not changed.
In most cases the preceding ALTER TABLESPACE statements can be issued whenever the database is mounted, even if it is not open. However, the database must not be open if the tablespace is the SYSTEM tablespace, an undo tablespace, or the default temporary tablespace. The ALTER DATABASE DATAFILE and ALTER DATABASE TEMPFILE statements also have ONLINE/OFFLINE clauses; however, in those statements you must enter all of the filenames for the tablespace.

This syntax is different from that of the ALTER TABLESPACE ... ONLINE|OFFLINE statement, which alters tablespace availability and is a different operation. The ALTER TABLESPACE statement takes datafiles offline as well as the tablespace, but it cannot be used to alter the status of a temporary tablespace or its tempfiles.
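For example, the following statements, using the users tablespace from earlier examples, take all of its datafiles offline and then bring them back online without changing the status of the tablespace itself:

```sql
ALTER TABLESPACE users DATAFILE OFFLINE;

ALTER TABLESPACE users DATAFILE ONLINE;
```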
You can rename datafiles to either change their names or relocate them. Some possible procedures for doing this are described in the following sections:
Procedures for Renaming and Relocating Datafiles in a Single Tablespace
Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces
When you rename and relocate datafiles with these procedures, only the pointers to the datafiles, as recorded in the database control file, are changed. The procedures do not physically rename any operating system files, nor do they copy files at the operating system level. Renaming and relocating datafiles involves several steps. Read the steps and examples carefully before performing these procedures.
This section suggests some procedures for renaming and relocating datafiles that can be used for a single tablespace. You must have the ALTER TABLESPACE system privilege.
See Also:
"Taking Tablespaces Offline" for more information about taking tablespaces offline in preparation for renaming or relocating datafiles

To rename datafiles in a single tablespace, complete the following steps:
Take the tablespace that contains the datafiles offline. The database must be open.
For example:
ALTER TABLESPACE users OFFLINE NORMAL;
Rename the datafiles using the operating system.
Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to change the filenames within the database.

For example, the following statement renames the datafiles /u02/oracle/rbdb1/user1.dbf and /u02/oracle/rbdb1/user2.dbf to /u02/oracle/rbdb1/users01.dbf and /u02/oracle/rbdb1/users02.dbf, respectively:

ALTER TABLESPACE users
    RENAME DATAFILE '/u02/oracle/rbdb1/user1.dbf',
                    '/u02/oracle/rbdb1/user2.dbf'
                 TO '/u02/oracle/rbdb1/users01.dbf',
                    '/u02/oracle/rbdb1/users02.dbf';

Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile name exactly as it appears in the DBA_DATA_FILES view of the data dictionary.
Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.
Here is a sample procedure for relocating a datafile.
Assume the following conditions:
An open database has a tablespace named users that is made up of datafiles all located on the same disk.

The datafiles of the users tablespace are to be relocated to different and separate disk drives.

You are currently connected with administrator privileges to the open database.

You have a current backup of the database.
Complete the following steps:
If you do not know the specific file names or sizes, you can obtain this information by issuing the following query of the data dictionary view DBA_DATA_FILES:

SQL> SELECT FILE_NAME, BYTES FROM DBA_DATA_FILES
  2> WHERE TABLESPACE_NAME = 'USERS';

FILE_NAME                                  BYTES
------------------------------------------ ----------------
/u02/oracle/rbdb1/users01.dbf              102400000
/u02/oracle/rbdb1/users02.dbf              102400000
Take the tablespace containing the datafiles offline:
ALTER TABLESPACE users OFFLINE NORMAL;
Copy the datafiles to their new locations and rename them using the operating system. You can copy the files using the DBMS_FILE_TRANSFER package discussed in "Copying Files Using the Database Server".
Note:
You can temporarily exit SQL*Plus to execute an operating system command to copy a file by using the SQL*Plus HOST command.

Rename the datafiles within the database.

The datafile pointers for the files that make up the users tablespace, recorded in the control file of the associated database, must now be changed from the old names to the new names.
Use the ALTER TABLESPACE...RENAME DATAFILE statement.

ALTER TABLESPACE users
    RENAME DATAFILE '/u02/oracle/rbdb1/users01.dbf',
                    '/u02/oracle/rbdb1/users02.dbf'
                 TO '/u03/oracle/rbdb1/users01.dbf',
                    '/u04/oracle/rbdb1/users02.dbf';
Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.
You can rename and relocate datafiles in one or more tablespaces using the ALTER DATABASE RENAME FILE statement. This method is the only choice if you want to rename or relocate datafiles of several tablespaces in one operation. You must have the ALTER DATABASE system privilege.
Note:
To rename or relocate datafiles of the SYSTEM tablespace, the default temporary tablespace, or the active undo tablespace, you must use this ALTER DATABASE method because you cannot take these tablespaces offline.

To rename datafiles in multiple tablespaces, follow these steps.
Ensure that the database is mounted but closed.
Note:
Alternatively, the database can remain open, but then the datafiles (or tempfiles) being renamed must be offline.

Copy the datafiles to be renamed to their new locations and new names, using the operating system. You can copy the files using the DBMS_FILE_TRANSFER package discussed in "Copying Files Using the Database Server".
Use ALTER DATABASE to rename the file pointers in the database control file.

For example, the following statement renames the datafiles /u02/oracle/rbdb1/sort01.dbf and /u02/oracle/rbdb1/user3.dbf to /u02/oracle/rbdb1/temp01.dbf and /u02/oracle/rbdb1/users03.dbf, respectively:

ALTER DATABASE RENAME FILE '/u02/oracle/rbdb1/sort01.dbf',
                           '/u02/oracle/rbdb1/user3.dbf'
                        TO '/u02/oracle/rbdb1/temp01.dbf',
                           '/u02/oracle/rbdb1/users03.dbf';

Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile names exactly as they appear in the DBA_DATA_FILES view.
Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.
You use the DROP DATAFILE and DROP TEMPFILE clauses of the ALTER TABLESPACE command to drop a single datafile or tempfile. The datafile must be empty. (A datafile is considered to be empty when no extents remain allocated from it.) When you drop a datafile or tempfile, references to the datafile or tempfile are removed from the data dictionary and control files, and the physical file is deleted from the file system or Automatic Storage Management (ASM) disk group.
The following example drops the datafile identified by the alias example_df3.f in the ASM disk group DGROUP1. The datafile belongs to the example tablespace.
ALTER TABLESPACE example DROP DATAFILE '+DGROUP1/example_df3.f';
The next example drops the tempfile lmtemp02.dbf, which belongs to the lmtemp tablespace.
ALTER TABLESPACE lmtemp DROP TEMPFILE '/u02/oracle/data/lmtemp02.dbf';
This is equivalent to the following statement:
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;
See Oracle Database SQL Reference for ALTER TABLESPACE syntax details.
Restrictions for Dropping Datafiles
The following are restrictions for dropping datafiles and tempfiles:
The database must be open.
If a datafile is not empty, it cannot be dropped.
If you must remove a datafile that is not empty and that cannot be made empty by dropping schema objects, you must drop the tablespace that contains the datafile.
You cannot drop the first or only datafile in a tablespace.
This means that DROP DATAFILE cannot be used with a bigfile tablespace.
You cannot drop datafiles in a read-only tablespace.
You cannot drop datafiles in the SYSTEM tablespace.
If a datafile in a locally managed tablespace is offline, it cannot be dropped.
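Before attempting to drop a datafile, you can verify that it is empty by checking for extents allocated from it. The following sketch uses a hypothetical file name; a count of zero means no extents remain allocated:

```sql
-- A datafile is empty when no extents remain allocated from it.
-- '/u02/oracle/data/users04.dbf' is an illustrative file name.
SELECT COUNT(*)
  FROM DBA_EXTENTS
 WHERE FILE_ID = (SELECT FILE_ID
                    FROM DBA_DATA_FILES
                   WHERE FILE_NAME = '/u02/oracle/data/users04.dbf');
```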
See Also:
Dropping Tablespaces

If you want to configure the database to use checksums to verify data blocks, set the initialization parameter DB_BLOCK_CHECKSUM to TRUE. This causes the DBWn process and the direct loader to calculate a checksum for each block and to store the checksum in the block header when writing the block to disk.
The checksum is verified when the block is read, but only if DB_BLOCK_CHECKSUM is TRUE and the last write of the block stored a checksum. If corruption is detected, the database returns message ORA-01578 and writes information about the corruption to the alert log.
and writes information about the corruption to the alert log.
The default value of DB_BLOCK_CHECKSUM is TRUE. The value of this parameter can be changed dynamically using the ALTER SYSTEM statement. Regardless of the setting of this parameter, checksums are always used to verify data blocks in the SYSTEM tablespace.
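For example, to check the current setting and enable block checksums dynamically for the running instance:

```sql
-- Display the current setting
SHOW PARAMETER db_block_checksum

-- Enable checksums without restarting the instance
ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TRUE;
```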
See Also:
Oracle Database Reference for more information about the DB_BLOCK_CHECKSUM initialization parameter

You do not necessarily have to use the operating system to copy a file within a database, or transfer a file between databases as you would do when using the transportable tablespace feature. You can use the DBMS_FILE_TRANSFER package, or you can use Streams propagation. Using Streams is not discussed in this book, but an example of using the DBMS_FILE_TRANSFER package is shown in "Copying a File on a Local File System".
The DBMS_FILE_TRANSFER package can use a local file system or an Automatic Storage Management (ASM) disk group as the source or destination for a file transfer. Only Oracle database files (datafiles, tempfiles, controlfiles, and so on) can be involved in transfers to and from ASM.
Caution:
Do not use the DBMS_FILE_TRANSFER package to copy or transfer a file that is being modified by a database, because doing so may result in an inconsistent file.

On UNIX systems, the owner of a file created by the DBMS_FILE_TRANSFER package is the owner of the shadow process running the instance. Normally, this owner is ORACLE. A file created using DBMS_FILE_TRANSFER is always writable and readable by all processes in the database, but nonprivileged users who need to read or write such a file directly may need access granted to them by a system administrator.
This section contains the following topics:
Advanced File Transfer Mechanisms
See Also:
Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_FILE_TRANSFER package.
This section includes an example that uses the COPY_FILE procedure in the DBMS_FILE_TRANSFER package to copy a file on a local file system. The following example copies a binary file named db1.dat from the /usr/admin/source directory to the /usr/admin/destination directory as db1_copy.dat on a local file system:
In SQL*Plus, connect as an administrative user who can grant privileges and create directory objects using SQL.
Use the SQL command CREATE DIRECTORY to create a directory object for the directory from which you want to copy the file. A directory object is similar to an alias for the directory. For example, to create a directory object called SOURCE_DIR for the /usr/admin/source directory on your computer system, execute the following statement:
CREATE DIRECTORY SOURCE_DIR AS '/usr/admin/source';
Use the SQL command CREATE DIRECTORY to create a directory object for the directory into which you want to copy the binary file. For example, to create a directory object called DEST_DIR for the /usr/admin/destination directory on your computer system, execute the following statement:
CREATE DIRECTORY DEST_DIR AS '/usr/admin/destination';
Grant the required privileges to the user who will run the COPY_FILE procedure. In this example, the strmadmin user runs the procedure.
GRANT EXECUTE ON DBMS_FILE_TRANSFER TO strmadmin;

GRANT READ ON DIRECTORY source_dir TO strmadmin;

GRANT WRITE ON DIRECTORY dest_dir TO strmadmin;
Connect as strmadmin user:
CONNECT strmadmin/strmadminpw
Run the COPY_FILE procedure to copy the file:
BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'SOURCE_DIR',
    source_file_name             => 'db1.dat',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'db1_copy.dat');
END;
/
Although the procedures in the DBMS_FILE_TRANSFER package typically are invoked as local procedure calls, they can also be invoked as remote procedure calls. A remote procedure call lets you copy a file within a database even when you are connected to a different database. For example, you can make a copy of a file on database DB, even if you are connected to another database, by executing the following remote procedure call:
DBMS_FILE_TRANSFER.COPY_FILE@DB(...)
Using remote procedure calls enables you to copy a file between two databases, even if you are not connected to either database. For example, you can connect to database A and then transfer a file from database B to database C. In this example, database A is the third party because it is neither the source of nor the destination for the transferred file.
A third-party file transfer can both push and pull a file. Continuing with the previous example, you can perform a third-party file transfer if you have a database link from A to either B or C, and that database has a database link to the other database. Database A does not need a database link to both B and C.
For example, if you have a database link from A to B, and another database link from B to C, then you can run the following procedure at A to transfer a file from B to C:
DBMS_FILE_TRANSFER.PUT_FILE@B(...)
This configuration pushes the file.
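Spelled out with its parameters, the push might look like the following sketch. The directory object names and file name are illustrative; the parameter names are those of DBMS_FILE_TRANSFER.PUT_FILE, and the destination_database value is a database link defined at B that points to C:

```sql
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE@B(
    source_directory_object      => 'SOURCE_DIR',  -- directory object at B
    source_file_name             => 'db1.dat',
    destination_directory_object => 'DEST_DIR',    -- directory object at C
    destination_file_name        => 'db1.dat',
    destination_database         => 'C');          -- database link from B to C
END;
/
```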
Alternatively, if you have a database link from A to C, and another database link from C to B, then you can run the following procedure at database A to transfer a file from B to C:
DBMS_FILE_TRANSFER.GET_FILE@C(...)
This configuration pulls the file.
You can use the DBMS_SCHEDULER package to transfer files automatically within a single database and between databases. Third-party file transfers are also supported by the DBMS_SCHEDULER package. You can monitor a long-running file transfer done by the Scheduler using the V$SESSION_LONGOPS dynamic performance view at the databases reading or writing the file. Any database links used by a Scheduler job must be fixed user database links.
You can use a restartable Scheduler job to improve the reliability of automated file transfers, especially if there are intermittent failures. If a file transfer fails before the destination file is closed, then you can restart the file transfer from the beginning once the database has removed any partially written destination file. Hence you should consider using a restartable Scheduler job to transfer a file if the rest of the job is restartable. See Chapter 27, "Using the Scheduler" for more information on Scheduler jobs.
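As a sketch, such a restartable job might be created as follows. The job name, database links, directory objects, and file names are illustrative; the job pulls a file from B to C via the link from the local database to C, and the restartable attribute is set through DBMS_SCHEDULER.SET_ATTRIBUTE:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'XFER_DB1',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN
                     DBMS_FILE_TRANSFER.GET_FILE@C(
                       source_directory_object      => ''SRC_DIR'',   -- at B
                       source_file_name             => ''db1.dat'',
                       source_database              => ''B'',         -- link from C to B
                       destination_directory_object => ''DEST_DIR'',  -- at C
                       destination_file_name        => ''db1.dat'');
                   END;',
    enabled    => FALSE);

  -- Allow the job to be restarted from the beginning after a failure
  DBMS_SCHEDULER.SET_ATTRIBUTE('XFER_DB1', 'restartable', TRUE);
  DBMS_SCHEDULER.ENABLE('XFER_DB1');
END;
/
```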
Note:
If a single restartable job transfers several files, then you should consider restart scenarios in which some of the files have been transferred already and some have not been transferred yet.

You can create more sophisticated file transfer mechanisms using both the DBMS_FILE_TRANSFER package and the DBMS_SCHEDULER package. For example, when several databases have a copy of the file you want to transfer, you can consider factors such as source availability, source load, and communication bandwidth to the destination database when deciding which source database to contact first and which source databases to try if failures occur. In this case, the information about these factors must be available to you, and you must create the mechanism that considers these factors.
As another example, when early completion time is more important than load, you can submit a number of Scheduler jobs to transfer files in parallel. As a final example, knowing something about file layout on the source and destination databases enables you to minimize disk contention by performing or scheduling simultaneous transfers only if they use different I/O devices.
In an environment where datafiles are simply file system files or are created directly on a raw device, it is relatively straightforward to see the association between a tablespace and the underlying device. Oracle Database provides views, such as DBA_TABLESPACES, DBA_DATA_FILES, and V$DATAFILE, that provide a mapping of files onto devices. These mappings, along with device statistics, can be used to evaluate I/O performance.

However, with the introduction of host-based Logical Volume Managers (LVM), and sophisticated storage subsystems that provide RAID (Redundant Array of Inexpensive Disks) features, it is not easy to determine file-to-device mapping. This poses a problem because it becomes difficult to determine your "hottest" files when they are hidden behind a "black box". This section presents the Oracle Database approach to resolving this problem.
The following topics are contained in this section:
Note:
This section presents an overview of the Oracle Database file mapping interface and explains how to use the DBMS_STORAGE_MAP package and dynamic performance views to expose the mapping of files onto physical devices. You can access this functionality more easily through Oracle Enterprise Manager (EM), which provides an easy-to-use graphical interface for mapping files to physical devices.

To acquire an understanding of I/O performance, you must have detailed knowledge of the storage hierarchy in which files reside. Oracle Database provides a mechanism to show a complete mapping of a file to intermediate layers of logical volumes to actual physical devices. This is accomplished through a set of dynamic performance views (V$ views). Using these views, you can locate the exact disk on which any block of a file resides.
To build these views, storage vendors must provide mapping libraries that are responsible for mapping their particular I/O stack elements. The database communicates with these libraries through an external non-Oracle Database process that is spawned by a background process called FMON. FMON is responsible for managing the mapping information. Oracle provides a PL/SQL package, DBMS_STORAGE_MAP, that you use to invoke mapping operations that populate the mapping views.
Note:
The file mapping interface is not available on Windows platforms.

This section describes the components of the Oracle Database file mapping interface and how the interface works. It contains the following topics:
The following figure shows the components of the file mapping mechanism.
The following sections briefly describe these components and how they work together to populate the mapping views:
FMON is a background process started by the database whenever the FILE_MAPPING initialization parameter is set to TRUE. FMON is responsible for:
Building mapping information, which is stored in the SGA. This information is composed of the following structures:
Files
File system extents
Elements
Subelements
These structures are explained in "Mapping Structures".
Refreshing mapping information when a change occurs because of:
Changes to datafiles (size)
Addition or deletion of datafiles
Changes to the storage configuration (not frequent)
Saving mapping information in the data dictionary to maintain a view of the information that is persistent across startup and shutdown operations
Restoring mapping information into the SGA at instance startup. This avoids the need for a potentially expensive complete rebuild of the mapping information on every instance startup.
You control this mapping using procedures in the DBMS_STORAGE_MAP package.
FMON spawns an external non-Oracle Database process called FMPUTL, which communicates directly with the vendor-supplied mapping libraries. This process obtains the mapping information through all levels of the I/O stack, assuming that mapping libraries exist for all levels. On some platforms the external process requires that the SETUID bit be set to ON because root privileges are needed to map through all levels of the I/O mapping stack.
The external process is responsible for discovering the mapping libraries and dynamically loading them into its address space.
Oracle Database uses mapping libraries to discover mapping information for the elements that are owned by a particular mapping library. These libraries communicate information about the individual I/O stack elements, and that information is used to populate dynamic performance views that can be queried by users.
Mapping libraries need to exist for all levels of the stack for the mapping to be complete, and different libraries may own their own parts of the I/O mapping stack. For example, a VERITAS VxVM library would own the stack elements related to the VERITAS Volume Manager, and an EMC library would own all EMC storage specific layers of the I/O mapping stack.
Mapping libraries are vendor-supplied; however, Oracle currently supplies a mapping library for EMC storage. The mapping libraries available to a database server are identified in a special file named filemap.ora.
The mapping structures and the Oracle Database representation of these structures are described in this section. You will need to understand this information in order to interpret the information in the mapping views.
The following are the primary structures that compose the mapping information:
Files
A file mapping structure provides a set of attributes for a file, including file size, number of file system extents that the file is composed of, and the file type.
File system extents
A file system extent mapping structure describes a contiguous chunk of blocks residing on one element. This includes the device offset, the extent size, the file offset, the type (data or parity), and the name of the element where the extent resides.
Note:
File system extents are not the same as Oracle Database extents. File system extents are physical contiguous blocks of data written to a device as managed by the file system. Oracle Database extents are logical structures managed by the database, such as tablespace extents.

Elements
An element mapping structure is the abstract mapping structure that describes a storage component within the I/O stack. Elements may be mirrors, stripes, partitions, RAID5, concatenated elements, and disks. These structures are the mapping building blocks.
Subelements
A subelement mapping structure describes the link between an element and the next elements in the I/O mapping stack. This structure contains the subelement number, size, the element name where the subelement exists, and the element offset.
All of these mapping structures are illustrated in the following example.
Consider an Oracle Database which is composed of two data files X and Y. Both files X and Y reside on a file system mounted on volume A. File X is composed of two extents while file Y is composed of only one extent.
The two extents of File X and the one extent of File Y both map to Element A. Element A is striped to Elements B and C. Element A maps to Elements B and C by way of Subelements B0 and C1, respectively.
Element B is a partition of Element D (a physical disk), and is mapped to Element D by way of subelement D0.
Element C is mirrored over Elements E and F (both physical disks), and is mirrored to those physical disks by way of Subelements E0 and F1, respectively.
All of the mapping structures are illustrated in Figure 9-2.
Figure 9-2 Illustration of Mapping Structures
Note that the mapping structures represented are sufficient to describe the entire mapping information for the Oracle Database instance and, consequently, to map every logical block within the file into an (element name, element offset) tuple (or more in the case of mirroring) at each level within the I/O stack.
The configuration ID captures the version information associated with elements or files. The vendor library provides the configuration ID and updates it whenever a change occurs. Without a configuration ID, there is no way for the database to tell whether the mapping has changed.
There are two kinds of configuration IDs:
Persistent

These configuration IDs are persistent across instance shutdown.

Non-persistent

These configuration IDs are not persistent across instance shutdown, so the database is capable of refreshing the mapping information only while the instance is up.
This section discusses how to use the Oracle Database file mapping interface. It contains the following topics:
The following steps enable the file mapping feature:
Ensure that a valid filemap.ora file exists in the /opt/ORCLfmap/prot1_32/etc directory for 32-bit platforms, or in the /opt/ORCLfmap/prot1_64/etc directory for 64-bit platforms.
Caution:
While the format and content of the filemap.ora file is discussed here, it is for informational reasons only. The filemap.ora file is created by the database when your system is installed. Until such time that vendors supply their own libraries, there will be only one entry in the filemap.ora file, and that is the Oracle-supplied EMC library. This file should be modified manually by uncommenting this entry only if an EMC Symmetrix array is available.

The filemap.ora file is the configuration file that describes all of the available mapping libraries. FMON requires that a filemap.ora file exists and that it points to a valid path to mapping libraries. Otherwise, it will not start successfully.
The following row needs to be included in filemap.ora for each library:
lib=vendor_name:mapping_library_path

where:

vendor_name should be Oracle for the EMC Symmetrix library

mapping_library_path is the full path of the mapping library
Note that the ordering of the libraries in this file is extremely important. The libraries are queried based on their order in the configuration file.
The file mapping service can be started even if no mapping libraries are available. The filemap.ora file still needs to be present, even if it is empty. In this case, the mapping service is constrained in the sense that new mapping information cannot be discovered. Only restore and drop operations are allowed in such a configuration.
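As a sketch under the format above, a filemap.ora file with the single Oracle-supplied EMC entry might look like the following. The library path shown is hypothetical; use the path actually installed on your system, and keep the entry commented out unless an EMC Symmetrix array is available.

```
# filemap.ora -- one lib= line per mapping library.
# Library order matters: libraries are queried in the order listed here.
# The path below is illustrative only.
lib=Oracle:/opt/ORCLfmap/prot1_32/lib/libfmap_emc.so
```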
Set the FILE_MAPPING initialization parameter to TRUE.

The instance does not have to be shut down to set this parameter. You can set it using the following ALTER SYSTEM statement:
ALTER SYSTEM SET FILE_MAPPING=TRUE;
Invoke the appropriate DBMS_STORAGE_MAP mapping procedure. You have two options:
In a cold startup scenario, the database has just been started and no mapping operation has been invoked yet. You execute the DBMS_STORAGE_MAP.MAP_ALL procedure to build the mapping information for the entire I/O subsystem associated with the database.
In a warm start scenario where the mapping information is already built, you have the option to invoke the DBMS_STORAGE_MAP.MAP_SAVE procedure to save the mapping information in the data dictionary. (Note that this procedure is invoked in DBMS_STORAGE_MAP.MAP_ALL() by default.) This forces all of the mapping information in the SGA to be flushed to disk.
Once you restart the database, use DBMS_STORAGE_MAP.RESTORE() to restore the mapping information into the SGA. If needed, DBMS_STORAGE_MAP.MAP_ALL() can be called to refresh the mapping information.
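The cold-start and warm-start flow described above can be sketched as follows in SQL*Plus; this assumes the procedure defaults are acceptable and that FILE_MAPPING is already TRUE.

```sql
-- Cold start: build mapping information for the entire I/O subsystem.
EXECUTE DBMS_STORAGE_MAP.MAP_ALL;

-- After a later instance restart: reload the saved mapping into the SGA,
-- then rebuild it if the storage configuration may have changed.
EXECUTE DBMS_STORAGE_MAP.RESTORE;
EXECUTE DBMS_STORAGE_MAP.MAP_ALL;
```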
The DBMS_STORAGE_MAP package enables you to control the mapping operations. The various procedures available to you are described in the following table.
Procedure | Use to:
---|---
MAP_OBJECT | Build the mapping information for the database object identified by object name, owner, and type
MAP_ELEMENT | Build mapping information for the specified element
MAP_FILE | Build mapping information for the specified filename
MAP_ALL | Build entire mapping information for all types of database files (excluding archive logs)
DROP_ELEMENT | Drop the mapping information for a specified element
DROP_FILE | Drop the file mapping information for the specified filename
DROP_ALL | Drop all mapping information in the SGA for this instance
SAVE | Save into the data dictionary the required information needed to regenerate the entire mapping
RESTORE | Load the entire mapping information from the data dictionary into the shared memory of the instance
LOCK_MAP | Lock the mapping information in the SGA for this instance
UNLOCK_MAP | Unlock the mapping information in the SGA for this instance
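For example, a single datafile can be mapped and its mapping later dropped with calls like the following; the file name is illustrative, and the 'DATAFILE' file type argument assumes the default parameter values are acceptable.

```sql
-- Hypothetical sketch: map one datafile, then drop its mapping from the SGA.
EXECUTE DBMS_STORAGE_MAP.MAP_FILE('/oracle/dbs/t_db1.f', 'DATAFILE');
EXECUTE DBMS_STORAGE_MAP.DROP_FILE('/oracle/dbs/t_db1.f');
```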
See Also:
Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_STORAGE_MAP package

"File Mapping Examples" for an example of using the DBMS_STORAGE_MAP package
Mapping information generated by the DBMS_STORAGE_MAP package is captured in dynamic performance views. Brief descriptions of these views are presented here.
View | Description
---|---
V$MAP_LIBRARY | Contains a list of all mapping libraries that have been dynamically loaded by the external process
V$MAP_FILE | Contains a list of all file mapping structures in the shared memory of the instance
V$MAP_FILE_EXTENT | Contains a list of all file system extent mapping structures in the shared memory of the instance
V$MAP_ELEMENT | Contains a list of all element mapping structures in the SGA of the instance
V$MAP_EXT_ELEMENT | Contains supplementary information for all element mapping structures
V$MAP_SUBELEMENT | Contains a list of all subelement mapping structures in the shared memory of the instance
V$MAP_COMP_LIST | Contains supplementary information for all element mapping structures
V$MAP_FILE_IO_STACK | The hierarchical arrangement of storage containers for the file, displayed as a series of rows. Each row represents a level in the hierarchy.
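As a quick sanity check after enabling file mapping, you might confirm which mapping libraries the external process has loaded:

```sql
-- Sketch: list the mapping libraries loaded by the external process.
SELECT * FROM V$MAP_LIBRARY;
```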
See Also:
Oracle Database Reference for a complete description of the dynamic performance views

However, the information generated by the DBMS_STORAGE_MAP.MAP_OBJECT procedure is captured in a global temporary table named MAP_OBJECT. This table displays the hierarchical arrangement of storage containers for objects. Each row in the table represents a level in the hierarchy. A description of the MAP_OBJECT table follows.
Column | Datatype | Description
---|---|---
OBJECT_NAME | VARCHAR2(2000) | Name of the object
OBJECT_OWNER | VARCHAR2(2000) | Owner of the object
OBJECT_TYPE | VARCHAR2(2000) | Object type
FILE_MAP_IDX | NUMBER | File index (corresponds to FILE_MAP_IDX in V$MAP_FILE)
DEPTH | NUMBER | Element depth within the I/O stack
ELEM_IDX | NUMBER | Index corresponding to element
CU_SIZE | NUMBER | Contiguous set of logical blocks of the file, in HKB (half KB) units, that is resident contiguously on the element
STRIDE | NUMBER | Number of HKB between contiguous units (CU) in the file that are contiguous on this element. Used in RAID5 and striped files.
NUM_CU | NUMBER | Number of contiguous units that are adjacent to each other on this element that are separated by STRIDE HKB in the file. In RAID5, the number of contiguous units also includes the parity stripes.
ELEM_OFFSET | NUMBER | Element offset in HKB units
FILE_OFFSET | NUMBER | Offset in HKB units from the start of the file to the first byte of the contiguous units
DATA_TYPE | VARCHAR2(2000) | Datatype (DATA, PARITY, or DATA AND PARITY)
PARITY_POS | NUMBER | Position of the parity. Only for RAID5. This field is needed to distinguish the parity from the data part.
PARITY_PERIOD | NUMBER | Parity period. Only for RAID5.
The following examples illustrate some of the powerful capabilities of the Oracle Database file mapping feature. These include:
The ability to map all the database files that span a particular device
The ability to map a particular file into its corresponding devices
The ability to map a particular database object, including its block distribution at all levels within the I/O stack
Consider an Oracle Database instance which is composed of two datafiles:
t_db1.f
t_db2.f
These files are created on a Solaris UFS file system mounted on a VERITAS VxVM host-based striped volume, /dev/vx/dsk/ipfdg/ipf-vol1, that consists of the following host devices as externalized from an EMC Symmetrix array:
/dev/vx/rdmp/c2t1d0s2
/dev/vx/rdmp/c2t1d1s2
Note that the following examples require the execution of a MAP_ALL() operation.

The following query returns all Oracle Database files associated with the /dev/vx/rdmp/c2t1d1s2 host device:
SELECT UNIQUE me.ELEM_NAME, mf.FILE_NAME
FROM V$MAP_FILE_IO_STACK fs, V$MAP_FILE mf, V$MAP_ELEMENT me
WHERE mf.FILE_MAP_IDX = fs.FILE_MAP_IDX
  AND me.ELEM_IDX = fs.ELEM_IDX
  AND me.ELEM_NAME = '/dev/vx/rdmp/c2t1d1s2';
The query results are:
ELEM_NAME                FILE_NAME
------------------------ --------------------------------
/dev/vx/rdmp/c2t1d1s2    /oracle/dbs/t_db1.f
/dev/vx/rdmp/c2t1d1s2    /oracle/dbs/t_db2.f
The following query displays a topological graph of the /oracle/dbs/t_db1.f datafile:
WITH fv AS
  (SELECT FILE_MAP_IDX, FILE_NAME FROM V$MAP_FILE
   WHERE FILE_NAME = '/oracle/dbs/t_db1.f')
SELECT fv.FILE_NAME, LPAD(' ', 4 * (LEVEL - 1)) || el.ELEM_NAME ELEM_NAME
FROM V$MAP_SUBELEMENT sb, V$MAP_ELEMENT el, fv,
  (SELECT UNIQUE ELEM_IDX FROM V$MAP_FILE_IO_STACK io, fv
   WHERE io.FILE_MAP_IDX = fv.FILE_MAP_IDX) fs
WHERE el.ELEM_IDX = sb.CHILD_IDX
  AND fs.ELEM_IDX = el.ELEM_IDX
START WITH sb.PARENT_IDX IN
  (SELECT DISTINCT ELEM_IDX FROM V$MAP_FILE_EXTENT fe, fv
   WHERE fv.FILE_MAP_IDX = fe.FILE_MAP_IDX)
CONNECT BY PRIOR sb.CHILD_IDX = sb.PARENT_IDX;
The resulting topological graph is:
FILE_NAME               ELEM_NAME
----------------------- -------------------------------------------------
/oracle/dbs/t_db1.f     _sym_plex_/dev/vx/rdsk/ipfdg/ipf-vol1_-1_-1
/oracle/dbs/t_db1.f         _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_0_0
/oracle/dbs/t_db1.f             /dev/vx/rdmp/c2t1d0s2
/oracle/dbs/t_db1.f                 _sym_symdev_000183600407_00C
/oracle/dbs/t_db1.f                     _sym_hyper_000183600407_00C_0
/oracle/dbs/t_db1.f                     _sym_hyper_000183600407_00C_1
/oracle/dbs/t_db1.f         _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_1_0
/oracle/dbs/t_db1.f             /dev/vx/rdmp/c2t1d1s2
/oracle/dbs/t_db1.f                 _sym_symdev_000183600407_00D
/oracle/dbs/t_db1.f                     _sym_hyper_000183600407_00D_0
/oracle/dbs/t_db1.f                     _sym_hyper_000183600407_00D_1
This example displays the block distribution at all levels within the I/O stack for the scott.bonus table.

A MAP_OBJECT() operation must first be executed as follows:
EXECUTE DBMS_STORAGE_MAP.MAP_OBJECT('BONUS','SCOTT','TABLE');
The query is as follows:
SELECT io.OBJECT_NAME o_name, io.OBJECT_OWNER o_owner, io.OBJECT_TYPE o_type,
       mf.FILE_NAME, me.ELEM_NAME, io.DEPTH,
       (SUM(io.CU_SIZE * (io.NUM_CU - DECODE(io.PARITY_PERIOD, 0, 0,
        TRUNC(io.NUM_CU / io.PARITY_PERIOD)))) / 2) o_size
FROM MAP_OBJECT io, V$MAP_ELEMENT me, V$MAP_FILE mf
WHERE io.OBJECT_NAME = 'BONUS'
  AND io.OBJECT_OWNER = 'SCOTT'
  AND io.OBJECT_TYPE = 'TABLE'
  AND me.ELEM_IDX = io.ELEM_IDX
  AND mf.FILE_MAP_IDX = io.FILE_MAP_IDX
GROUP BY io.ELEM_IDX, io.FILE_MAP_IDX, me.ELEM_NAME, mf.FILE_NAME,
         io.DEPTH, io.OBJECT_NAME, io.OBJECT_OWNER, io.OBJECT_TYPE
ORDER BY io.DEPTH;
The following is the result of the query. Note that the o_size column is expressed in KB.
O_NAME  O_OWNER  O_TYPE  FILE_NAME            ELEM_NAME                                       DEPTH  O_SIZE
------  -------  ------  -------------------  ----------------------------------------------  -----  ------
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  /dev/vx/dsk/ipfdg/ipf-vol1                          0      20
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  _sym_plex_/dev/vx/rdsk/ipfdg/ipf-vol1_-1_-1         1      20
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_1_0      2      12
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  _sym_subdisk_/dev/vx/rdsk/ipfdg/ipf-vol1_0_2_0      2       8
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  /dev/vx/rdmp/c2t1d1s2                               3      12
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  /dev/vx/rdmp/c2t1d2s2                               3       8
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  _sym_symdev_000183600407_00D                        4      12
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  _sym_symdev_000183600407_00E                        4       8
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  _sym_hyper_000183600407_00D_0                       5      12
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  _sym_hyper_000183600407_00D_1                       5      12
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  _sym_hyper_000183600407_00E_0                       6       8
BONUS   SCOTT    TABLE   /oracle/dbs/t_db1.f  _sym_hyper_000183600407_00E_1                       6       8
The following data dictionary views provide useful information about the datafiles of a database:
View | Description
---|---
DBA_DATA_FILES | Provides descriptive information about each datafile, including the tablespace to which it belongs and the file ID. The file ID can be used to join with other views for detail information.
DBA_EXTENTS, USER_EXTENTS | DBA view describes the extents comprising all segments in the database. Contains the file ID of the datafile containing the extent. USER view describes extents of the segments belonging to objects owned by the current user.
DBA_FREE_SPACE, USER_FREE_SPACE | DBA view lists the free extents in all tablespaces. Includes the file ID of the datafile containing the extent. USER view lists the free extents in the tablespaces accessible to the current user.
V$DATAFILE | Contains datafile information from the control file
V$DATAFILE_HEADER | Contains information from datafile headers
This example illustrates the use of one of these views, V$DATAFILE.
SELECT NAME, FILE#, STATUS, CHECKPOINT_CHANGE# "CHECKPOINT"
FROM V$DATAFILE;

NAME                             FILE#  STATUS   CHECKPOINT
-------------------------------- -----  -------  ----------
/u01/oracle/rbdb1/system01.dbf       1  SYSTEM         3839
/u02/oracle/rbdb1/temp01.dbf         2  ONLINE         3782
/u02/oracle/rbdb1/users03.dbf        3  OFFLINE        3782
FILE# lists the file number of each datafile; the first datafile in the SYSTEM tablespace created with the database is always file 1. STATUS lists other information about a datafile. If a datafile is part of the SYSTEM tablespace, its status is SYSTEM (unless it requires recovery). If a datafile in a non-SYSTEM tablespace is online, its status is ONLINE. If a datafile in a non-SYSTEM tablespace is offline, its status can be either OFFLINE or RECOVER. CHECKPOINT lists the final SCN (system change number) written for the most recent checkpoint of a datafile.
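A similar per-file view of checkpoint information is available from the datafile headers themselves; the following sketch uses only the FILE#, STATUS, and CHECKPOINT_CHANGE# columns of V$DATAFILE_HEADER.

```sql
-- Sketch: read checkpoint information recorded in each datafile header.
SELECT FILE#, STATUS, CHECKPOINT_CHANGE#
FROM   V$DATAFILE_HEADER;
```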
See Also:
Oracle Database Reference for complete descriptions of these views