12 External Tables Concepts

The external tables feature is a complement to existing SQL*Loader functionality. It enables you to access data in external sources as if it were in a table in the database.

Prior to Oracle Database 10g, external tables were read-only. However, as of Oracle Database 10g, external tables can also be written to. Note that SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table. See Behavior Differences Between SQL*Loader and External Tables for more information about how load behavior differs between SQL*Loader and external tables.

To use the external tables feature, you must have some knowledge of the file format and record format of the datafiles on your platform if the ORACLE_LOADER access driver is used and the datafiles are in text format. You must also know enough about SQL to be able to create an external table and perform queries against it.

This chapter discusses the following topics:

  • How Are External Tables Created?

  • Using External Tables to Load and Unload Data

  • Datatype Conversion During External Table Use

  • Parallel Access to External Tables

  • Performance Hints When Using External Tables

  • External Table Restrictions

  • Behavior Differences Between SQL*Loader and External Tables

How Are External Tables Created?

External tables are created using the SQL CREATE TABLE...ORGANIZATION EXTERNAL statement. When you create an external table, you specify the following attributes:

  • TYPE - specifies the type of external table. Each type is supported by its own access driver: ORACLE_LOADER (the default) loads external data into the database, and ORACLE_DATAPUMP can both load and unload data.

  • DEFAULT DIRECTORY - specifies the default directory object for files that are read or written by the external table.

  • ACCESS PARAMETERS - describe the external data source. Each access driver has its own access parameters.

  • LOCATION - specifies the datafiles for the external table.

The following example shows the use of each of these attributes:

SQL> CREATE TABLE emp_load
  2    (employee_number      CHAR(5),
  3     employee_dob         CHAR(20),
  4     employee_last_name   CHAR(20),
  5     employee_first_name  CHAR(15),
  6     employee_middle_name CHAR(15),
  7     employee_hire_date   DATE)
  8  ORGANIZATION EXTERNAL
  9    (TYPE ORACLE_LOADER
 10     DEFAULT DIRECTORY def_dir1
 11     ACCESS PARAMETERS
 12       (RECORDS DELIMITED BY NEWLINE
 13        FIELDS (employee_number      CHAR(2),
 14                employee_dob         CHAR(20),
 15                employee_last_name   CHAR(18),
 16                employee_first_name  CHAR(11),
 17                employee_middle_name CHAR(11),
 18                employee_hire_date   CHAR(10) date_format DATE mask "mm/dd/yyyy"
 19               )
 20       )
 21     LOCATION ('info.dat')
 22    );
 
Table created.

The information you provide through the access driver ensures that data from the data source is processed so that it matches the definition of the external table. The fields listed after CREATE TABLE emp_load are actually defining the metadata for the data in the info.dat source file. The access parameters are optional.

Access Parameters

When you create an external table of a particular type, you can specify access parameters to modify the default behavior of the access driver. Each access driver has its own syntax for access parameters.
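
For illustration, the following is a minimal sketch of an ORACLE_LOADER access parameters clause that overrides several defaults; the file names and the comma-delimited field format are assumptions, not part of the preceding example:

ACCESS PARAMETERS
  (RECORDS DELIMITED BY NEWLINE
   BADFILE 'empxt.bad'              -- rows that fail datatype conversion are written here
   LOGFILE 'empxt.log'              -- processing messages are written here
   FIELDS TERMINATED BY ','
   MISSING FIELD VALUES ARE NULL)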

Location of Datafiles and Output Files

The access driver runs inside the database server. This is different from SQL*Loader, which is a client program that sends the data to be loaded over to the server. This difference has the following implications:

  • The server must have access to any files to be loaded by the access driver.

  • The server must create and write the output files created by the access driver: the log file, bad file, and discard file, as well as any dump files created by the ORACLE_DATAPUMP access driver.

The access driver does not allow you to specify complete pathnames for files. This is because the server may have access to files that you do not, and allowing you to read that data would compromise security. Similarly, you might be able to overwrite a file that you would not normally have privileges to delete.

Instead, you are required to specify directory objects as the locations from which to read files and write files. A directory object maps a name to a directory name on the file system. For example, the following statement creates a directory object named ext_tab_dir that is mapped to a directory located at /usr/apps/datafiles.

CREATE DIRECTORY ext_tab_dir AS '/usr/apps/datafiles';

Directory objects can be created by DBAs or by any user with the CREATE ANY DIRECTORY privilege.
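
For example, a DBA could grant that privilege as follows (user scott is used purely for illustration):

GRANT CREATE ANY DIRECTORY TO scott;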

After a directory is created, the user creating the directory object needs to grant READ and WRITE privileges on the directory to other users. These privileges must be explicitly granted, rather than assigned through the use of roles. For example, to allow the server to read files on behalf of user scott in the directory named by ext_tab_dir, the user who created the directory object must execute the following command:

GRANT READ ON DIRECTORY ext_tab_dir TO scott;
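
If the access driver must also write output files (such as the log file or bad file) on behalf of scott, a corresponding WRITE grant is needed as well:

GRANT WRITE ON DIRECTORY ext_tab_dir TO scott;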

The name of the directory object can appear in the following places in a CREATE TABLE...ORGANIZATION EXTERNAL statement:

  • The DEFAULT DIRECTORY clause, which specifies the default directory to use for all input and output files that do not explicitly name a directory object.

  • The LOCATION clause, which lists all of the datafiles for the external table. The files are named in the form directory:file. The directory portion is optional. If it is missing, the default directory is used as the directory for the file.

  • The ACCESS PARAMETERS clause where output files are named. The files are named in the form directory:file. The directory portion is optional. If it is missing, the default directory is used as the directory for the file. Syntax in the access parameters enables you to indicate that a particular output file should not be created. This is useful if you do not care about the output files or if you do not have write access to any directory objects.

The SYS user is the only user that can own directory objects, but the SYS user can grant other users the privilege to create directory objects. Note that READ or WRITE permission to a directory object means only that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.
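
The following sketch illustrates these naming rules; it assumes the ext_tab_dir directory object created earlier, a hypothetical second directory object named log_dir, and hypothetical table and file names:

CREATE TABLE emp_ext (employee_number CHAR(5))
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY ext_tab_dir
   ACCESS PARAMETERS
     (RECORDS DELIMITED BY NEWLINE
      NOBADFILE                        -- do not create a bad file
      LOGFILE log_dir:'emp_ext.log'    -- write the log file under log_dir
      FIELDS TERMINATED BY ',')
   LOCATION (ext_tab_dir:'info.dat'));

Here NOBADFILE suppresses creation of the bad file, the log file is written through the log_dir directory object, and the datafile is named in the directory:file form.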

Example: Creating and Loading an External Table Using ORACLE_LOADER

The steps in this section show an example of using the ORACLE_LOADER access driver to create and load an external table. A traditional table named emp is defined along with an external table named emp_load. The external data is then loaded from emp_load into the emp table.

  1. Assume your .dat file looks as follows:

    56november, 15, 1980  baker             mary       alice     09/01/2004
    87december, 20, 1970  roper             lisa       marie     01/01/1999
    
    
  2. Execute the following SQL statements to set up a default directory (which contains the data source) and to grant access to it:

    CREATE DIRECTORY def_dir1 AS '/usr/apps/datafiles';
    GRANT READ ON DIRECTORY def_dir1 TO SCOTT;
    
    
  3. Create a traditional table named emp:

    CREATE TABLE emp (emp_no CHAR(6), last_name CHAR(25), first_name CHAR(20), middle_initial CHAR(1), hire_date DATE, dob DATE);
    
    
  4. Create an external table named emp_load:

    SQL> CREATE TABLE emp_load
      2    (employee_number      CHAR(5),
      3     employee_dob         CHAR(20),
      4     employee_last_name   CHAR(20),
      5     employee_first_name  CHAR(15),
      6     employee_middle_name CHAR(15),
      7     employee_hire_date   DATE)
      8  ORGANIZATION EXTERNAL
      9    (TYPE ORACLE_LOADER
     10     DEFAULT DIRECTORY def_dir1
     11     ACCESS PARAMETERS
     12       (RECORDS DELIMITED BY NEWLINE
     13        FIELDS (employee_number      CHAR(2),
     14                employee_dob         CHAR(20),
     15                employee_last_name   CHAR(18),
     16                employee_first_name  CHAR(11),
     17                employee_middle_name CHAR(11),
     18                employee_hire_date   CHAR(10) date_format DATE mask "mm/dd/yyyy"
     19               )
     20       )
     21     LOCATION ('info.dat')
     22    );
     
    Table created.
    
    
  5. Load the data from the external table emp_load into the table emp:

    SQL> INSERT INTO emp (emp_no,
      2                   first_name,
      3                   middle_initial,
      4                   last_name,
      5                   hire_date,
      6                   dob)
      7  (SELECT employee_number,
      8          employee_first_name,
      9          substr(employee_middle_name, 1, 1),
     10          employee_last_name,
     11          employee_hire_date,
     12          to_date(employee_dob,'month, dd, yyyy')
     13  FROM emp_load);
     
    2 rows created.
    
    
  6. Perform the following select operation to verify that the information in the .dat file was loaded into the emp table:

    SQL> SELECT * FROM emp;
     
    EMP_NO LAST_NAME                 FIRST_NAME           M HIRE_DATE DOB
    ------ ------------------------- -------------------- - --------- ---------
    56     baker                     mary                 a 01-SEP-04 15-NOV-80
    87     roper                     lisa                 m 01-JAN-99 20-DEC-70
     
    2 rows selected.
    
    

Notes about this example:

  • The employee_number field in the datafile is converted to a character string for the employee_number field in the external table.

  • The datafile contains an employee_dob field that is not loaded into any field in the table.

  • The substr function is used on the employee_middle_name column in the external table to generate the value for middle_initial in table emp.

  • The character string for employee_hire_date in info.dat is automatically converted into a DATE datatype at external table access time, using the format mask specified in the external table definition.

  • Unlike employee_hire_date, the DATE datatype conversion for employee_dob is done at SELECT time and is not part of the external table definition.

See Also:

Oracle Database SQL Reference for detailed information about the correct way to specify date and time formats

Using External Tables to Load and Unload Data

In the context of external tables, loading data refers to the act of reading data from an external table and loading it into a table in the database. Unloading data refers to the act of reading data from a table in the database and inserting it into an external table.

Note:

Data can only be unloaded using the ORACLE_DATAPUMP access driver.

Loading Data

When data is loaded, the data stream is read from the files specified by the LOCATION and DEFAULT DIRECTORY clauses. The INSERT statement generates a flow of data from the external data source to the Oracle SQL engine, where data is processed. As data from the external source is parsed by the access driver and provided to the external table interface, it is converted from its external representation to its Oracle internal datatype.

Unloading Data Using the ORACLE_DATAPUMP Access Driver

To unload data, you use the ORACLE_DATAPUMP access driver. The data stream that is unloaded is in a proprietary format and contains all the column data for every row being unloaded.

An unload operation also creates a metadata stream that describes the contents of the data stream. The information in the metadata stream is required for loading the data stream. Therefore, the metadata stream is written to the datafile and placed before the data stream.
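
As a minimal sketch, assuming the ext_tab_dir directory object created earlier and the emp table from the previous example (the external table and dump file names are hypothetical), an unload might look like this:

CREATE TABLE emp_unload
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_DATAPUMP
   DEFAULT DIRECTORY ext_tab_dir
   LOCATION ('emp_unload.dmp'))
  AS SELECT * FROM emp;

The resulting emp_unload.dmp file contains the metadata stream followed by the data stream and can later be named in the LOCATION clause of another external table to reload the data.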

Dealing with Column Objects

When the external table is accessed through a SQL statement, the fields of the external table can be used just like any other field in a normal table. In particular, the fields can be used as arguments for any SQL built-in function, PL/SQL function, or Java function. This enables you to manipulate the data from the external source.

Although external tables cannot contain a column object, you can use constructor functions to build a column object from attributes in the external table. For example, assume a table in the database is defined as follows:

CREATE TYPE student_type AS object (
student_no CHAR(5),
name CHAR(20))
/

CREATE TABLE roster (
  student student_type,
  grade CHAR(2));

Also assume there is an external table defined as follows:

CREATE TABLE roster_data (
  student_no CHAR(5),
  name CHAR(20),
  grade CHAR(2))
  ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_tab_dir
                         ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
                         LOCATION ('info.dat'));

To load table roster from roster_data, you would specify something similar to the following:

INSERT INTO roster (student, grade)
  (SELECT student_type(student_no, name), grade FROM roster_data);

Datatype Conversion During External Table Use

When data is moved into or out of an external table, it is possible that the same column will have a different datatype in each of the following three places:

  • The datafile, where the field that maps to the column has a datatype described by the access parameters

  • The external table, where the column has the datatype given in the CREATE TABLE...ORGANIZATION EXTERNAL statement

  • The database, where the column of the table being loaded or unloaded has its own datatype

Any conversion errors that occur between the datafile and the external table cause the row with the error to be ignored. Any errors between the external table and the column in the database (including conversion errors and constraint violations) cause the entire operation to terminate unsuccessfully.

When data is unloaded into an external table, data conversion occurs if the datatype of a column in the source table does not match the datatype of the column in the external table. If a conversion error occurs, then the datafile may not contain all the rows that were processed up to that point and the datafile will not be readable.

To avoid conversion errors causing the operation to fail, the datatype of the column in the external table should match the datatype of the column in the database. This is not always possible, because external tables do not support all datatypes. In these cases, the unsupported datatypes in the source table must be converted into a datatype that the external table can support. If, for example, a source table has a LONG column, the corresponding column in the external table must be a CLOB, and the SELECT subquery that is used to populate the external table must use the TO_LOB operator to load the column:

CREATE TABLE LONG_TAB_XT (LONG_COL CLOB) ORGANIZATION EXTERNAL...SELECT TO_LOB(LONG_COL) FROM LONG_TAB;
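
Because the statement above is abbreviated, the following is only an illustrative sketch of how such an unload might be written in full; the directory object ext_tab_dir and the dump file name long_tab.dmp are assumptions, and the CLOB type of long_col in the external table is derived from the TO_LOB operator:

CREATE TABLE long_tab_xt
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_DATAPUMP
   DEFAULT DIRECTORY ext_tab_dir
   LOCATION ('long_tab.dmp'))
  AS SELECT TO_LOB(long_col) long_col FROM long_tab;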

Parallel Access to External Tables

To enable external table support of parallel processing on the datafiles, use the PARALLEL clause when you create the external table. Each access driver supports parallel access slightly differently.
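
For example, a sketch of an ORACLE_LOADER external table created with a degree of parallelism of 4 might look like the following; the table, directory object, and datafile names are hypothetical:

CREATE TABLE emp_load_par
  (employee_number    CHAR(5),
   employee_last_name CHAR(20))
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_LOADER
   DEFAULT DIRECTORY def_dir1
   ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
                      FIELDS TERMINATED BY ',')
   LOCATION ('info1.dat', 'info2.dat'))
  PARALLEL 4;

The degree of parallelism can also be changed after creation, for example with an ALTER TABLE emp_load_par PARALLEL 2 statement.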

Parallel Access with ORACLE_LOADER

The ORACLE_LOADER access driver attempts to divide large datafiles into chunks that can be processed separately.

The following file, record, and data characteristics make it impossible for a file to be processed in parallel:

  • Sequential data sources (such as a tape drive or pipe)

  • Data in any multibyte character set whose character boundaries cannot be determined starting at an arbitrary byte in the middle of a string

    This restriction does not apply to any datafile with a fixed number of bytes per record.

  • Records with the VAR format

Specifying a PARALLEL clause is of value only when large amounts of data are involved.

Parallel Access with ORACLE_DATAPUMP

When you use the ORACLE_DATAPUMP access driver to unload data, the data is unloaded in parallel when the PARALLEL clause or parallel hint has been specified and when multiple locations have been specified for the external table.

Each parallel process writes to its own file. Therefore, the LOCATION clause should specify as many files as there are degrees of parallelism. If there are fewer files than the degree of parallelism specified, then the degree of parallelism will be limited to the number of files specified. If there are more files than the degree of parallelism specified, then the extra files will not be used.
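
For example, a sketch of a parallel unload with a degree of parallelism of 2, naming one dump file per parallel process (all names are hypothetical except the emp table from the earlier example), might look like this:

CREATE TABLE emp_unload_par
  ORGANIZATION EXTERNAL
  (TYPE ORACLE_DATAPUMP
   DEFAULT DIRECTORY ext_tab_dir
   LOCATION ('emp1.dmp', 'emp2.dmp'))
  PARALLEL 2
  AS SELECT * FROM emp;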

In addition to unloading data, the ORACLE_DATAPUMP access driver can also load data. Parallel processes can read multiple dump files or even chunks of the same dump file concurrently. Thus, data can be loaded in parallel even if there is only one dump file, as long as that file is large enough to contain multiple file offsets. This is because when the ORACLE_DATAPUMP access driver unloads data, it periodically remembers the offset into the dump file of the start of a new data chunk and writes that information into the file when the unload completes. For nonparallel loads, file offsets are ignored because only one process at a time can access a file. For parallel loads, file offsets are distributed among parallel processes for multiple concurrent processing on a file or within a set of files.

Performance Hints When Using External Tables

When you monitor performance, the most important measurement is the elapsed time for a load. Other important measurements are CPU usage, memory usage, and I/O rates.

You can alter performance by increasing or decreasing the degree of parallelism. The degree of parallelism indicates the number of access drivers that can be started to process the datafiles. The degree of parallelism enables you to choose on a scale between slower load with little resource usage and faster load with all resources utilized. The access driver cannot automatically tune itself, because it cannot determine how many resources you want to dedicate to the access driver.

An additional consideration is that the access drivers use large I/O buffers for better performance. On databases with shared servers, all memory used by the access drivers comes out of the system global area (SGA). For this reason, you should be careful when using external tables on shared servers. Note that for the ORACLE_LOADER access driver, you can use the READSIZE clause in the access parameters to specify the size of the buffers.
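
For example, a hypothetical ORACLE_LOADER access parameters fragment that raises the read buffer to 1 MB might look like this:

ACCESS PARAMETERS
  (RECORDS DELIMITED BY NEWLINE
   READSIZE 1048576              -- use a 1 MB buffer when reading the datafile
   FIELDS TERMINATED BY ',')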

Performance Hints Specific to the ORACLE_LOADER Access Driver

The information about performance provided in this section is specific to the ORACLE_LOADER access driver.

Performance can sometimes be increased with use of date cache functionality. By using the date cache to specify the number of unique dates anticipated during the load, you can reduce the number of date conversions done when many duplicate date or timestamp values are present in the input data. The date cache functionality provided by external tables is identical to the date cache functionality provided by SQL*Loader. See DATE_CACHE for a detailed description.
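
For example, a sketch that sizes the date cache for about 2000 distinct date values, reusing the hire-date field format from the earlier example, might look like this:

ACCESS PARAMETERS
  (RECORDS DELIMITED BY NEWLINE
   DATE_CACHE 2000               -- expect roughly 2000 unique date values in the input
   FIELDS (employee_hire_date CHAR(10) date_format DATE mask "mm/dd/yyyy"))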

In addition to changing the degree of parallelism and using the date cache to improve performance, consider the following information:

  • Fixed-length records are processed faster than records terminated by a string.

  • Fixed-length fields are processed faster than delimited fields.

  • Single-byte character sets are the fastest to process.

  • Fixed-width character sets are faster to process than varying-width character sets.

  • Byte-length semantics for varying-width character sets are faster to process than character-length semantics.

  • Single-character delimiters for record terminators and field delimiters are faster to process than multicharacter delimiters.

  • Having the character set in the datafile match the character set of the database is faster than a character set conversion.

  • Having datatypes in the datafile match the datatypes in the database is faster than datatype conversion.

  • Not writing rejected rows to a reject file is faster because of the reduced overhead.

  • Condition clauses (including WHEN, NULLIF, and DEFAULTIF) slow down processing.

  • The access driver takes advantage of multithreading to streamline the work as much as possible.

External Table Restrictions

This section lists what the external tables feature does not do and also describes some processing restrictions.

Restrictions Specific to the ORACLE_DATAPUMP Access Driver

In addition to the restrictions just described, the ORACLE_DATAPUMP access driver has the following restrictions:

  • Handling of byte-order marks during a load: In an external table load for which the datafile character set is UTF8 or UTF16, it is not possible to suppress checking for byte-order marks. Suppression of byte-order mark checking is necessary only if the beginning of the datafile contains binary data that matches the byte-order mark encoding. (It is possible to suppress byte-order mark checking with SQL*Loader loads.) Note that checking for a byte-order mark does not mean that a byte-order mark must be present in the datafile. If no byte-order mark is present, the byte order of the server platform is used.

  • The external tables feature does not support the use of the backslash (\) escape character within strings. See Use of the Backslash Escape Character.

Behavior Differences Between SQL*Loader and External Tables

This section describes important differences between loading data with external tables, using the ORACLE_LOADER access driver, as opposed to loading data with SQL*Loader conventional and direct path loads. This information does not apply to the ORACLE_DATAPUMP access driver.

Multiple Primary Input Datafiles

If there are multiple primary input datafiles with SQL*Loader loads, a bad file and a discard file are created for each input datafile. With external table loads, there is only one bad file and one discard file for all input datafiles. If parallel access drivers are used for the external table load, each access driver has its own bad file and discard file.

Syntax and Datatypes

The following are not supported with external table loads:

  • Use of CONTINUEIF or CONCATENATE to combine multiple physical records into a single logical record.

  • Loading of the following SQL*Loader datatypes: GRAPHIC, GRAPHIC EXTERNAL, and VARGRAPHIC

  • Use of the following database column types: LONGs, nested tables, VARRAYs, REFs, primary key REFs, and SIDs

Byte-Order Marks

With SQL*Loader, if a primary datafile uses a Unicode character set (UTF8 or UTF16) and it also contains a byte-order mark (BOM), then the byte-order mark is written at the beginning of the corresponding bad and discard files. With external table loads, the byte-order mark is not written at the beginning of the bad and discard files.

Default Character Sets, Date Masks, and Decimal Separator

For fields in a datafile, the settings of NLS environment variables on the client determine the default character set, date mask, and decimal separator. For fields in external tables, the database settings of the NLS parameters determine the default character set, date masks, and decimal separator.

Use of the Backslash Escape Character

In SQL*Loader, you can use the backslash (\) escape character to specify a single quotation mark as the enclosure character, as follows:

FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\''

In external tables, the use of the backslash escape character within a string raises an error. The workaround is to use double quotation marks to enclose the single quotation mark, as follows:

TERMINATED BY ',' ENCLOSED BY "'"