Oracle® Database Concepts 10g Release 2 (10.2) Part Number B14220-02
This chapter discusses the different types of database objects contained in a user's schema.
This chapter contains the following topics:
A schema is a collection of logical structures of data, or schema objects. A schema is owned by a database user and has the same name as that user. Each user owns a single schema. Schema objects can be created and manipulated with SQL and include the following types of objects:
Clusters
Database links
Database triggers
Dimensions
External procedure libraries
Indexes and index types
Java classes, Java resources, and Java sources
Materialized views and materialized view logs
Object tables, object types, and object views
Operators
Sequences
Stored functions, procedures, and packages
Synonyms
Tables and index-organized tables
Views
Other types of objects are also stored in the database and can be created and manipulated with SQL but are not contained in a schema:
Contexts
Directories
Profiles
Roles
Tablespaces
Users
Schema objects are logical data storage structures. Schema objects do not have a one-to-one correspondence to physical files on disk that store their information. However, Oracle stores a schema object logically within a tablespace of the database. The data of each object is physically contained in one or more of the tablespace's datafiles. For some objects, such as tables, indexes, and clusters, you can specify how much disk space Oracle allocates for the object within the tablespace's datafiles.
There is no relationship between schemas and tablespaces: a tablespace can contain objects from different schemas, and the objects for a schema can be contained in different tablespaces.
Figure 5-1 illustrates the relationship among objects, tablespaces, and datafiles.
Figure 5-1 Schema Objects, Tablespaces, and Datafiles
See Also:
Oracle Database Administrator's Guide

Tables are the basic unit of data storage in an Oracle database. Data is stored in rows and columns. You define a table with a table name (such as employees) and a set of columns. You give each column a column name (such as employee_id, last_name, and job_id), a datatype (such as VARCHAR2, DATE, or NUMBER), and a width. The width can be predetermined by the datatype, as in DATE. If columns are of the NUMBER datatype, define precision and scale instead of width. A row is a collection of column information corresponding to a single record.
You can specify rules for each column of a table. These rules are called integrity constraints. One example is a NOT NULL integrity constraint. This constraint forces the column to contain a value in every row.
You can also specify table columns for which data is encrypted before being stored in the datafile. Encryption prevents users from circumventing database access control mechanisms by looking inside datafiles directly with operating system tools.
After you create a table, insert rows of data using SQL statements. Table data can then be queried, deleted, or updated using SQL.
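For example, the following statements sketch that cycle for a hypothetical departments_demo table; the table and column names are illustrative only, and the NOT NULL clauses are integrity constraints of the kind described above:

-- Table and column names below are illustrative, not part of any sample schema.
CREATE TABLE departments_demo
  ( department_id    NUMBER(4)    NOT NULL,
    department_name  VARCHAR2(30) NOT NULL,
    location_id      NUMBER(4) );

INSERT INTO departments_demo (department_id, department_name, location_id)
  VALUES (280, 'Event Planning', 1700);

SELECT department_name FROM departments_demo WHERE department_id = 280;

UPDATE departments_demo SET location_id = 1800 WHERE department_id = 280;

DELETE FROM departments_demo WHERE department_id = 280;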
Figure 5-2 shows a sample table.
See Also:
Oracle Database Administrator's Guide for information on managing tables
Oracle Database Advanced Security Administrator's Guide for information on transparent data encryption
When you create a table, Oracle automatically allocates a data segment in a tablespace to hold the table's future data. You can control the allocation and use of space for a table's data segment in the following ways:
You can control the amount of space allocated to the data segment by setting the storage parameters for the data segment.
You can control the use of the free space in the data blocks that constitute the data segment's extents by setting the PCTFREE and PCTUSED parameters for the data segment.
Oracle stores data for a clustered table in the data segment created for the cluster instead of in a data segment in a tablespace. Storage parameters cannot be specified when a clustered table is created or altered. The storage parameters set for the cluster always control the storage of all tables in the cluster.
A table's data segment (or cluster data segment, when dealing with a clustered table) is created in either the table owner's default tablespace or in a tablespace specifically named in the CREATE TABLE statement.
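For example, the following hypothetical statement names the tablespace and sets storage and block-space parameters explicitly; the order_history table and the users tablespace are assumptions, and in a tablespace that uses automatic segment space management the PCTUSED setting is ignored:

-- Illustrative table and tablespace names.
CREATE TABLE order_history
  ( order_id  NUMBER(10),
    status    VARCHAR2(20) )
  TABLESPACE users
  PCTFREE 10    -- reserve 10% of each block for future updates
  PCTUSED 40    -- allow inserts again once a block falls below 40% used
  STORAGE (INITIAL 128K NEXT 128K);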
See Also:
"PCTFREE, PCTUSED, and Row Chaining"Oracle stores each row of a database table containing data for less than 256 columns as one or more row pieces. If an entire row can be inserted into a single data block, then Oracle stores the row as one row piece. However, if all of a row's data cannot be inserted into a single data block or if an update to an existing row causes the row to outgrow its data block, then Oracle stores the row using multiple row pieces. A data block usually contains only one row piece for each row. When Oracle must store a row in more than one row piece, it is chained across multiple blocks.
When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row.
Each row piece, chained or unchained, contains a row header and data for all or some of the row's columns. Individual columns can also span row pieces and, consequently, data blocks. Figure 5-3 shows the format of a row piece:
The row header precedes the data and contains information about:
Row pieces
Chaining (for chained row pieces only)
Columns in the row piece
Cluster keys (for clustered data only)
A row fully contained in one block has at least 3 bytes of row header. After the row header information, each row contains column length and data. The column length requires 1 byte for columns that store 250 bytes or less, or 3 bytes for columns that store more than 250 bytes, and precedes the column data. Space required for column data depends on the datatype. If the datatype of a column is variable length, then the space required to hold a value can grow and shrink with updates to the data.
To conserve space, a null in a column only stores the column length (zero). Oracle does not store data for the null column. Also, for trailing null columns, Oracle does not even store the column length.
Clustered rows contain the same information as nonclustered rows. In addition, they contain information that references the cluster key to which they belong.
See Also:
Oracle Database Administrator's Guide for more information about clustered rows and tables
The rowid identifies each row piece by its location or address. After they are assigned, a given row piece retains its rowid until the corresponding row is deleted or exported and imported using Oracle utilities. For clustered tables, if the cluster key values of a row change, then the row keeps the same rowid but also gets an additional pointer rowid for the new values.
Because rowids are constant for the lifetime of a row piece, it is useful to reference rowids in SQL statements such as SELECT, UPDATE, and DELETE.
See Also:
"Physical Rowids"The column order is the same for all rows in a given table. Columns are usually stored in the order in which they were listed in the CREATE
TABLE
statement, but this is not guaranteed. For example, if a table has a column of datatype LONG
, then Oracle always stores this column last. Also, if a table is altered so that a new column is added, then the new column becomes the last column stored.
In general, try to place columns that frequently contain nulls last so that rows take less space. Note, though, that if the table you are creating includes a LONG
column as well, then the benefits of placing frequently null columns last are lost.
Oracle's table compression feature compresses data by eliminating duplicate values in a database block. Compressed data stored in a database block (also known as disk page) is self-contained. That is, all the information needed to re-create the uncompressed data in a block is available within that block. Duplicate values in all the rows and columns in a block are stored once at the beginning of the block, in what is called a symbol table for that block. All occurrences of such values are replaced with a short reference to the symbol table.
With the exception of a symbol table at the beginning, compressed database blocks look very much like regular database blocks. All database features and functions that work on regular database blocks also work on compressed database blocks.
Database objects that can be compressed include tables and materialized views. For partitioned tables, you can choose to compress some or all partitions. Compression attributes can be declared for a tablespace, a table, or a partition of a table. If declared at the tablespace level, then all tables created in that tablespace are compressed by default. You can alter the compression attribute for a table (or a partition or tablespace), and the change only applies to new data going into that table. As a result, a single table or partition may contain some compressed blocks and some regular blocks. This guarantees that data size will not increase as a result of compression; in cases where compression could increase the size of a block, it is not applied to that block.
Compression occurs while data is being bulk inserted or bulk loaded. These operations include:
Direct path SQL*Loader
CREATE TABLE ... AS SELECT statements
Parallel INSERT (or serial INSERT with an APPEND hint) statements
Existing data in the database can also be compressed by moving it into compressed form through ALTER TABLE ... MOVE statements. This operation takes an exclusive lock on the table, and therefore prevents any updates and loads until it completes. If this is not acceptable, then Oracle's online redefinition utility (the DBMS_REDEFINITION PL/SQL package) can be used.
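As an illustration, and assuming a hypothetical sales_history table, compressed data could be produced by either of the following; the first statement bulk-loads through CREATE TABLE ... AS SELECT, and the second rewrites the existing blocks in compressed form:

-- sales_history is an illustrative table name.
CREATE TABLE sales_history_comp COMPRESS AS
  SELECT * FROM sales_history;

ALTER TABLE sales_history MOVE COMPRESS;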
Data compression works for all datatypes except for all variants of LOBs and datatypes derived from LOBs, such as VARRAYs stored out of line or the XML datatype stored in a CLOB.
Table compression is done as part of bulk loading data into the database. The overhead associated with compression is most visible at that time. This is the primary trade-off that needs to be taken into account when considering compression.
Compressed tables or partitions can be modified the same as other Oracle tables or partitions. For example, data can be modified using INSERT, UPDATE, and DELETE statements. However, data modified without using bulk insertion or bulk loading techniques is not compressed. Deleting compressed data is as fast as deleting uncompressed data. Inserting new data is also as fast, because data is not compressed in the case of conventional INSERT; it is compressed only during bulk load. Updating compressed data can be slower in some cases. For these reasons, compression is more suitable for data warehousing applications than OLTP applications. Data should be organized such that read-only or infrequently changing portions of the data (for example, historical data) are kept compressed.
A null is the absence of a value in a column of a row. Nulls indicate missing, unknown, or inapplicable data. A null should not be used to imply any other value, such as zero. A column allows nulls unless a NOT NULL or PRIMARY KEY integrity constraint has been defined for the column, in which case no row can be inserted without a value for that column.
Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns, the columns more likely to contain nulls should be defined last to conserve disk space.
Most comparisons between nulls and other values are by definition neither true nor false, but unknown. To identify nulls in SQL, use the IS NULL predicate. Use the SQL function NVL to convert nulls to non-null values.
Nulls are not indexed, except when the cluster key column value is null or the index is a bitmap index.
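For example, the following query against the employees table used elsewhere in this chapter returns employees who have no manager and substitutes 0 for a null commission:

SELECT employee_id, last_name, NVL(commission_pct, 0) AS commission_pct
  FROM employees
 WHERE manager_id IS NULL;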
See Also:
Oracle Database SQL Reference for comparisons using IS NULL and the NVL function
You can assign a default value to a column of a table so that when a new row is inserted and a value for the column is omitted or the keyword DEFAULT is supplied, a default value is supplied automatically. Default column values work as though an INSERT statement actually specifies the default value.
The datatype of the default literal or expression must match or be convertible to the column datatype.
If a default value is not explicitly defined for a column, then the default for the column is implicitly set to NULL.
Integrity constraint checking occurs after the row with a default value is inserted. For example, in Figure 5-4, a row is inserted into the emp table that does not include a value for the employee's department number. Because no value is supplied for the department number, Oracle inserts the deptno column's default value of 20. After inserting the default value, Oracle checks the FOREIGN KEY integrity constraint defined on the deptno column.
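A minimal sketch of that scenario follows; the emp and dept definitions are abbreviated and the column lists are illustrative:

-- Abbreviated, illustrative definitions.
CREATE TABLE dept
  ( deptno  NUMBER(2) PRIMARY KEY,
    dname   VARCHAR2(14) );

CREATE TABLE emp
  ( empno   NUMBER(4) PRIMARY KEY,
    ename   VARCHAR2(10),
    deptno  NUMBER(2) DEFAULT 20 REFERENCES dept (deptno) );

INSERT INTO dept VALUES (20, 'RESEARCH');

-- No deptno is supplied, so Oracle stores the default of 20 and then
-- checks the FOREIGN KEY constraint against dept.
INSERT INTO emp (empno, ename) VALUES (7999, 'BLAKE');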
See Also:
Chapter 21, "Data Integrity" for more information about integrity constraintsPartitioned tables allow your data to be broken down into smaller, more manageable pieces called partitions, or even subpartitions. Indexes can be partitioned in similar fashion. Each partition can be managed individually, and can operate independently of the other partitions, thus providing a structure that can be better tuned for availability and performance.
Note:
To reduce disk use and memory use (specifically, the buffer cache), you can store tables and partitioned tables in a compressed format inside the database. This often leads to a better scaleup for read-only operations. Table compression can also speed up query execution. There is, however, a slight cost in CPU overhead.

You can create a table with a column whose datatype is another table. That is, tables can be nested within other tables as values in a column. The Oracle database server stores nested table data out of line from the rows of the parent table, using a store table that is associated with the nested table column. The parent row contains a unique set identifier value associated with a nested table instance.
In addition to permanent tables, Oracle can create temporary tables to hold session-private data that exists only for the duration of a transaction or session.
The CREATE GLOBAL TEMPORARY TABLE statement creates a temporary table that can be transaction-specific or session-specific. For transaction-specific temporary tables, data exists for the duration of the transaction. For session-specific temporary tables, data exists for the duration of the session. Data in a temporary table is private to the session. Each session can only see and modify its own data. DML locks are not acquired on the data of the temporary tables. The LOCK statement has no effect on a temporary table, because each session has its own private data.
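The ON COMMIT clause controls which scope applies, as in the following sketch; the table names are illustrative:

-- Transaction-specific: rows disappear at COMMIT or ROLLBACK.
CREATE GLOBAL TEMPORARY TABLE order_staging
  ( order_id  NUMBER,
    amount    NUMBER )
  ON COMMIT DELETE ROWS;

-- Session-specific: rows persist until the session ends.
CREATE GLOBAL TEMPORARY TABLE session_totals
  ( region  VARCHAR2(20),
    total   NUMBER )
  ON COMMIT PRESERVE ROWS;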
A TRUNCATE statement issued on a session-specific temporary table truncates data in its own session. It does not truncate the data of other sessions that are using the same table.
DML statements on temporary tables do not generate redo logs for the data changes. However, undo logs for the data and redo logs for the undo logs are generated. Data from the temporary table is automatically dropped in the case of session termination, either when the user logs off or when the session terminates abnormally such as during a session or instance failure.
You can create indexes for temporary tables using the CREATE INDEX statement. Indexes created on temporary tables are also temporary, and the data in the index has the same session or transaction scope as the data in the temporary table.
You can create views that access both temporary and permanent tables. You can also create triggers on temporary tables.
Oracle utilities can export and import the definition of a temporary table. However, no data rows are exported even if you use the ROWS clause. Similarly, you can replicate the definition of a temporary table, but you cannot replicate its data.
Temporary tables use temporary segments. Unlike permanent tables, temporary tables and their indexes do not automatically allocate a segment when they are created. Instead, segments are allocated when the first INSERT (or CREATE TABLE AS SELECT) is performed. This means that if a SELECT, UPDATE, or DELETE is performed before the first INSERT, then the table appears to be empty.
You can perform DDL statements (ALTER TABLE, DROP TABLE, CREATE INDEX, and so on) on a temporary table only when no session is currently bound to it. A session gets bound to a temporary table when an INSERT is performed on it. The session gets unbound by a TRUNCATE, at session termination, or by doing a COMMIT or ROLLBACK for a transaction-specific temporary table.
Temporary segments are deallocated at the end of the transaction for transaction-specific temporary tables and at the end of the session for session-specific temporary tables.
See Also:
"Extents in Temporary Segments"Transaction-specific temporary tables are accessible by user transactions and their child transactions. However, a given transaction-specific temporary table cannot be used concurrently by two transactions in the same session, although it can be used by transactions in different sessions.
If a user transaction does an INSERT into the temporary table, then none of its child transactions can use the temporary table afterward.
If a child transaction does an INSERT into the temporary table, then at the end of the child transaction, the data associated with the temporary table goes away. After that, either the user transaction or any other child transaction can access the temporary table.
External tables access data in external sources as if it were in a table in the database. You can connect to the database and create metadata for the external table using DDL. The DDL for an external table consists of two parts: one part that describes the Oracle column types, and another part (the access parameters) that describes the mapping of the external data to the Oracle data columns.
An external table does not describe any data that is stored in the database. Nor does it describe how data is stored in the external source. Instead, it describes how the external table layer needs to present the data to the server. It is the responsibility of the access driver and the external table layer to do the necessary transformations required on the data in the datafile so that it matches the external table definition.
External tables are read only; therefore, no DML operations are possible, and no index can be created on them.
When you create an external table, you specify its type. Each type of external table has its own access driver that provides access parameters unique to that type of external table. The access driver ensures that data from the data source is processed so that it matches the definition of the external table.
In the context of external tables, loading data refers to the act of reading data from an external table and loading it into a table in the database. Unloading data refers to the act of reading data from a table in the database and inserting it into an external table.
The default type for external tables is ORACLE_LOADER, which lets you read table data from an external table and load it into a database. Oracle also provides the ORACLE_DATAPUMP type, which lets you unload data (that is, read data from a table in the database and insert it into an external table) and then reload it into an Oracle database.
The definition of an external table is kept separately from the description of the data in the data source. This means that:
The source file can contain more or fewer fields than there are columns in the external table
The datatypes for fields in the data source can be different from the columns in the external table
The main use for external tables is to use them as a row source for loading data into an actual table in the database. After you create an external table, you can then use a CREATE TABLE AS SELECT or INSERT INTO ... SELECT statement, using the external table as the source of the SELECT clause.
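The following sketch shows the pattern, assuming an operating system directory /u01/ext_data that contains a comma-delimited file sales.csv and an ordinary sales table with matching columns; all of these names are assumptions:

-- Directory, file, table, and column names are illustrative.
CREATE DIRECTORY ext_dir AS '/u01/ext_data';

CREATE TABLE sales_ext
  ( sale_id  NUMBER,
    amount   NUMBER )
  ORGANIZATION EXTERNAL
  ( TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS
    ( RECORDS DELIMITED BY NEWLINE
      FIELDS TERMINATED BY ',' )
    LOCATION ('sales.csv') );

-- Load the external data into an ordinary table.
INSERT INTO sales SELECT * FROM sales_ext;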
Note:
You cannot insert data into external tables or update records in them; external tables are read only.

When you access the external table through a SQL statement, the fields of the external table can be used just like any other field in a regular table. In particular, you can use the fields as arguments for any SQL built-in function, PL/SQL function, or Java function. This lets you manipulate data from the external source. For data warehousing, you can do more sophisticated transformations in this way than you can with simple datatype conversions. You can also use this mechanism in data warehousing to do data cleansing.
While external tables cannot contain a column object, constructor functions can be used to build a column object from attributes in the external table.
After the metadata for an external table is created, you can query the external data directly and in parallel, using SQL. As a result, the external table acts as a view, which lets you run any SQL query against external data without loading the external data into the database.
The degree of parallel access to an external table is specified using standard parallel hints and with the PARALLEL clause. Using parallelism on an external table allows for concurrent access to the datafiles that comprise an external table. Whether a single file is accessed concurrently depends on the access driver implementation and on attributes of the datafiles being accessed (for example, record formats).
See Also:
Oracle Database Administrator's Guide for information about managing external tables, external connections, and directories
Oracle Database Performance Tuning Guide for information about tuning loads from external tables
Oracle Database Utilities for information about external tables and import and export
Oracle Database SQL Reference for information about creating and querying external tables
A view is a tailored presentation of the data contained in one or more tables or other views. A view takes the output of a query and treats it as a table. Therefore, a view can be thought of as a stored query or a virtual table. You can use views in most places where a table can be used.
For example, the employees table has several columns and numerous rows of information. If you want users to see only five of these columns or only specific rows, then you can create a view of that table for other users to access.

Figure 5-5 shows an example of a view called staff derived from the base table employees. Notice that the view shows only five of the columns in the base table.
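A definition along the following lines would produce such a view; the exact column list in Figure 5-5 is not reproduced here, so these five columns are illustrative:

CREATE VIEW staff AS
  SELECT employee_id, last_name, job_id, manager_id, department_id
    FROM employees;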
Because views are derived from tables, they have many similarities. For example, you can define views with up to 1000 columns, just like a table. You can query views, and with some restrictions you can update, insert into, and delete from views. All operations performed on a view actually affect data in some base table of the view and are subject to the integrity constraints and triggers of the base tables.
You cannot explicitly define triggers on views, but you can define them for the underlying base tables referenced by the view. Oracle does support definition of logical constraints on views.
See Also:
Oracle Database SQL Reference

Unlike a table, a view is not allocated any storage space, nor does a view actually contain data. Rather, a view is defined by a query that extracts or derives data from the tables that the view references. These tables are called base tables. Base tables can in turn be actual tables or can be views themselves (including materialized views). Because a view is based on other objects, a view requires no storage other than storage for the definition of the view (the stored query) in the data dictionary.
Views provide a means to present a different representation of the data that resides within the base tables. Views are very powerful because they let you tailor the presentation of data to different types of users. Views are often used to:
Provide an additional level of table security by restricting access to a predetermined set of rows or columns of a table
For example, Figure 5-5 shows how the STAFF view does not show the salary or commission_pct columns of the base table employees.
Hide data complexity
For example, a single view can be defined with a join, which is a collection of related columns or rows in multiple tables. However, the view hides the fact that this information actually originates from several tables.
Simplify statements for the user
For example, views allow users to select information from multiple tables without actually knowing how to perform a join.
Present the data in a different perspective from that of the base table
For example, the columns of a view can be renamed without affecting the tables on which the view is based.
Isolate applications from changes in definitions of base tables
For example, if a view's defining query references three columns of a four column table, and a fifth column is added to the table, then the view's definition is not affected, and all applications using the view are not affected.
Express a query that cannot be expressed without using a view
For example, a view can be defined that joins a GROUP BY view with a table, or a view can be defined that joins a UNION view with a table.
Save complex queries
For example, a query can perform extensive calculations with table information. By saving this query as a view, you can perform the calculations each time the view is queried.
Oracle stores a view's definition in the data dictionary as the text of the query that defines the view. When you reference a view in a SQL statement, Oracle:
Merges the statement that references the view with the query that defines the view
Parses the merged statement in a shared SQL area
Executes the statement
Oracle parses a statement that references a view in a new shared SQL area only if no existing shared SQL area contains a similar statement. Therefore, you get the benefit of reduced memory use associated with shared SQL when you use views.
When Oracle evaluates views containing string literals or SQL functions that have globalization support parameters as arguments (such as TO_CHAR, TO_DATE, and TO_NUMBER), Oracle takes default values for these parameters from the globalization support parameters for the session. You can override these default values by specifying globalization support parameters explicitly in the view definition.
See Also:
Oracle Database Globalization Support Guide for information about globalization support

Oracle determines whether to use indexes for a query against a view by transforming the original query when merging it with the view's defining query.
Consider the following view:
CREATE VIEW employees_view AS
  SELECT employee_id, last_name, salary, location_id
  FROM employees JOIN departments USING (department_id)
  WHERE departments.department_id = 10;
Now consider the following user-issued query:
SELECT last_name FROM employees_view WHERE employee_id = 9876;
The final query constructed by Oracle is:
SELECT last_name
  FROM employees, departments
  WHERE employees.department_id = departments.department_id
    AND departments.department_id = 10
    AND employees.employee_id = 9876;
In all possible cases, Oracle merges a query against a view with the view's defining query and those of any underlying views. Oracle optimizes the merged query as if you issued the query without referencing the views. Therefore, Oracle can use indexes on any referenced base table columns, whether the columns are referenced in the view definition or in the user query against the view.
In some cases, Oracle cannot merge the view definition with the user-issued query. In such cases, Oracle may not use all indexes on referenced columns.
See Also:
Oracle Database Performance Tuning Guide for more information about query optimization

Because a view is defined by a query that references other objects (tables, materialized views, or other views), a view depends on the referenced objects. Oracle automatically handles the dependencies for views. For example, if you drop a base table of a view and then create it again, Oracle determines whether the new base table is acceptable to the existing definition of the view.
A join view is defined as a view that has more than one table or view in its FROM clause (a join) and that does not use any of these clauses: DISTINCT, aggregation, GROUP BY, START WITH, CONNECT BY, ROWNUM, and set operations (UNION ALL, INTERSECT, and so on).
An updatable join view is a join view that involves two or more base tables or views, where UPDATE, INSERT, and DELETE operations are permitted. The data dictionary views ALL_UPDATABLE_COLUMNS, DBA_UPDATABLE_COLUMNS, and USER_UPDATABLE_COLUMNS contain information that indicates which of the view columns are updatable. In order to be inherently updatable, a view cannot contain any of the following constructs:
A set operator
A DISTINCT operator
An aggregate or analytic function
A GROUP BY, ORDER BY, CONNECT BY, or START WITH clause
A collection expression in a SELECT list
A subquery in a SELECT list
Joins (with some exceptions)
Views that are not updatable can be modified using INSTEAD OF triggers.
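For example, a join view over the employees and departments tables is not inherently updatable for every column, but one possible sketch of an INSTEAD OF trigger (the view and trigger names are illustrative) shows how an UPDATE against the view can be redirected to the appropriate base table:

-- View and trigger names are illustrative.
CREATE OR REPLACE VIEW dept_emp_v AS
  SELECT e.employee_id, e.last_name, d.department_name
    FROM employees e JOIN departments d
      ON e.department_id = d.department_id;

CREATE OR REPLACE TRIGGER dept_emp_v_upd
  INSTEAD OF UPDATE ON dept_emp_v
  FOR EACH ROW
BEGIN
  -- Apply the change to the employees base table only.
  UPDATE employees
     SET last_name = :NEW.last_name
   WHERE employee_id = :OLD.employee_id;
END;
/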
See Also:
Oracle Database SQL Reference for more information about updatable views
In the Oracle object-relational database, an object view lets you retrieve, update, insert, and delete relational data as if it were stored as an object type. You can also define views with columns that are object datatypes, such as objects, REFs, and collections (nested tables and VARRAYs).
An inline view is not a schema object. It is a subquery with an alias (correlation name) that you can use like a view within a SQL statement.
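For example, the subquery in parentheses below is an inline view; dept_max is simply its alias within the statement:

SELECT department_id, max_sal
  FROM (SELECT department_id, MAX(salary) AS max_sal
          FROM employees
         GROUP BY department_id) dept_max
 WHERE max_sal > 10000;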
See Also:
Oracle Database SQL Reference for information about subqueries
Oracle Database Performance Tuning Guide for an example of using an inline view
Materialized views are schema objects that can be used to summarize, compute, replicate, and distribute data. They are suitable in various computing environments such as data warehousing, decision support, and distributed or mobile computing:
In data warehouses, materialized views are used to compute and store aggregated data such as sums and averages. Materialized views in these environments are typically referred to as summaries because they store summarized data. They can also be used to compute joins with or without aggregations. If compatibility is set to Oracle9i or higher, then materialized views can be used for queries that include filter selections.
The optimizer can use materialized views to improve query performance by automatically recognizing when a materialized view can and should be used to satisfy a request. The optimizer transparently rewrites the request to use the materialized view. Queries are then directed to the materialized view and not to the underlying detail tables or views.
In distributed environments, materialized views are used to replicate data at distributed sites and synchronize updates done at several sites with conflict resolution methods. The materialized views as replicas provide local access to data that otherwise has to be accessed from remote sites.
In mobile computing environments, materialized views are used to download a subset of data from central servers to mobile clients, with periodic refreshes from the central servers and propagation of updates by clients back to the central servers.
Materialized views are similar to indexes in several ways:
They consume storage space.
They must be refreshed when the data in their master tables changes.
They improve the performance of SQL execution when they are used for query rewrites.
Their existence is transparent to SQL applications and users.
Unlike indexes, materialized views can be accessed directly using a SELECT statement. Depending on the types of refresh that are required, they can also be accessed directly in an INSERT, UPDATE, or DELETE statement.
A materialized view can be partitioned. You can define a materialized view on a partitioned table and one or more indexes on the materialized view.
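As a sketch only, a summary materialized view over a sales table (such as the one in the SH sample schema) might be defined as follows; ENABLE QUERY REWRITE assumes the owner has the QUERY REWRITE privilege:

-- Assumes a sales table with prod_id and amount_sold columns.
CREATE MATERIALIZED VIEW sales_by_prod_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
  AS SELECT prod_id, SUM(amount_sold) AS total_sold
       FROM sales
      GROUP BY prod_id;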
See Also:
Oracle Database Data Warehousing Guide for information about materialized views in a data warehousing environment
Data warehousing applications recognize multidimensional data in the Oracle database by identifying Referential Integrity (RI) constraints in the relational schema. RI constraints represent primary and foreign key relationships among tables. By querying the Oracle data dictionary, applications can recognize RI constraints and therefore recognize the multidimensional data in the database. In some environments, database administrators, for schema complexity or security reasons, define views on fact and dimension tables. Oracle provides the ability to constrain views. By allowing constraint definitions between views, database administrators can propagate base table constraints to the views, thereby allowing applications to recognize multidimensional data even in a restricted environment.
Only logical constraints, that is, constraints that are declarative and not enforced by Oracle, can be defined on views. The purpose of these constraints is not to enforce any business rules but to identify multidimensional data. The following constraints can be defined on views:
Primary key constraint
Unique constraint
Referential Integrity constraint
Given that view constraints are declarative, DISABLE NOVALIDATE is the only valid state for a view constraint. However, the RELY or NORELY state is also allowed, because constraints on views may be used to enable more sophisticated query rewrites; a view constraint in the RELY state allows query rewrites to occur when the rewrite integrity level is set to trusted mode.
Note:
Although view constraint definitions are declarative in nature, operations on views are subject to the integrity constraints defined on the underlying base tables, and constraints on views can be enforced through constraints on base tables.

Oracle maintains the data in materialized views by refreshing them after changes are made to their master tables. The refresh method can be incremental (fast refresh) or complete. For materialized views that use the fast refresh method, a materialized view log or direct loader log keeps a record of changes to the master tables.
Materialized views can be refreshed either on demand or at regular time intervals. Alternatively, materialized views in the same database as their master tables can be refreshed whenever a transaction commits its changes to the master tables.
A materialized view log is a schema object that records changes to a master table's data so that a materialized view defined on the master table can be refreshed incrementally.
Each materialized view log is associated with a single master table. The materialized view log resides in the same database and schema as its master table.
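For example, a log that could support fast refresh of simple aggregate materialized views on a sales table might be created as follows; the column list depends on the materialized views that will actually be defined:

-- Assumes a sales master table with prod_id and amount_sold columns.
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID (prod_id, amount_sold)
  INCLUDING NEW VALUES;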
See Also:
Oracle Database Data Warehousing Guide for information about materialized views and materialized view logs in a warehousing environment
Oracle Database Advanced Replication for information about materialized views used for replication
A dimension defines hierarchical (parent/child) relationships between pairs of columns or column sets. Each value at the child level is associated with one and only one value at the parent level. A hierarchical relationship is a functional dependency from one level of a hierarchy to the next level in the hierarchy. A dimension is a container of logical relationships between columns, and it does not have any data storage assigned to it.
The CREATE DIMENSION statement specifies:
Multiple LEVEL clauses, each of which identifies a column or column set in the dimension
One or more HIERARCHY clauses that specify the parent/child relationships between adjacent levels
Optional ATTRIBUTE clauses, each of which identifies an additional column or column set associated with an individual level
The columns in a dimension can come either from the same table (denormalized) or from multiple tables (fully or partially normalized). To define a dimension over columns from multiple tables, connect the tables using the JOIN clause of the HIERARCHY clause.
For example, a normalized time dimension can include a date table, a month table, and a year table, with join conditions that connect each date row to a month row, and each month row to a year row. In a fully denormalized time dimension, the date, month, and year columns are all in the same table. Whether normalized or denormalized, the hierarchical relationships among the columns need to be specified in the CREATE DIMENSION statement.
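A sketch of a denormalized time dimension, assuming a times table like the one in the SH sample schema, could look like this:

-- Assumes a times table with time_id, calendar_month_desc,
-- calendar_month_name, and calendar_year columns.
CREATE DIMENSION times_dim
  LEVEL day   IS times.time_id
  LEVEL month IS times.calendar_month_desc
  LEVEL year  IS times.calendar_year
  HIERARCHY calendar_rollup
    ( day CHILD OF month CHILD OF year )
  ATTRIBUTE month DETERMINES (times.calendar_month_name);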
See Also:
Oracle Database Data Warehousing Guide for information about how dimensions are used in a warehousing environment
Oracle Database SQL Reference for information about creating dimensions
The sequence generator provides a sequential series of numbers. The sequence generator is especially useful in multiuser environments for generating unique sequential numbers without the overhead of disk I/O or transaction locking. For example, assume two users are simultaneously inserting new employee rows into the employees table. By using a sequence to generate unique employee numbers for the employee_id column, neither user has to wait for the other to enter the next available employee number. The sequence automatically generates the correct values for each user.
Therefore, the sequence generator reduces serialization where the statements of two transactions must generate sequential numbers at the same time. By avoiding the serialization that results when multiple users wait for each other to generate and use a sequence number, the sequence generator improves transaction throughput, and a user's wait is considerably shorter.
Sequence numbers are Oracle integers of up to 38 digits defined in the database. A sequence definition indicates general information, such as the following:
The name of the sequence
Whether the sequence ascends or descends
The interval between numbers
Whether Oracle should cache sets of generated sequence numbers in memory
Oracle stores the definitions of all sequences for a particular database as rows in a single data dictionary table in the SYSTEM tablespace. Therefore, all sequence definitions are always available, because the SYSTEM tablespace is always online.
Sequence numbers are used by SQL statements that reference the sequence. You can issue a statement to generate a new sequence number or use the current sequence number. After a statement in a user's session generates a sequence number, the particular sequence number is available only to that session. Each user that references a sequence has access to the current sequence number.
Sequence numbers are generated independently of tables. Therefore, the same sequence generator can be used for more than one table. Sequence number generation is useful to generate unique primary keys for your data automatically and to coordinate keys across multiple rows or tables. Individual sequence numbers can be skipped if they were generated and used in a transaction that was ultimately rolled back. Applications can make provisions to catch and reuse these sequence numbers, if desired.
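A minimal sketch follows; employees_demo is a hypothetical table with an employee_id key column:

CREATE SEQUENCE employees_demo_seq
  START WITH 1000
  INCREMENT BY 1
  CACHE 20;

-- NEXTVAL generates a new number; CURRVAL returns the value most recently
-- generated in this session.
INSERT INTO employees_demo (employee_id, last_name)
  VALUES (employees_demo_seq.NEXTVAL, 'Chen');

SELECT employees_demo_seq.CURRVAL FROM dual;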
Caution:
If your application can never lose sequence numbers, then you cannot use Oracle sequences, and you may choose to store sequence numbers in database tables. Be careful when implementing sequence generators using database tables. Even in a single instance configuration, for a high rate of sequence values generation, a performance overhead is associated with the cost of locking the row that stores the sequence value.
See Also:
Oracle Database Application Developer's Guide - Fundamentals for performance implications when using sequences
Oracle Database SQL Reference for information about the CREATE SEQUENCE statement
A synonym is an alias for any table, view, materialized view, sequence, procedure, function, package, type, Java class schema object, user-defined object type, or another synonym. Because a synonym is simply an alias, it requires no storage other than its definition in the data dictionary.
Synonyms are often used for security and convenience. For example, they can do the following:
Mask the name and owner of an object
Provide location transparency for remote objects of a distributed database
Simplify SQL statements for database users
Enable restricted access similar to specialized views when exercising fine-grained access control
You can create both public and private synonyms. A public synonym is owned by the special user group named PUBLIC and every user in a database can access it. A private synonym is in the schema of a specific user who has control over its availability to others.
Synonyms are very useful in both distributed and nondistributed database environments because they hide the identity of the underlying object, including its location in a distributed system. This is advantageous because if the underlying object must be renamed or moved, then only the synonym needs to be redefined. Applications based on the synonym continue to function without modification.
Synonyms can also simplify SQL statements for users in a distributed database system. The following example shows how and why public synonyms are often created by a database administrator to hide the identity of a base table and reduce the complexity of SQL statements. Assume the following:
A table called SALES_DATA is in the schema owned by the user JWARD.
The SELECT privilege for the SALES_DATA table is granted to PUBLIC.
At this point, you have to query the table SALES_DATA with a SQL statement similar to the following:
SELECT * FROM jward.sales_data;
Notice how you must include both the schema that contains the table along with the table name to perform the query.
Assume that the database administrator creates a public synonym with the following SQL statement:
CREATE PUBLIC SYNONYM sales FOR jward.sales_data;
After the public synonym is created, you can query the table SALES_DATA with a simple SQL statement:
SELECT * FROM sales;
Notice that the public synonym SALES hides the name of the table SALES_DATA and the name of the schema that contains the table.
Indexes are optional structures associated with tables and clusters. You can create indexes on one or more columns of a table to speed SQL statement execution on that table. Just as the index in this manual helps you locate information faster than if there were no index, an Oracle index provides a faster access path to table data. Indexes are the primary means of reducing disk I/O when properly used.
You can create many indexes for a table as long as the combination of columns differs for each index. You can create more than one index using the same columns if you specify distinctly different combinations of the columns. For example, the following statements specify valid combinations:
CREATE INDEX employees_idx1 ON employees (last_name, job_id);
CREATE INDEX employees_idx2 ON employees (job_id, last_name);
Oracle provides several indexing schemes, which provide complementary performance functionality:
B-tree indexes
B-tree cluster indexes
Hash cluster indexes
Reverse key indexes
Bitmap indexes
Bitmap join indexes
Oracle also provides support for function-based indexes and domain indexes specific to an application or cartridge.
The absence or presence of an index does not require a change in the wording of any SQL statement. An index is merely a fast access path to the data. It affects only the speed of execution. Given a data value that has been indexed, the index points directly to the location of the rows containing that value.
Indexes are logically and physically independent of the data in the associated table. You can create or drop an index at any time without affecting the base tables or other indexes. If you drop an index, all applications continue to work. However, access of previously indexed data can be slower. Indexes, as independent structures, require storage space.
Oracle automatically maintains and uses indexes after they are created. Oracle automatically reflects changes to data, such as adding new rows, updating rows, or deleting rows, in all relevant indexes with no additional action by users.
Retrieval performance of indexed data remains almost constant, even as new rows are inserted. However, the presence of many indexes on a table decreases the performance of updates, deletes, and inserts, because Oracle must also update the indexes associated with the table.
The optimizer can use an existing index to build another index. This results in a much faster index build.
Indexes can be unique or nonunique. Unique indexes guarantee that no two rows of a table have duplicate values in the key column (or columns). Nonunique indexes do not impose this restriction on the column values.
Oracle recommends that unique indexes be created explicitly, using CREATE UNIQUE INDEX. Creating unique indexes through a primary key or unique constraint is not guaranteed to create a new index, and the index they create is not guaranteed to be a unique index.
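For example, you might create the unique index yourself and then have the constraint use it; the email column is from the sample employees table, and the index and constraint names are otherwise illustrative:

CREATE UNIQUE INDEX employees_email_uix ON employees (email);

-- The constraint reuses the existing index rather than creating one.
ALTER TABLE employees
  ADD CONSTRAINT employees_email_uk UNIQUE (email)
  USING INDEX employees_email_uix;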
See Also:
Oracle Database Administrator's Guide for information about creating unique indexes explicitly

A composite index (also called a concatenated index) is an index that you create on multiple columns in a table. Columns in a composite index can appear in any order and need not be adjacent in the table.
Composite indexes can speed retrieval of data for SELECT statements in which the WHERE clause references all or the leading portion of the columns in the composite index. Therefore, the order of the columns used in the definition is important. Generally, the most commonly accessed or most selective columns go first.
Figure 5-6 illustrates the VENDOR_PARTS table that has a composite index on the VENDOR_ID and PART_NO columns.
No more than 32 columns can form a regular composite index. For a bitmap index, the maximum number of columns is 30. A key value cannot exceed roughly half (minus some overhead) the available data space in a data block.
See Also:
Oracle Database Performance Tuning Guide for more information about using composite indexes

Although the terms are often used interchangeably, indexes and keys are different. Indexes are structures actually stored in the database, which users create, alter, and drop using SQL statements. You create an index to provide a fast access path to table data. Keys are strictly a logical concept. Keys correspond to another feature of Oracle called integrity constraints, which enforce the business rules of a database.
Because Oracle uses indexes to enforce some integrity constraints, the terms key and index are often used interchangeably. However, do not confuse them with each other.
See Also:
Chapter 21, "Data Integrity"NULL
values in indexes are considered to be distinct except when all the non-NULL
values in two or more rows of an index are identical, in which case the rows are considered to be identical. Therefore, UNIQUE
indexes prevent rows containing NULL
values from being treated as identical. This does not apply if there are no non-NULL
values—in other words, if the rows are entirely NULL
.
Oracle does not index table rows in which all key columns are NULL, except in the case of bitmap indexes or when the cluster key column value is NULL.
See Also:
"Bitmap Indexes and Nulls"You can create indexes on functions and expressions that involve one or more columns in the table being indexed. A function-based index computes the value of the function or expression and stores it in the index. You can create a function-based index as either a B-tree or a bitmap index.
The function used for building the index can be an arithmetic expression or an expression that contains a PL/SQL function, package function, C callout, or SQL function. The expression cannot contain any aggregate functions, and it must be DETERMINISTIC. For building an index on a column containing an object type, the function can be a method of that object, such as a map method. However, you cannot build a function-based index on a LOB column, REF, or nested table column, nor can you build a function-based index if the object type contains a LOB, REF, or nested table.
See Also:
Oracle Database Performance Tuning Guide for more information about using function-based indexes
Function-based indexes provide an efficient mechanism for evaluating statements that contain functions in their WHERE clauses. The value of the expression is computed and stored in the index. When it processes INSERT and UPDATE statements, however, Oracle must still evaluate the function to process the statement.
For example, if you create the following index:
CREATE INDEX idx ON table_1 (a + b * (c - 1), a, b);
then Oracle can use it when processing queries such as this:
SELECT a FROM table_1 WHERE a + b * (c - 1) < 100;
Function-based indexes defined on UPPER(column_name) or LOWER(column_name) can facilitate case-insensitive searches. For example, the following index:
CREATE INDEX uppercase_idx ON employees (UPPER(first_name));
can facilitate processing queries such as this:
SELECT * FROM employees WHERE UPPER(first_name) = 'RICHARD';
A function-based index can also be used for a globalization support sort index that provides efficient linguistic collation in SQL statements.
See Also:
Oracle Database Globalization Support Guide for information about linguistic indexes

You must gather statistics about function-based indexes for the optimizer. Otherwise, the indexes cannot be used to process SQL statements.
The optimizer can use an index range scan on a function-based index for queries with expressions in the WHERE clause. For example, in this query:

SELECT * FROM t WHERE a + b < 10;

the optimizer can use an index range scan if an index is built on a+b. The range scan access path is especially beneficial when the predicate (WHERE clause) has low selectivity. In addition, the optimizer can estimate the selectivity of predicates involving expressions more accurately if the expressions are materialized in a function-based index.
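For example, a sketch using the DBMS_STATS package to gather statistics on the hypothetical table t, including the hidden column that underlies its function-based index, might look like this:

BEGIN
  -- 'T' is the hypothetical table from the preceding query.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'T',
    method_opt => 'FOR ALL HIDDEN COLUMNS SIZE AUTO',
    cascade    => TRUE);
END;
/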
The optimizer performs expression matching by parsing the expression in a SQL statement and then comparing the expression trees of the statement and the function-based index. This comparison is case-insensitive and ignores blank spaces.
See Also:
Oracle Database Performance Tuning Guide for more information about gathering statisticsFunction-based indexes depend on the function used in the expression that defines the index. If the function is a PL/SQL function or package function, the index is disabled by any changes to the function specification.
To create a function-based index, the user must be granted CREATE INDEX or CREATE ANY INDEX.
To use a function-based index:
The table must be analyzed after the index is created.
The query must be guaranteed not to need any NULL values from the indexed expression, because NULL values are not stored in indexes.
The following sections describe additional requirements.
Any user-written function used in a function-based index must have been declared with the DETERMINISTIC keyword to indicate that the function will always return the same output return value for any given set of input argument values, now and in the future.
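For example, a sketch of a hypothetical deterministic PL/SQL function and an index built on it:

-- normalize_name and the index name are illustrative.
CREATE OR REPLACE FUNCTION normalize_name (p_name IN VARCHAR2)
  RETURN VARCHAR2 DETERMINISTIC
IS
BEGIN
  RETURN UPPER(TRIM(p_name));
END;
/

CREATE INDEX employees_norm_name_idx
  ON employees (normalize_name(last_name));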
See Also:
Oracle Database Performance Tuning Guide

The index owner needs the EXECUTE privilege on the function used to define a function-based index. If the EXECUTE privilege is revoked, Oracle marks the index DISABLED. The index owner does not need the EXECUTE WITH GRANT OPTION privilege on this function to grant SELECT privileges on the underlying table.
A function-based index depends on any function that it is using. If the function or the specification of a package containing the function is redefined (or if the index owner's EXECUTE privilege is revoked), then the following conditions hold:
The index is marked as DISABLED.
Queries on a DISABLED index fail if the optimizer chooses to use the index.
DML operations on a DISABLED index fail unless the index is also marked UNUSABLE and the initialization parameter SKIP_UNUSABLE_INDEXES is set to true.
To re-enable the index after a change to the function, use the ALTER INDEX ... ENABLE statement.
When you create an index, Oracle automatically allocates an index segment to hold the index's data in a tablespace. You can control allocation of space for an index's segment and use of this reserved space in the following ways:
Set the storage parameters for the index segment to control the allocation of the index segment's extents.
Set the PCTFREE parameter for the index segment to control the free space in the data blocks that constitute the index segment's extents.
The tablespace of an index's segment is either the owner's default tablespace or a tablespace specifically named in the CREATE INDEX statement. You do not have to place an index in the same tablespace as its associated table. Furthermore, you can improve performance of queries that use an index by storing an index and its table in different tablespaces located on different disk drives, because Oracle can retrieve both index and table data in parallel.
See Also:
"PCTFREE, PCTUSED, and Row Chaining"Space available for index data is the Oracle block size minus block overhead, entry overhead, rowid, and one length byte for each value indexed.
When you create an index, Oracle fetches and sorts the columns to be indexed and stores the rowid along with the index value for each row. Then Oracle loads the index from the bottom up. For example, consider the statement:
CREATE INDEX employees_last_name ON employees(last_name);
Oracle sorts the employees table on the last_name column. It then loads the index with the last_name and corresponding rowid values in this sorted order. When it uses the index, Oracle does a quick search through the sorted last_name values and then uses the associated rowid values to locate the rows having the sought last_name value.
Oracle uses B-trees to store indexes to speed up data access. With no indexes, you have to do a sequential scan on the data to find a value. For n rows, the average number of rows searched is n/2. This does not scale very well as data volumes increase.
Consider an ordered list of the values divided into block-wide ranges (leaf blocks). The end points of the ranges, along with pointers to the blocks, can be stored in a search tree, so a value can be found in log(n) time for n entries. This is the basic principle behind Oracle indexes.
Figure 5-7 illustrates the structure of a B-tree index.
Figure 5-7 Internal Structure of a B-tree Index
The upper blocks (branch blocks) of a B-tree index contain index data that points to lower-level index blocks. The lowest level index blocks (leaf blocks) contain every indexed data value and a corresponding rowid used to locate the actual row. The leaf blocks are doubly linked. Indexes in columns containing character data are based on the binary values of the characters in the database character set.
For a unique index, one rowid exists for each data value. For a nonunique index, the rowid is included in the key in sorted order, so nonunique indexes are sorted by the index key and rowid. Key values containing all nulls are not indexed, except for cluster indexes. Two rows can both contain all nulls without violating a unique index.
A B-tree index has two kinds of blocks:
Branch blocks for searching
Leaf blocks that store the values
The B-tree structure has the following advantages:
All leaf blocks of the tree are at the same depth, so retrieval of any record from anywhere in the index takes approximately the same amount of time.
B-tree indexes automatically stay balanced.
All blocks of the B-tree are three-quarters full on the average.
B-trees provide excellent retrieval performance for a wide range of queries, including exact match and range searches.
Inserts, updates, and deletes are efficient, maintaining key order for fast retrieval.
B-tree performance is good for both small and large tables and does not degrade as the size of a table grows.
See Also:
Computer science texts for more information about B-tree indexes
Index unique scan is one of the most efficient ways of accessing data. This access method is used for returning the data from B-tree indexes. The optimizer chooses a unique scan when all columns of a unique (B-tree) index are specified with equality conditions.
Index range scan is a common operation for accessing selective data. It can be bounded (bounded on both sides) or unbounded (on one or both sides). Data is returned in the ascending order of index columns. Multiple rows with identical values are sorted (in ascending order) by the ROWIDs.
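For example, using the employees_last_name index created earlier in this section, and assuming a unique index on employees(employee_id) such as a primary key index, the optimizer might choose these access paths:
-- With a unique index on employee_id (assumed here), an equality predicate
-- on all of its columns is a candidate for an index unique scan.
SELECT last_name FROM employees WHERE employee_id = 101;

-- A bounded predicate on the indexed last_name column is a candidate for an
-- index range scan using the employees_last_name index.
SELECT employee_id FROM employees WHERE last_name BETWEEN 'A' AND 'C';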
Key compression lets you compress portions of the primary key column values in an index or index-organized table, which reduces the storage overhead of repeated values.
Generally, keys in an index have two pieces, a grouping piece and a unique piece. If the key is not defined to have a unique piece, Oracle provides one in the form of a rowid appended to the grouping piece. Key compression is a method of breaking off the grouping piece and storing it so it can be shared by multiple unique pieces.
Key compression breaks the index key into a prefix entry (the grouping piece) and a suffix entry (the unique piece). Compression is achieved by sharing the prefix entries among the suffix entries in an index block. Only keys in the leaf blocks of a B-tree index are compressed. In the branch blocks the key suffix can be truncated, but the key is not compressed.
Key compression is done within an index block but not across multiple index blocks. Suffix entries form the compressed version of index rows. Each suffix entry references a prefix entry, which is stored in the same index block as the suffix entry.
By default, the prefix consists of all key columns excluding the last one. For example, in a key made up of three columns (column1, column2, column3) the default prefix is (column1, column2). For a list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4) the repeated occurrences of (1,2), (1,3) in the prefix are compressed.
Alternatively, you can specify the prefix length, which is the number of columns in the prefix. For example, if you specify prefix length 1, then the prefix is column1 and the suffix is (column2, column3). For the list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4) the repeated occurrences of 1 in the prefix are compressed.
The maximum prefix length for a nonunique index is the number of key columns, and the maximum prefix length for a unique index is the number of key columns minus one.
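For example, using the hypothetical table and columns from the preceding description, a prefix length of 2 compresses repeated (column1, column2) combinations:
-- Prefix (column1, column2) is shared within each leaf block;
-- column3 values become the suffix entries.
CREATE INDEX t_comp_idx ON t (column1, column2, column3) COMPRESS 2;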
Prefix entries are written to the index block only if the index block does not already contain a prefix entry whose value is equal to the present prefix entry. Prefix entries are available for sharing immediately after being written to the index block and remain available until the last deleted referencing suffix entry is cleaned out of the index block.
Key compression can lead to a huge saving in space, letting you store more keys in each index block, which can lead to less I/O and better performance.
Although key compression reduces the storage requirements of an index, it can increase the CPU time required to reconstruct the key column values during an index scan. It also incurs some additional storage overhead, because every prefix entry has an overhead of 4 bytes associated with it.
Key compression is useful in many different scenarios, such as:
In a nonunique regular index, Oracle stores duplicate keys with the rowid appended to the key to break the duplicate rows. If key compression is used, Oracle stores the duplicate key as a prefix entry on the index block without the rowid. The rest of the rows are suffix entries that consist of only the rowid.
This same behavior can be seen in a unique index that has a key of the form (item, time stamp), for example (stock_ticker, transaction_time). Thousands of rows can have the same stock_ticker value, with transaction_time preserving uniqueness. On a particular index block, a stock_ticker value is stored only once as a prefix entry. Other entries on the index block are transaction_time values stored as suffix entries that reference the common stock_ticker prefix entry (see the example after these scenarios).
In an index-organized table that contains a VARRAY or NESTED TABLE datatype, the object identifier is repeated for each element of the collection datatype. Key compression lets you compress the repeating object identifier values.
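For instance, the (stock_ticker, transaction_time) scenario described above could be indexed as follows. The trades table name is hypothetical, and the prefix length must be 1 because the index is unique:
-- stock_ticker values are stored once per leaf block as prefix entries;
-- transaction_time values are stored as suffix entries.
CREATE UNIQUE INDEX trades_ticker_time_uk
  ON trades (stock_ticker, transaction_time) COMPRESS 1;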
In some cases, however, key compression cannot be used. For example, in a unique index with a single attribute key, key compression is not possible, because even though there is a unique piece, there are no grouping pieces to share.
See Also:
"Overview of Index-Organized Tables"Creating a reverse key index, compared to a standard index, reverses the bytes of each column indexed (except the rowid) while keeping the column order. Such an arrangement can help avoid performance degradation with Real Application Clusters where modifications to the index are concentrated on a small set of leaf blocks. By reversing the keys of the index, the insertions become distributed across all leaf keys in the index.
Using the reverse key arrangement eliminates the ability to run an index range scanning query on the index. Because lexically adjacent keys are not stored next to each other in a reverse-key index, only fetch-by-key or full-index (table) scans can be performed.
Sometimes, using a reverse-key index can make an OLTP Real Application Clusters application faster. For example, consider the index of mail messages in an e-mail application: some users keep old messages, and the index must maintain pointers to these as well as to the most recent messages.
The REVERSE keyword provides a simple mechanism for creating a reverse key index. You can specify the keyword REVERSE along with the optional index specifications in a CREATE INDEX statement:
CREATE INDEX i ON t (a,b,c) REVERSE;
You can specify the keyword NOREVERSE to REBUILD a reverse-key index into one that is not reverse keyed:
ALTER INDEX i REBUILD NOREVERSE;
Rebuilding a reverse-key index without the NOREVERSE keyword produces a rebuilt, reverse-key index.
The purpose of an index is to provide pointers to the rows in a table that contain a given key value. In a regular index, this is achieved by storing a list of rowids for each key corresponding to the rows with that key value. Oracle stores each key value repeatedly with each stored rowid. In a bitmap index, a bitmap for each key value is used instead of a list of rowids.
Each bit in the bitmap corresponds to a possible rowid. If the bit is set, then it means that the row with the corresponding rowid contains the key value. A mapping function converts the bit position to an actual rowid, so the bitmap index provides the same functionality as a regular index even though it uses a different representation internally. If the number of different key values is small, then bitmap indexes are very space efficient.
Bitmap indexing efficiently merges indexes that correspond to several conditions in a WHERE clause. Rows that satisfy some, but not all, conditions are filtered out before the table itself is accessed. This improves response time, often dramatically.
Bitmap indexing benefits data warehousing applications, which have large amounts of data and ad hoc queries but a low level of concurrent transactions. For such applications, bitmap indexing provides:
Reduced response time for large classes of ad hoc queries
A substantial reduction of space use compared to other indexing techniques
Dramatic performance gains even on very low end hardware
Very efficient parallel DML and loads
Fully indexing a large table with a traditional B-tree index can be prohibitively expensive in terms of space, because the index can be several times larger than the data in the table. Bitmap indexes are typically only a fraction of the size of the indexed data in the table.
Bitmap indexes are not suitable for OLTP applications with large numbers of concurrent transactions modifying the data. These indexes are primarily intended for decision support in data warehousing applications where users typically query the data rather than update it.
Bitmap indexes are also not suitable for columns that are primarily queried with less than or greater than comparisons. For example, a salary column that usually appears in WHERE clauses in a comparison to a certain value is better served with a B-tree index. Bitmapped indexes are only useful with equality queries, especially in combination with AND, OR, and NOT operators.
Bitmap indexes are integrated with the Oracle optimizer and execution engine. They can be used seamlessly in combination with other Oracle execution methods. For example, the optimizer can decide to perform a hash join between two tables using a bitmap index on one table and a regular B-tree index on the other. The optimizer considers bitmap indexes and other available access methods, such as regular B-tree indexes and full table scan, and chooses the most efficient method, taking parallelism into account where appropriate.
Parallel query and parallel DML work with bitmap indexes as with traditional indexes. Bitmap indexes on partitioned tables must be local indexes. Parallel create index and concatenated indexes are also supported.
The advantages of using bitmap indexes are greatest for low cardinality columns: that is, columns in which the number of distinct values is small compared to the number of rows in the table. If the number of distinct values of a column is less than 1% of the number of rows in the table, or if the values in a column are repeated more than 100 times, then the column is a candidate for a bitmap index. Even columns with a lower number of repetitions and thus higher cardinality can be candidates if they tend to be involved in complex conditions in the WHERE clauses of queries.
For example, on a table with 1 million rows, a column with 10,000 distinct values is a candidate for a bitmap index. A bitmap index on this column can out-perform a B-tree index, particularly when this column is often queried in conjunction with other columns.
B-tree indexes are most effective for high-cardinality data: that is, data with many possible values, such as CUSTOMER_NAME or PHONE_NUMBER. In some situations, a B-tree index can be larger than the indexed data. Used appropriately, bitmap indexes can be significantly smaller than a corresponding B-tree index.
In ad hoc queries and similar situations, bitmap indexes can dramatically improve query performance. AND and OR conditions in the WHERE clause of a query can be quickly resolved by performing the corresponding Boolean operations directly on the bitmaps before converting the resulting bitmap to rowids. If the resulting number of rows is small, the query can be answered very quickly without resorting to a full table scan.
Table 5-1 shows a portion of a company's customer data.
Table 5-1 Bitmap Index Example
CUSTOMER# | MARITAL_STATUS | REGION | GENDER | INCOME_LEVEL
---|---|---|---|---
101 | single | east | male | bracket_1
102 | married | central | female | bracket_4
103 | married | west | female | bracket_2
104 | divorced | west | male | bracket_4
105 | single | central | female | bracket_2
106 | married | central | female | bracket_3
MARITAL_STATUS, REGION, GENDER, and INCOME_LEVEL are all low-cardinality columns. There are only three possible values for marital status and region, two possible values for gender, and four for income level. Therefore, it is appropriate to create bitmap indexes on these columns. A bitmap index should not be created on CUSTOMER# because this is a high-cardinality column. Instead, use a unique B-tree index on this column to provide the most efficient representation and retrieval.
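For example, bitmap indexes on the low-cardinality columns of Table 5-1 could be created as follows (the customer table and index names are illustrative):
-- One bitmap index per low-cardinality column of the customer table.
CREATE BITMAP INDEX cust_marital_bix ON customer (marital_status);
CREATE BITMAP INDEX cust_region_bix  ON customer (region);
CREATE BITMAP INDEX cust_gender_bix  ON customer (gender);
CREATE BITMAP INDEX cust_income_bix  ON customer (income_level);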
Table 5-2 illustrates the bitmap index for the REGION column in this example. It consists of three separate bitmaps, one for each region.
Table 5-2 Sample Bitmap
REGION='east' | REGION='central' | REGION='west'
---|---|---
1 | 0 | 0
0 | 1 | 0
0 | 0 | 1
0 | 0 | 1
0 | 1 | 0
0 | 1 | 0
Each entry or bit in the bitmap corresponds to a single row of the CUSTOMER table. The value of each bit depends upon the values of the corresponding row in the table. For instance, the bitmap REGION='east' contains a one as its first bit. This is because the region is east in the first row of the CUSTOMER table. The bitmap REGION='east' has a zero for its other bits because none of the other rows of the table contain east as their value for REGION.
An analyst investigating demographic trends of the company's customers can ask, "How many of our married customers live in the central or west regions?" This corresponds to the following SQL query:
SELECT COUNT(*) FROM CUSTOMER WHERE MARITAL_STATUS = 'married' AND REGION IN ('central','west');
Bitmap indexes can process this query with great efficiency by counting the number of ones in the resulting bitmap, as illustrated in Figure 5-8. To identify the specific customers who satisfy the criteria, the resulting bitmap can be used to access the table.
Figure 5-8 Running a Query Using Bitmap Indexes
Bitmap indexes can include rows that have NULL values, unlike most other types of indexes. Indexing of nulls can be useful for some types of SQL statements, such as queries with the aggregate function COUNT.
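For example, because null rows are represented in the bitmap, a bitmap index on the region column (if created as in the earlier sketch) can help answer a query such as the following; whether the table is visited at all is up to the optimizer:
-- Rows with a null REGION are included in the bitmap index,
-- so this count may be resolved from the index alone.
SELECT COUNT(*) FROM customer WHERE region IS NULL;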
Like other indexes, you can create bitmap indexes on partitioned tables. The only restriction is that bitmap indexes must be local to the partitioned table—they cannot be global indexes. Global bitmap indexes are supported only on nonpartitioned tables.
See Also:
Chapter 18, "Partitioned Tables and Indexes" for information about partitioned tables and descriptions of local and global indexes
Oracle Database Performance Tuning Guide for more information about using bitmap indexes, including an example of indexing null values
In addition to a bitmap index on a single table, you can create a bitmap join index, which is a bitmap index for the join of two or more tables. A bitmap join index is a space efficient way of reducing the volume of data that must be joined by performing restrictions in advance. For each value in a column of a table, a bitmap join index stores the rowids of corresponding rows in one or more other tables. In a data warehousing environment, the join condition is an equi-inner join between the primary key column or columns of the dimension tables and the foreign key column or columns in the fact table.
Bitmap join indexes are much more efficient in storage than materialized join views, an alternative for materializing joins in advance. This is because the materialized join views do not compress the rowids of the fact tables.
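A bitmap join index is created with the join specified in the index definition. The following sketch assumes a sales fact table and a customer dimension table in which customer_id is the primary key; both table names are illustrative:
-- For each customer.region value, stores bitmaps over the rows of SALES,
-- so queries that filter sales by region can avoid the join at run time.
CREATE BITMAP INDEX sales_cust_region_bjix
  ON sales (customer.region)
  FROM sales, customer
  WHERE sales.customer_id = customer.customer_id;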
See Also:
Oracle Database Data Warehousing Guide for more information on bitmap join indexes
An index-organized table has a storage organization that is a variant of a primary B-tree. Unlike an ordinary (heap-organized) table whose data is stored as an unordered collection (heap), data for an index-organized table is stored in a B-tree index structure in a primary key sorted manner. Besides storing the primary key column values of an index-organized table row, each index entry in the B-tree stores the nonkey column values as well.
As shown in Figure 5-9, the index-organized table is somewhat similar to a configuration consisting of an ordinary table and an index on one or more of the table columns, but instead of maintaining two separate storage structures, one for the table and one for the B-tree index, the database system maintains only a single B-tree index. Also, rather than having a row's rowid stored in the index entry, the nonkey column values are stored. Thus, each B-tree index entry contains <primary_key_value, non_primary_key_column_values>.
Figure 5-9 Structure of a Regular Table Compared with an Index-Organized Table
Applications manipulate the index-organized table just like an ordinary table, using SQL statements. However, the database system performs all operations by manipulating the corresponding B-tree index.
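At its simplest, an index-organized table is created by adding the ORGANIZATION INDEX clause to a CREATE TABLE statement that declares a primary key. The table here is illustrative:
-- All column values are stored in the primary key B-tree;
-- there is no separate heap segment for the table data.
CREATE TABLE countries_iot (
  country_id    CHAR(2)      PRIMARY KEY,
  country_name  VARCHAR2(40)
) ORGANIZATION INDEX;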
Table 5-3 summarizes the differences between index-organized tables and ordinary tables.
Table 5-3 Comparison of Index-Organized Tables with Ordinary Tables
Ordinary Table | Index-Organized Table
---|---
Rowid uniquely identifies a row. Primary key can be optionally specified | Primary key uniquely identifies a row. Primary key must be specified
Physical rowid in ROWID pseudocolumn | Logical rowid in UROWID pseudocolumn
Access is based on rowid | Access is based on logical rowid
Sequential scan returns all rows | Full-index scan returns all rows
Can be stored in a cluster with other tables | Cannot be stored in a cluster
Can contain a column of the LONG datatype | Can contain LOB columns but not LONG columns
Index-organized tables provide faster access to table rows by the primary key or any key that is a valid prefix of the primary key. Presence of nonkey columns of a row in the B-tree leaf block itself avoids an additional block access. Also, because rows are stored in primary key order, range access by the primary key (or a valid prefix) involves minimum block accesses.
In order to allow even faster access to frequently accessed columns, you can use a row overflow segment (as described later) to push out infrequently accessed nonkey columns from the B-tree leaf block to an optional (heap-organized) overflow segment. This allows limiting the size and content of the portion of a row that is actually stored in the B-tree leaf block, which may lead to a higher number of rows in each leaf block and a smaller B-tree.
Unlike a configuration of heap-organized table with a primary key index where primary key columns are stored both in the table and in the index, there is no such duplication here because primary key column values are stored only in the B-tree index.
Because rows are stored in primary key order, a significant amount of additional storage space savings can be obtained through the use of key compression.
Use of primary-key based logical rowids, as opposed to physical rowids, in secondary indexes on index-organized tables allows high availability. This is because, due to the logical nature of the rowids, secondary indexes do not become unusable even after a table reorganization operation that causes movement of the base table rows. At the same time, through the use of the physical guess in the logical rowid, secondary index based access to an index-organized table can perform comparably to secondary index based access to an ordinary table.
See Also:
Oracle Database Administrator's Guide for information about creating and maintaining index-organized tables
B-tree index entries are usually quite small, because they only consist of the key value and a ROWID. In index-organized tables, however, the B-tree index entries can be large, because they consist of the entire row. This may destroy the dense clustering property of the B-tree index.
Oracle provides the OVERFLOW clause to handle this problem. You can specify an overflow tablespace so that, if necessary, a row can be divided into the following two parts that are then stored in the index and in the overflow storage area segment, respectively:
The index entry, containing column values for all the primary key columns, a physical rowid that points to the overflow part of the row, and optionally a few of the nonkey columns
The overflow part, containing column values for the remaining nonkey columns
With OVERFLOW, you can use two clauses, PCTTHRESHOLD and INCLUDING, to control how Oracle determines whether a row should be stored in two parts and, if so, at which nonkey column to break the row. Using PCTTHRESHOLD, you can specify a threshold value as a percentage of the block size. If all the nonkey column values can be accommodated within the specified size limit, the row is not broken into two parts. Otherwise, starting with the first nonkey column that cannot be accommodated, the rest of the nonkey columns are all stored in the row overflow segment for the table.
The INCLUDING clause lets you specify a column name so that any nonkey column appearing in the CREATE TABLE statement after that specified column is stored in the row overflow segment. Note that additional nonkey columns may sometimes need to be stored in the overflow segment due to PCTTHRESHOLD-based limits.
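For example, the following sketch combines PCTTHRESHOLD, INCLUDING, and an overflow segment; the table, columns, and overflow tablespace names are illustrative:
CREATE TABLE doc_index_iot (
  doc_id       NUMBER,
  token        VARCHAR2(30),
  token_count  NUMBER,
  comments     VARCHAR2(2000),
  CONSTRAINT doc_index_iot_pk PRIMARY KEY (doc_id, token)
) ORGANIZATION INDEX
  PCTTHRESHOLD 20                  -- nonkey data beyond 20% of the block goes to overflow
  INCLUDING token_count            -- columns after token_count (here, comments) go to overflow
  OVERFLOW TABLESPACE overflow_ts;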
Secondary index support on index-organized tables provides efficient access to index-organized tables using columns that are neither the primary key nor a prefix of the primary key.
Oracle constructs secondary indexes on index-organized tables using logical row identifiers (logical rowids) that are based on the table's primary key. A logical rowid includes a physical guess, which identifies the block location of the row. Oracle can use these physical guesses to probe directly into the leaf block of the index-organized table, bypassing the primary key search. Because rows in index-organized tables do not have permanent physical addresses, the physical guesses can become stale when rows are moved to new blocks.
For an ordinary table, access by a secondary index involves a scan of the secondary index and an additional I/O to fetch the data block containing the row. For index-organized tables, access by a secondary index varies, depending on the use and accuracy of physical guesses:
Without physical guesses, access involves two index scans: a secondary index scan followed by a scan of the primary key index.
With accurate physical guesses, access involves a secondary index scan and an additional I/O to fetch the data block containing the row.
With inaccurate physical guesses, access involves a secondary index scan and an I/O to fetch the wrong data block (as indicated by the physical guess), followed by a scan of the primary key index.
See Also:
"Logical Rowids"Oracle supports bitmap indexes on partitioned and nonpartitioned index-organized tables. A mapping table is required for creating bitmap indexes on an index-organized table.
The mapping table is a heap-organized table that stores logical rowids of the index-organized table. Specifically, each mapping table row stores one logical rowid for the corresponding index-organized table row. Thus, the mapping table provides one-to-one mapping between logical rowids of the index-organized table rows and physical rowids of the mapping table rows.
A bitmap index on an index-organized table is similar to that on a heap-organized table except that the rowids used in the bitmap index on an index-organized table are those of the mapping table as opposed to the base table. There is one mapping table for each index-organized table and it is used by all the bitmap indexes created on that index-organized table.
In both heap-organized and index-organized base tables, a bitmap index is accessed using a search key. If the key is found, the bitmap entry is converted to a physical rowid. In the case of heap-organized tables, this physical rowid is then used to access the base table. However, in the case of index-organized tables, the physical rowid is then used to access the mapping table. The access to the mapping table yields a logical rowid. This logical rowid is used to access the index-organized table.
Though a bitmap index on an index-organized table does not store logical rowids, it is still logical in nature.
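For example, a mapping table can be requested when the index-organized table is created, after which bitmap indexes can be built on it; the table and index names are illustrative:
CREATE TABLE orders_iot (
  order_id  NUMBER PRIMARY KEY,
  status    VARCHAR2(10),
  amount    NUMBER
) ORGANIZATION INDEX
  MAPPING TABLE;                   -- heap-organized mapping table required for bitmap indexes

-- The bitmap entries reference physical rowids of the mapping table rows.
CREATE BITMAP INDEX orders_status_bix ON orders_iot (status);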
Note:
Movement of rows in an index-organized table does not leave the bitmap indexes built on that index-organized table unusable. Movement of rows in the index-organized table does invalidate the physical guess in some of the mapping table's logical rowid entries. However, the index-organized table can still be accessed using the primary key.
You can partition an index-organized table by RANGE, HASH, or LIST on column values. The partitioning columns must form a subset of the primary key columns. Just like ordinary tables, local partitioned (prefixed and non-prefixed) indexes as well as global partitioned (prefixed) indexes are supported for partitioned index-organized tables.
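For instance, an index-organized table might be range partitioned on a leading primary key column; the table and partition bounds below are illustrative:
CREATE TABLE sales_iot (
  sale_date  DATE,
  sale_id    NUMBER,
  amount     NUMBER,
  CONSTRAINT sales_iot_pk PRIMARY KEY (sale_date, sale_id)
) ORGANIZATION INDEX
  PARTITION BY RANGE (sale_date) (   -- partitioning column is part of the primary key
    PARTITION p_2005 VALUES LESS THAN (TO_DATE('01-01-2006','DD-MM-YYYY')),
    PARTITION p_2006 VALUES LESS THAN (TO_DATE('01-01-2007','DD-MM-YYYY'))
  );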
UROWID datatype columns can hold logical primary key-based rowids identifying rows of index-organized tables. Oracle supports indexes on UROWID datatypes of a heap- or index-organized table. The index supports equality predicates on UROWID columns. For predicates other than equality or for ordering on UROWID datatype columns, the index is not used.
The superior query performance for primary key based access, high availability aspects, and reduced storage requirements make index-organized tables ideal for the following kinds of applications:
Online transaction processing (OLTP)
Internet (for example, search engines and portals)
E-commerce (for example, electronic stores and catalogs)
Data warehousing
Analytic functions
Oracle provides extensible indexing to accommodate indexes on customized complex datatypes such as documents, spatial data, images, and video clips and to make use of specialized indexing techniques. With extensible indexing, you can encapsulate application-specific index management routines as an indextype schema object and define a domain index (an application-specific index) on table columns or attributes of an object type. Extensible indexing also provides efficient processing of application-specific operators.
The application software, called the cartridge, controls the structure and content of a domain index. The Oracle database server interacts with the application to build, maintain, and search the domain index. The index structure itself can be stored in the Oracle database as an index-organized table or externally as a file.
See Also:
Oracle Database Data Cartridge Developer's Guide for information about using data cartridges within Oracle's extensibility architecture
Clusters are an optional method of storing table data. A cluster is a group of tables that share the same data blocks because they share common columns and are often used together. For example, the employees and departments tables share the department_id column. When you cluster the employees and departments tables, Oracle physically stores all rows for each department from both the employees and departments tables in the same data blocks.
Figure 5-10 shows what happens when you cluster the employees and departments tables:
Because clusters store related rows of different tables together in the same data blocks, properly used clusters offer these benefits:
Disk I/O is reduced for joins of clustered tables.
Access time improves for joins of clustered tables.
In a cluster, a cluster key value is the value of the cluster key columns for a particular row. Each cluster key value is stored only once in the cluster and the cluster index, no matter how many rows of different tables contain the value. Therefore, less storage is required to store related table and index data in a cluster than is necessary in nonclustered table format. For example, in Figure 5-10, notice how each cluster key (each department_id) is stored just once for many rows that contain the same value in both the employees and departments tables.
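For example, clustering along the lines described above could be set up as follows. This is a sketch; the cluster, index, and table names are illustrative, and the cluster index must exist before rows are inserted:
CREATE CLUSTER emp_dept_cluster (department_id NUMBER(4))
  SIZE 512;                                       -- expected bytes per cluster key value

CREATE INDEX emp_dept_cluster_idx ON CLUSTER emp_dept_cluster;

CREATE TABLE departments_clu (
  department_id   NUMBER(4) PRIMARY KEY,
  department_name VARCHAR2(30)
) CLUSTER emp_dept_cluster (department_id);

CREATE TABLE employees_clu (
  employee_id   NUMBER(6) PRIMARY KEY,
  last_name     VARCHAR2(25),
  department_id NUMBER(4)
) CLUSTER emp_dept_cluster (department_id);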
See Also:
Oracle Database Administrator's Guide for information about creating and managing clusters
Hash clusters group table data in a manner similar to regular index clusters (clusters keyed with an index rather than a hash function). However, a row is stored in a hash cluster based on the result of applying a hash function to the row's cluster key value. All rows with the same key value are stored together on disk.
Hash clusters are a better choice than using an indexed table or index cluster when a table is queried frequently with equality queries (for example, return all rows for department 10). For such queries, the specified cluster key value is hashed. The resulting hash key value points directly to the area on disk that stores the rows.
Hashing is an optional way of storing table data to improve the performance of data retrieval. To use hashing, create a hash cluster and load tables into the cluster. Oracle physically stores the rows of a table in a hash cluster and retrieves them according to the results of a hash function.
Sorted hash clusters allow faster retrieval of data for applications where data is consumed in the order in which it was inserted.
Oracle uses a hash function to generate a distribution of numeric values, called hash values, which are based on specific cluster key values. The key of a hash cluster, like the key of an index cluster, can be a single column or composite key (multiple column key). To find or store a row in a hash cluster, Oracle applies the hash function to the row's cluster key value. The resulting hash value corresponds to a data block in the cluster, which Oracle then reads or writes on behalf of the issued statement.
A hash cluster is an alternative to a nonclustered table with an index or an index cluster. With an indexed table or index cluster, Oracle locates the rows in a table using key values that Oracle stores in a separate index. To find or store a row in an indexed table or cluster, at least two I/Os must be performed:
One or more I/Os to find or store the key value in the index
Another I/O to read or write the row in the table or cluster
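A hash cluster, by contrast, declares the hash key and the number of hash values up front, so rows can be located without an index lookup. The cluster, table, and HASHKEYS values below are illustrative:
CREATE CLUSTER dept_hash_cluster (department_id NUMBER(4))
  SIZE 512
  HASHKEYS 100;                    -- rows are placed by hashing department_id; no cluster index

CREATE TABLE departments_hash (
  department_id   NUMBER(4) PRIMARY KEY,
  department_name VARCHAR2(30)
) CLUSTER dept_hash_cluster (department_id);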
See Also:
Oracle Database Administrator's Guide for information about creating and managing hash clusters