This chapter provides an overview of Oracle Clusterware and Oracle Real Application Clusters (RAC) installation and configuration procedures and includes the following topics:
Oracle Clusterware and Oracle Real Application Clusters Documentation Overview
General System Installation Requirements for Oracle Real Application Clusters
Cluster Setup and Pre-Installation Configuration Tasks for Real Application Clusters
Pre-Installation, Installation, and Post-Installation Overview
Storage Considerations for Installing Oracle Database 10g Real Application Clusters
Additional Considerations for Using Oracle Database 10g Features in RAC
Oracle Database 10g and Real Application Clusters Components
Oracle Database 10g Real Application Clusters Version Compatibility
This section describes the Oracle Clusterware and RAC documentation set.
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide for Microsoft Windows
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide for Microsoft Windows (this document) contains the pre-installation, installation, and post-installation information for Microsoft Windows. Additional information for this release may be available in the Oracle Database 10g README or Release Notes. The platform-specific Oracle Database 10g media contains a copy of this book in both HTML and PDF formats.
The Server Documentation media contains Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide describes how to administer Oracle Clusterware components such as the voting disks and the Oracle Cluster Registry (OCR) devices. It also explains how to administer storage and how to use RAC scalability features to add and delete instances and nodes. In addition, it discusses how to use Recovery Manager (RMAN) and how to perform backup and recovery in RAC.
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide describes RAC deployment topics such as services, high availability, and workload management. The book describes how Automatic Workload Repository (AWR) tracks and reports service levels and how you can use service level thresholds and alerts to balance complex workloads in your RAC environment. The book also describes how to make your applications highly available using Oracle Clusterware.
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide also provides information about how to monitor and tune performance in RAC environments by using Oracle Enterprise Manager and by using information in AWR and Oracle performance views. This book also highlights some application-specific deployment techniques for online transaction processing and data warehousing environments.
Each node that is going to be part of your Oracle Clusterware and RAC installation must meet the hardware and software requirements described in this section. Part II of this book provides step-by-step tasks that you can follow to prepare your hardware and software to meet these requirements. You can verify that you have met these requirements with Cluster Verification Utility.
Before using this manual, however, you should read the Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide to inform yourself about concepts such as services, setting up storage, and other information relevant to configuring your cluster.
Cluster Verification Utility (CVU) is provided with Oracle Database 10g Release 2 (10.2) with Real Application Clusters. The purpose of CVU is to enable you or your hardware vendors to verify during setup and configuration that all components required for a successful installation of a RAC database are installed and configured correctly, and to provide you with ongoing assistance any time you need to make changes to your RAC cluster. This guide provides CVU commands that you can use to verify the completion of its tasks.
There are two types of CVU commands:
Stage Commands are CVU commands used to test system setup and readiness for successful software installation, database creation, or configuration change steps. These commands are also used to validate successful completion of specific cluster configuration steps.
Component Commands are CVU commands used to check individual cluster components, and determine their state.
This guide provides stage and component CVU commands where appropriate to assist you with cluster verification.
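For example, assuming a two-node cluster with nodes named node1 and node2 (placeholder names), the following commands illustrate one stage command and one component command. Before Oracle Clusterware is installed, run the copy of CVU staged on the installation media (runcluvfy.bat; the script location may vary with your media layout); after installation, run cluvfy from the Oracle Clusterware home:

    runcluvfy.bat stage -pre crsinst -n node1,node2 -verbose
    cluvfy comp nodecon -n node1,node2 -verbose

The first command checks readiness for Oracle Clusterware installation on both nodes; the second checks node connectivity between them.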
See Also:
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for detailed information about Cluster Verification Utility.

Each node in a cluster requires the following hardware:
External shared disks for storing Oracle Clusterware and database files.
The disk configuration options available to you are described in Chapter 3, "Storage Pre-Installation Tasks". Review these options before you decide which storage option to use in your RAC environment. However, note that when Database Configuration Assistant (DBCA) configures automatic disk backup, it uses a database recovery area which must be shared. The database files and recovery files do not necessarily have to be located on the same type of storage.
One private internet protocol (IP) address for each node to serve as the private interconnect. The following must be true for each private IP address:
It must be separate from the public network
It must be accessible on the same network interface on each node
It must have a unique address on each node
The private interconnect is used for inter-node communication by both Oracle Clusterware and RAC. If the private address is available from a network name server (DNS), then you can use that name. Otherwise, the private IP address must be available in each node's C:\WINNT\system32\drivers\etc\hosts file.
During Oracle Clusterware installation, the information you enter as the private IP address determines which private interconnects are used by RAC database instances. If you define more than one interconnect, then they must all be in an up state, just as if their IP addresses were specified in the initialization parameter CLUSTER_INTERCONNECTS. RAC does not fail over between cluster interconnects; if one interconnect is down, then the instances that use it do not start.
Oracle recommends that you use a logical Internet Protocol (IP) address that is available across all private networks, and that you take advantage of any available third-party network interface cards that provide bonding to enable network failover, configuring them according to the vendor's instructions.
One public IP address for each node, to be used as the Virtual IP (VIP) address for client connections and for connection failover. The name associated with the VIP must be different from the default host name.
This VIP must be associated with the same interface name on every node that is part of your cluster. In addition, the IP addresses that you use for all of the nodes that are part of a cluster must be from the same subnet. If you have a domain name server (DNS), then register the host names for the VIP with DNS. The Virtual IP address should not be in use at the time of the installation, because this is a Virtual IP address that Oracle manages.
One public fixed hostname address for each node, typically assigned by the system administrator during operating system installation. If you have a DNS, then register both the fixed IP and the VIP address with DNS. If you do not have DNS, then you must make sure that the public IP and VIP addresses for all nodes are in each node's hosts file.
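For example, if you do not use DNS, entries such as the following (a sketch only; all host names and addresses are placeholders) would appear identically in the hosts file on every node of a two-node cluster, covering the public, VIP, and private addresses described above:

    10.1.1.101    node1
    10.1.1.111    node1-vip
    192.168.0.1   node1-priv
    10.1.1.102    node2
    10.1.1.112    node2-vip
    192.168.0.2   node2-priv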
Note:
In addition to these requirements, Oracle recommends the following:

While installing and using Real Application Clusters software, you should attempt to keep the system clocks on all of your cluster nodes as close as possible to the same time.
Use redundant switches as a standard configuration for all cluster sizes.
Each node in a cluster requires a supported interconnect software protocol to support Cache Fusion and Oracle Clusterware polling. Your interconnect must be certified by Oracle for your platform. You should also have a Web browser, both to enable Oracle Enterprise Manager and to view online documentation.
RAC databases on the same cluster must all be 64-bit or all 32-bit. A mix of 32-bit RAC databases and 64-bit RAC databases on the same cluster is not supported.
See Also:
Oracle Database Platform Guide for Microsoft Windows for additional information about the OSDBA and OSOPER groups, and the SYSDBA and SYSOPER privileges.

Before installing RAC, perform the following procedures:
Ensure that you have a certified combination of operating system and Oracle software version by referring to the OracleMetaLink certification information, which is located at the following Web site:
https://metalink.oracle.com
Click Certify & Availability, and select 1. View Certifications by Product.
Note:
The layout of the OracleMetaLink site and the site's certification policies are subject to change.

Configure a high-speed interconnect that uses a private network. Some platforms support automatic failover to an additional interconnect.
Determine the storage option for your system and configure the shared disk. Oracle recommends that you use Automatic Storage Management (ASM) and Oracle Managed Files (OMF), or a cluster file system. If you use ASM or a cluster file system, then you can also take advantage of OMF and other Oracle Database 10g storage features. If you use RAC on Oracle Database 10g Standard Edition, then you must use ASM.
If you intend to use multiple voting disks, then you need at least three voting disks to provide sufficient voting disk redundancy, and you should ensure that each voting disk is located on physically independent storage. When you start the Oracle Universal Installer (OUI) to install Oracle Clusterware, you are asked to provide the paths for each voting disk you want to configure: one disk, if you have existing redundancy support for the voting disk, or three disks to provide redundant voting disks managed by Oracle.
In addition, if you select multiple voting disks managed by Oracle, then you should ensure that all voting disks are located on a secure network protected from external security threats, and you should ensure that all voting disks are on regularly maintained systems. If a voting disk fails, then you need to fix the physical hardware and bring it back online. The Cluster Synchronization Services (CSS) component of Oracle Clusterware continues to use the other voting disks, and automatically makes use of the restored drive when it is brought online again.
Note:
If you use ASM, then Oracle recommends that you install ASM in a separate home from the Oracle Clusterware home and the Oracle home. You should particularly follow this recommendation if the ASM instance is to manage storage for more than one RAC database. Following this recommendation reduces downtime when upgrading or de-installing different versions of the software.

Install the operating system patches that are listed in the pre-installation chapter in Part II of this book.
Use Cluster Verification Utility (CVU) to help you to verify that your system meets requirements for installing Oracle Database with RAC.
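For example, after configuring the network and shared storage, a stage check such as the following (a sketch, again assuming nodes named node1 and node2) reports whether the hardware and operating system setup is ready for the remaining pre-installation steps:

    runcluvfy.bat stage -post hwos -n node1,node2

Run the equivalent cluvfy command from the Oracle Clusterware home after Oracle Clusterware is installed.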
The following describes the installation procedures that are covered in Part II and Part III of this book.
The pre-installation procedures in Part II explain how to verify user equivalence, perform network connectivity tests, as well as how to set directory and file permissions. Complete all of the pre-installation procedures and verify that your system meets all of the pre-installation requirements before proceeding to the install phase.
Oracle Database 10g Real Application Clusters installation is a two-phase installation. In phase one, use Oracle Universal Installer (OUI) to install Oracle Clusterware as described in Chapter 4, "Installing Oracle Clusterware on Windows-Based Systems". Oracle Clusterware installation starts the Oracle Clusterware processes in preparation for installing Oracle Database 10g. In phase two, use OUI to install the Oracle Database software for use with either single-instance or RAC databases. To install the database software for use with single-instance databases, refer to the Microsoft Windows installation guides. To install the database software with RAC, use OUI as described in Chapter 5, "Installing Oracle Database 10g with Real Application Clusters". Note that the Oracle home that you use in phase one is a home for the Oracle Clusterware software, and it must be different from the Oracle home that you use in phase two.
If OUI detects a previous version of Oracle Database, then OUI starts Database Upgrade Assistant (DBUA) to upgrade your database to Oracle Database 10g Release 2 (10.2). In addition, DBUA displays a Service Configuration page for configuring services in your RAC database.
See Also:
Oracle Database Upgrade Guide for additional information about preparing for upgrades

After the database software installation completes, OUI starts the Oracle assistants, such as Database Configuration Assistant (DBCA), to configure your environment and create your database. For a RAC database, you can later use the DBCA Instance Management feature to add or modify services and instances as described in Chapter 6, "Creating Oracle RAC Databases with the Database Configuration Assistant".
After you create your database, download and install the most recent patch sets for your Oracle Database 10g version as described in the single-instance installation manual or in Chapter 7, "Oracle Real Application Clusters Post-Installation Procedures". If you are using other Oracle products with your RAC database, then you must also configure them.
You must also perform several post-installation configuration tasks to use certain Oracle Database 10g products such as Sample Schema, Oracle Net Services, or Oracle Messaging Gateway. You must also configure Oracle pre-compilers for your operating system and, if desired, configure Oracle Advanced Security.
Use the Companion media to install additional Oracle Database 10g software that may improve performance or extend database capabilities, for example, Oracle JVM, Oracle interMedia, or Oracle Text.
See Also:
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for more information about using the RAC scalability features to add and delete nodes and instances from RAC databases

Oracle Universal Installer (OUI) facilitates the installation of Oracle Clusterware and Oracle Database 10g software. In most cases, you use the graphical user interface (GUI) provided by OUI to install the software. However, you can also use OUI to complete non-interactive (or "silent") installations, without using the GUI. See Appendix B for information about non-interactive installations.
The Oracle Inventory maintains records of Oracle software versions and patches. Each installation has a central inventory where the Oracle home is registered. Oracle software installations have a local inventory directory, whose path location is recorded in the central inventory Oracle home. The local inventory directory for each Oracle software installation contains a list of components and applied interim patches associated with that software. Because your Oracle software installation can be corrupted by faulty inventory information, OUI must perform all read and write operations on Oracle inventories. The Oracle Inventory is installed in the path systemdrive:\program files\oracle.
When you install Oracle Clusterware or RAC, OUI copies the Oracle software onto the node from which you are running it. If your Oracle home is not on a cluster file system, then OUI propagates the software onto the other nodes that you have selected to be part of your OUI installation session. The Oracle Inventory maintains a list of each node that is a member of the RAC database, and lists the paths to each node's Oracle home. This is used to maintain patches and updates for each member node of the RAC database.
If you create your RAC database using OUI, or if you create it later using DBCA, then Oracle Enterprise Manager Database Control is configured for your cluster database. Database Control can manage your cluster database and, for a RAC database, all of its instances.
You can also configure Enterprise Manager Grid Control to manage multiple databases and application servers from a single console. To manage RAC databases in Grid Control, you must install a Grid Control agent on each of the nodes of your cluster. The agent installation is designed to recognize a cluster environment and install across all cluster nodes, so you need to run the installation on only one of the cluster nodes to install the Grid Control agent on all of them.
When OUI installs Oracle software, Oracle recommends that you select a preconfigured database, or use Database Configuration Assistant (DBCA) interactively to create your cluster database. You can also manually create your database as described in procedures posted on the Oracle Technology Network, which is at the following URL:
http://www.oracle.com/technology/index.html
See Also:
Oracle Universal Installer and OPatch User's Guide for more details about OUI
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for information about using Enterprise Manager to administer RAC environments
The Grid Technology Center on the Oracle Technology Network (OTN)
This section discusses storage configuration options that you should consider before installing Oracle Database 10g Release 2 (10.2) with Real Application Clusters. You must prepare the storage required for each phase of the installation and database creation processes.
Oracle recommends using Automatic Storage Management (ASM) or a cluster file system with Oracle Managed Files (OMF) for database storage. This section provides an overview of ASM.
Note that RAC installations using Oracle Database Standard Edition must use ASM for database file storage.
You can use ASM to simplify the administration of Oracle database files. Instead of having to manage potentially thousands of database files, with ASM you need to manage only a small number of disk groups. A disk group is a set of disk devices that ASM manages as a single logical unit. You can define a particular disk group as the default disk group for a database, and Oracle will automatically allocate storage for, create, or delete the files associated with the appropriate database object. When administering the database, you need only refer to database objects by name, rather than by file name.
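As a minimal sketch of this approach, assuming a disk group named DATA already exists and the database uses a server parameter file, you can make the disk group the default file destination and then create a tablespace without naming any datafiles (the tablespace name app_data is a placeholder):

    ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=BOTH;
    CREATE TABLESPACE app_data;

Automatic Storage Management and Oracle Managed Files create and name the underlying datafile in the DATA disk group, and you continue to refer to the tablespace only by name.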
When using ASM with a single Oracle home for database instances on a node, the ASM instance can run from that same home. If you are using ASM with Oracle database instances from multiple database homes on the same node, then Oracle recommends that you run the ASM instance from an Oracle home that is distinct from the database homes. In addition, the ASM home should be installed on every cluster node. Following this recommendation prevents the accidental removal of ASM instances that are in use by databases from other homes during the de-installation of a database's Oracle home.
Benefits of Automatic Storage Management
ASM provides many of the same benefits as storage technologies such as a redundant array of independent disks (RAID) or logical volume managers (LVMs). Like these technologies, ASM enables you to create a single disk group from a collection of individual disk devices. It balances input and output (I/O) loads to the disk group across all of the devices in the disk group. It also implements striping and mirroring to improve I/O performance and data reliability.
However, unlike RAID or LVMs, ASM implements striping and mirroring at the file level. This implementation enables you to specify different storage attributes for individual files in the same disk group.
Disk Groups and Failure Groups
A disk group can include up to 10,000 disk devices. Each disk device can be an individual physical disk, a multiple disk device such as a RAID storage array or logical volume, or even a partition on a physical disk. However, in most cases, disk groups consist of one or more individual physical disks. To enable ASM to balance I/O and storage appropriately within the disk group, all devices in the disk group should have similar, if not identical, storage capacity and performance.
Note:
Do not assign more than one partition on a single physical disk to the same disk group. ASM expects each disk group device to be on a separate physical disk.

Although you can specify logical volumes as devices in an ASM disk group, Oracle does not recommend their use. Because logical volume managers can hide the physical disk architecture, ASM may not operate effectively when logical volumes are specified as disk group devices.
When you add a device to a disk group, you can specify a failure group for that device. Failure groups define ASM disks that share a common potential failure mechanism. An example of a failure group is a set of SCSI disks sharing the same SCSI controller. Failure groups are used to determine which ASM disks to use for storing redundant copies of data. For example, if two-way mirroring is specified for a file, ASM automatically stores redundant copies of file extents in separate failure groups.
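As an illustration only (the disk group name, failure group names, and disk strings are placeholders; on Windows, disk names such as \\.\ORCLDISKDATA0 are typically created by stamping disks with asmtool), the following statement, issued in the ASM instance, creates a normal-redundancy disk group with two failure groups that correspond to two disk controllers:

    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP controller1 DISK '\\.\ORCLDISKDATA0', '\\.\ORCLDISKDATA1'
      FAILGROUP controller2 DISK '\\.\ORCLDISKDATA2', '\\.\ORCLDISKDATA3';

With this definition, ASM places the mirrored copies of each file extent in different failure groups, so the loss of one controller does not result in data loss.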
Redundancy Levels
ASM provides three levels of mirroring, called redundancy levels, that you can specify when creating a disk group. The redundancy levels are:
External redundancy
In disk groups created with external redundancy, the contents of the disk group are not mirrored by ASM. You might choose this redundancy level when:
The disk group contains devices, such as RAID devices, that provide their own data protection
Your use of the database does not require uninterrupted access to data, for example, in a development environment where you have a suitable back-up strategy
Normal redundancy
In disk groups created with normal redundancy, the contents of the disk group are two-way mirrored by default, except the control file, which is three-way mirrored. However, you can choose to create certain files that are not mirrored or that are three-way mirrored in a disk group with normal redundancy. To create a disk group with normal redundancy, you must specify at least two failure groups (a minimum of two devices).
The effective disk space of a disk group that uses normal redundancy is half the total disk space of all of its devices.
High redundancy
In disk groups created with high redundancy, the contents of the disk group are all three-way mirrored. To create a disk group with high redundancy, you must specify at least three failure groups (a minimum of three devices).
The effective disk space of a disk group that uses high redundancy is one-third of the total disk space of all of its devices.
ASM and Installation Types
The type and number of disk groups that you can create when installing Oracle software depends on the type of database you choose to create during the installation, as follows:
Preconfigured database
If you choose to create the default preconfigured database that uses ASM, then OUI prompts you for the disk device names it will use to create a disk group with the default name of DATA.
Advanced database
If you choose to create an advanced database that uses ASM, then you can create one or more disk groups. These disk groups can use one or more devices. For each disk group, you can specify the redundancy level that suits your requirements.
The following table lists the total disk space required in all disk group devices for a typical preconfigured database, depending on the redundancy level you choose to use for the disk group:
Redundancy Level | Total Disk Space Required
---|---
External | 1 GB
Normal | 2 GB (on a minimum of two devices)
High | 3 GB (on a minimum of three devices)
You can also run OUI to install ASM only, without the database and RAC software.
When you configure a database recovery area in a RAC environment, the database recovery area must be on shared storage. When Database Configuration Assistant (DBCA) configures automatic disk backup, it uses a database recovery area that must be shared.
If the database files are stored on a cluster file system, then the recovery area can also be shared through the cluster file system.
If the database files are stored on an Automatic Storage Management (ASM) disk group, then the recovery area can also be shared through ASM.
If the database files are stored on raw devices, then you must use either a cluster file system or ASM for the recovery area.
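For example, a minimal sketch that points the recovery area at a shared ASM disk group (the disk group name +RECOVERY and the size are placeholders, and the database is assumed to use a server parameter file):

    ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH SID='*';
    ALTER SYSTEM SET db_recovery_file_dest = '+RECOVERY' SCOPE=BOTH SID='*';

Set the size parameter before the destination parameter; in a RAC database, all instances must use the same recovery area location.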
Note:
ASM disk groups are always valid recovery areas, as are cluster file systems. Recovery area files do not have to be in the same location where datafiles are stored. For instance, you can store datafiles on raw devices, but use ASM for the recovery area.

Oracle recommends that you use the following Oracle Database 10g features to simplify RAC database management:
Enterprise Manager—Use Enterprise Manager to administer your entire processing environment, not just the RAC database. Enterprise Manager enables you to manage a RAC database with its instance targets, listener targets, host targets, and a cluster target, as well as ASM targets if you are using ASM storage for your database.
Automatic undo management—Automatically manages undo processing.
Automatic segment-space management—Automatically manages segment freelists and freelist groups.
Locally managed tablespaces—Enhances space management performance.
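For example, assuming Oracle Managed Files is configured as described earlier in this chapter, a locally managed tablespace with automatic segment-space management can be created with a statement such as the following (the tablespace name is a placeholder; locally managed extents are the default in Oracle Database 10g):

    CREATE TABLESPACE app_data
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT AUTO;

Automatic undo management is enabled by setting the UNDO_MANAGEMENT initialization parameter to AUTO, with each RAC instance assigned its own undo tablespace.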
See Also:
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for more information about these features in RAC environments

Oracle Database 10g provides single-instance database software and the additional components to operate RAC databases. Some RAC-specific components include:
Oracle Clusterware
A RAC-enabled Oracle home
You must provide OUI with the names of the nodes on which you want to install Oracle Clusterware. The Oracle Clusterware home can be either shared by all nodes, or private to each node, depending on your responses when you run OUI. The home that you select for Oracle Clusterware must be different from the RAC-enabled Oracle home.
When third-party vendor clusterware is present, Oracle Clusterware may interact with the third-party vendor clusterware. For Oracle Database 10g on Windows, Oracle Clusterware coexists with but does not interact with previous Oracle clusterware versions.
Note:
Versions of cluster manager previous to Oracle Database 10g were sometimes referred to as "Cluster Manager". In Oracle Database 10g, this function is performed by an Oracle Clusterware component known as Cluster Synchronization Services (CSS). The OracleCSService, OracleCRService, and OracleEVMService replace the service known prior to Oracle Database 10g as OracleCMService9i.

All instances in RAC environments share the control file, server parameter file, redo log files, and all datafiles. These files reside on a shared cluster file system or on shared disks. Either of these types of file configurations is accessed by all the cluster database instances. Each instance also has its own set of redo log files. During failures, shared access to redo log files enables surviving instances to perform recovery.
You can install and operate different versions of Oracle cluster database software on the same computer as described in the following points:
With Oracle Database 10g Release 2 (10.2), if you have a pre-existing Oracle home, then you must install the database into the existing Oracle home. You should install Oracle Clusterware in a separate Oracle Clusterware home. Each node can have only one Oracle Clusterware home.
During installation, Oracle Universal Installer (OUI) prompts you to install additional Oracle Database 10g products if you have not already installed all of them.
OUI also enables you to de-install and re-install Oracle Database 10g Real Application Clusters if needed.
If OUI detects an earlier version of a database, then OUI asks you about your upgrade preferences. You have the option to upgrade one of the previous-version databases with DBUA or to create a new database using DBCA. The information collected during this dialog is passed to DBUA or DBCA after the software is installed.
Note:
Do not move Oracle binaries from the Oracle home to another location. Doing so can cause dynamic link failures.

The preferred method to clone Oracle Clusterware and RAC software is to use Enterprise Manager Grid Control. The following sections provide a summary of the command-line procedures for deploying RAC in grid environments with large numbers of nodes using cloned Oracle Clusterware and RAC images:
See Also:
For detailed information about cloning RAC and Oracle Clusterware images, refer to the following documents:

Cloning, and adding and deleting nodes:
Oracle Universal Installer and OPatch User's Guide
Additional information about adding and deleting nodes:
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide
This section outlines the procedure required to clone an existing Oracle Clusterware home from one node (the source node) to one or more other nodes (the target nodes). The procedure consists of the following tasks:
Ensure that Oracle Clusterware software is installed successfully on the source node. You can use CVU for this task.
As a Windows administrative user, create a zip file of the Oracle Clusterware home directory, selecting the "Save full path info" option.
On a selected target node, create an Oracle Clusterware home directory, and copy the Oracle Clusterware zip file from the source node to the target node's Oracle Clusterware home.
As a Windows administrative user, extract the zip file contents, selecting the "Use folder names" option.
Repeat steps 3 and 4 on each of the other target nodes, unless the Oracle Clusterware home is on a shared storage device.
On each of the target nodes, run OUI in clone mode as described in Oracle Universal Installer and OPatch User's Guide.
Complete the post-cloning installation instructions as described in Oracle Universal Installer and OPatch User's Guide.
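After completing the post-cloning steps, you can optionally use CVU to confirm that Oracle Clusterware is functioning on the target nodes, for example (a sketch, assuming nodes named node1 and node2):

    cluvfy stage -post crsinst -n node1,node2 -verbose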
Complete the following tasks to clone a RAC database image on multiple nodes:
Ensure that Oracle Database with RAC software is installed successfully on the source node.
Create a zip file of the Oracle home directory, selecting the "Save full path info" option.
On a selected target node, create an Oracle home directory, and copy the Oracle home zip file from the source node to the target node's Oracle home.
Extract the zip file contents, selecting the "Use folder names" option.
Repeat steps 3 and 4 on each of the other target nodes, unless the Oracle home is on a shared storage device.
On each of the target nodes, run OUI in clone mode as described in Oracle Universal Installer and OPatch User's Guide.
Complete the post-cloning installation instructions as described in Oracle Universal Installer and OPatch User's Guide.
Run the configuration assistant NetCA on a local node of the cluster and, when prompted, provide a list of all nodes that are part of the cluster.
Run the configuration assistant DBCA to create the database.
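Before running DBCA, you can optionally use CVU to confirm that the cluster and the cloned Oracle home are ready for database configuration, for example (a sketch; the node names and the Oracle home path are placeholders):

    cluvfy stage -pre dbcfg -n node1,node2 -d C:\oracle\product\10.2.0\db_1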