Oracle9iAS Containers for J2EE User's Guide Release 2 (9.0.2) Part Number A95880-01
This chapter discusses concepts of clustering, and provides instructions on how to manage clusters.
A cluster is a set of application server instances configured to act in concert to deliver greater scalability and availability than a single instance can provide. While a single application server instance can only leverage the operating resources of a single host, a cluster can span multiple hosts, distributing application execution over a greater number of CPUs. While a single application server instance is vulnerable to the failure of its host and operating system, a cluster continues to function despite the loss of an operating system or host, hiding any such failure from clients.
Clusters leverage the combined power and reliability of multiple application server instances while maintaining the simplicity of a single application server instance. For example, browser clients of applications running in a cluster interact with the application as if it were running on a single server. The client has no knowledge of whether the application is running on a single application server or in an application server cluster. From the management perspective, an application server administrator can perform operations on a cluster as if the administrator were interacting with a single server. An administrator can deploy an application to an individual server, and the application is propagated automatically to all application server instances in the cluster.
The following sections discuss how application server clustering increases scalability, availability, and manageability.
Oracle9iAS clustering enables you to scale your system beyond the limitations of a single application server instance on a single host. Figure 9-1 shows how a cluster unifies multiple application server instances spread over multiple hosts to collectively serve a single group of applications. In this way, clustering makes it possible to serve increasing numbers of concurrent users after the capacity of a single piece of hardware is exhausted.
Clients interact with the cluster as if they are interacting with a single application server. An administrator can add an application server instance to the cluster during operation of the cluster, increasing system capacity without incurring downtime.
Clients access the cluster through a load balancer which hides the application server configuration. The load balancer can send requests to any application server instance in the cluster, as any instance can service any request. An administrator can raise the capacity of the system by introducing additional application server instances to the cluster, each of which derives its configuration from a shared Oracle9iAS Metadata Repository.
Oracle9iAS clustering enables you to achieve a higher level of system availability than is possible with only a single application server instance. An application running on a single instance of an application server is dependent on the health of the operating system and host on which the server is running. In this case, the host is a single point of failure: if the host goes down, the application becomes unavailable.
An application server cluster eliminates the single point of failure by introducing redundancy and failover into the system. Any application server instance in the cluster can service any client request, and the failure of any single instance or host does not bring down the system. Client session state is replicated throughout the cluster, thereby protecting against the loss of session state in case of process failure. The extent of session state replication is configurable by the administrator.
Figure 9-2 illustrates how application server clusters enable higher availability by providing redundancy and backup and eliminating a single point of failure. Clients access the cluster through a load balancer which can send requests to any application server instance in the cluster. In the case that an application server instance becomes unavailable, the load balancer can continue forwarding requests to the remaining application server instances, as any instance can service any request.
Figure 9-3 demonstrates how managed clustering uses Enterprise Manager. While any clustered system requires all instances to be similarly configured in order to function properly, Oracle9iAS managed clustered instances synchronize their configurations automatically, relieving the administrator of the responsibility to manually update each individual instance. Using Enterprise Manager, the administrator can make configuration changes as if on a single application server instance. Applicable changes are propagated automatically to all instances in the cluster.
Oracle9iAS cluster management simplifies the tasks of creating and administering clusters and reduces the chance of human error corrupting the system. An administrator creates a cluster in a single management operation. Then, the administrator adds the initial application server instance to the cluster to define the base configuration for the cluster. The additional instances automatically inherit this base configuration.
Oracle9iAS clustering applies to the synchronization and management of Oracle HTTP Server (OHS) and Oracle9iAS Containers for J2EE (OC4J) components.
Other Oracle9iAS components, such as Oracle9iAS Web Cache, may support a component-specific clustering model or cluster-like functionality. This is separate from application server clustering and is not discussed in this chapter. Please see the component documentation for further details. For more information about Oracle9iAS Web Cache clustering, see Oracle9iAS Web Cache Administration and Deployment Guide.
This chapter discusses managed application server clusters that offer scalability, availability, and manageability. Managed application server clusters require a metadata repository to store shared configuration data.
Oracle9iAS also enables you to create non-managed application server clusters that do not require a metadata repository and therefore have no database dependency. Non-managed clusters provide scalability and availability, but not manageability. In a non-managed cluster, it is your responsibility to synchronize the configuration of the application server instances. Figure 9-4 illustrates that a non-managed cluster does not require a database, but you have to configure each application server instance yourself.
If you want to cluster J2EE applications and do not want to use a metadata repository, there are two types of non-managed clusters that you can use:
Create a non-managed application server cluster if you want to use both OHS and OC4J. In a non-managed application server cluster, mod_oc4j load-balances requests to all OC4J instances in the cluster.
For more information on non-managed application server clustering, see the Oracle9iAS page on OTN at http://otn.oracle.com/products/ias.
Create an OC4J-only cluster if you want to use the standalone OC4J that is available for download from OTN. In an OC4J-only cluster, the Java load balancer load-balances requests to all OC4J instances in the cluster. An OC4J-only cluster has a lightweight disk footprint, but the Java load balancer can be a single point of failure.
For more information on OC4J-only clustering, see the OC4J page on OTN at http://otn.oracle.com/tech/java/oc4j.
A cluster coordinates several application server instances and their components. The roles of the components included in the cluster are described in the following sections:
Figure 9-5 shows the architecture of a farm and a cluster. There are three application server instances, where each instance shares the same Oracle9iAS Metadata Repository within an infrastructure. Thus, all three application server instances are part of the same farm.
Application server instances 1 and 2 are involved in a cluster together. In front of the cluster is a front-end load balancer. Included within each application server instance are its manageability features--Oracle Process Management and Notification (OPMN) and Dynamic Configuration Management (DCM)--and its installed components--Oracle HTTP Server and Oracle9iAS Containers for J2EE (OC4J).
After you have created a cluster, you can add a load balancer in front of all application server instances in the cluster, which provides availability and scalability for the application server instances.
We recommend that you purchase and install a hardware load balancer for the best performance. Alternatively, you can use Oracle9iAS Web Cache as a load balancer, although it can then become a single point of failure. See the Oracle9iAS Web Cache Administration and Deployment Guide for instructions on how to set up Web Cache as the load balancer for your cluster.
When you install Oracle9iAS, you have the option of installing the Oracle9iAS Infrastructure. An Oracle9iAS Infrastructure provides Oracle Internet Directory, Oracle9iAS Single Sign-On, and the Oracle9iAS Metadata Repository. The metadata repository is an Oracle9i database that is used to store the application server instance information and configuration. The application server instance tables are created in the metadata repository. Multiple application server instances can share the metadata repository of the infrastructure.
Application server instances associate with an infrastructure either during installation or through the Enterprise Manager after installation.
A farm is a group of multiple application server instances that associate with the same metadata repository. The application server instances that belong to a farm can be installed anywhere on the network.
Note: This chapter does not define what an infrastructure or a farm is. See the Concepts chapter in the Oracle9i Application Server Administrator's Guide for a full description.
A cluster is a logical group of application server instances that belong to the same farm. Each application server instance may be part of only one cluster. If an instance is part of a cluster, then all of its configured components are implicitly part of that cluster. To be contained in a cluster, an application server instance can be configured only with OHS and OC4J components. A cluster can include zero or more application server instances.
All application server instances involved in the cluster have the same "cluster-wide" configuration. If you modify the configuration on one application server instance, then the modification is automatically propagated across all instances in the cluster.
Note: "Instance-specific" configuration parameter modifications are not propagated. For a description of these parameters, see "Instance-Specific Parameters".
An application server instance consists of a single Oracle HTTP Server and one or more OC4J instances. It is a single installation in one Oracle home. If you have multiple application servers on a single host, each is installed into its own Oracle home and uses separate port numbers.
To manage clusters from Enterprise Manager, the application server uses a metadata repository for storing its tables and configuration. Each application server instance in the cluster has the same base configuration. The base configuration contains the cluster-wide parameters and excludes instance-specific configuration. If you modify any of the cluster-wide configuration, the modifications are propagated to all other application server instances in the cluster. If you modify an instance-specific parameter, it is not propagated, because it is applicable only to the specified application server instance. See "Instance-Specific Parameters" for a listing of the instance-specific parameters; all other parameters are cluster-wide.
In order for each application server instance to be a part of a cluster, the following must be true:
Note: Oracle9iAS Web Cache provides its own clustering functionality separate from application server clustering. See the Oracle9iAS Web Cache Administration and Deployment Guide for more information.
To cluster application server instances, do the following:
The base configuration includes the cluster-wide properties. It does not include instance-specific properties. See "Instance-Specific Parameters" for more information about instance-specific properties.
Once grouped in the same cluster, these application server instances will have the following properties:
Each application server instance contains management features that manage and monitor the application server instance, its components, and how it performs in a cluster. The management features do the following:
All of these activities are provided by the following management features:
Distributed Configuration Management (DCM) manages configuration by propagating the cluster-wide configuration for the application server instances and their components. When you add additional application server instances to the cluster, it is the DCM component that automatically replicates the base configuration to all instances in the cluster. When you modify the cluster-wide configuration, DCM propagates the changes to all application server instances in the cluster.
DCM is a management feature in each application server instance. However, it is not a process that exists at all times. DCM is invoked either by Enterprise Manager or manually by a user through dcmctl to do the following:
You can also manually execute the DCM command-line tool, dcmctl, to perform these duties. However, there are restrictions on how to use dcmctl, which are detailed below:
See Also: Appendix A, "DCM Command-Line Utility (dcmctl)" for directions on how to perform the previous functions with the dcmctl tool.
Oracle Process Management and Notification (OPMN) manages Oracle HTTP Server and OC4J processes within an application server instance. It channels all events from the different components to all components interested in receiving them.
OPMN consists of the following two components:
The Oracle Process Manager manages all Oracle HTTP Server and OC4J-related processes and is responsible for starting, restarting, shutting down, and detecting the death of any Oracle HTTP Server or OC4J process.
The Oracle Process Manager starts or stops each process according to the characteristics configured in the opmn.xml configuration file, or it waits for a command from Enterprise Manager to start processes.
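As a rough illustration, the process characteristics that the Oracle Process Manager reads from opmn.xml might take a form along these lines. This is a hedged sketch only: the element and attribute names shown (ias-instance, http-server, oc4j, instanceName, numProcs) are assumptions for illustration and may not match the exact 9.0.2 schema; consult the opmn.xml file in your own installation for the authoritative format.

```xml
<!-- Hypothetical sketch of an opmn.xml fragment; element and attribute
     names are illustrative assumptions, not the exact 9.0.2 schema. -->
<ias-instance>
  <!-- The Oracle HTTP Server process that OPMN starts and monitors -->
  <http-server/>
  <!-- An OC4J instance; OPMN starts, restarts, and detects the death
       of the OC4J processes it defines -->
  <oc4j instanceName="OC4J_home" numProcs="1"/>
</ias-instance>
```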
The Oracle Notification System is the transport mechanism for failure, recovery, startup, and other related notifications between components in Oracle9iAS. It operates according to a subscriber-publisher model, wherein any component that wishes to receive an event of a certain type subscribes to the Oracle Notification System. When such an event is published, the Oracle Notification System sends it to all subscribers.
All Oracle HTTP Servers know about all active OC4J processes in the cluster. This enables the Oracle HTTP Servers to load balance incoming requests across any of the OC4J processes, including the OC4J processes in their own application server instance as well as those in other application server instances in the cluster. The Oracle Notification System notifies all Oracle HTTP Servers whenever any OC4J process is started, restarted, or stopped, or when one dies.
The application server is installed with several different types of components. However, to be involved in a cluster, each application server instance can only contain one Oracle HTTP Server (OHS) and one or more Oracle9iAS Containers for J2EE (OC4J) components. As noted above, Web Cache can be installed, but it will not be clustered within this environment. Web Cache has its own clustering model.
The Oracle HTTP Server (OHS) is a Web server for the application server instance. It serves client requests. In addition, it forwards OC4J requests to an active OC4J process. Because of this, OHS is a natural load balancer for OC4J instances. When you have a single application server instance, the OHS handles the incoming requests for all of the OC4J processes in this sole application server instance. However, in a clustered environment, the OHS is updated with information about existing OC4J processes by OPMN in all application server instances in the cluster. Thus, the OHS can do the following:
OPMN starts (or restarts) each OC4J process. OPMN notifies each Oracle HTTP Server (OHS) in the cluster of each OC4J process. Thus, any OHS can load balance incoming requests among any OC4J process in the cluster.
Figure 9-6 demonstrates how the two Oracle HTTP Servers in the cluster know about both of the OC4J processes. It does not matter that one OC4J process exists in a separate application server instance, which can be installed on a separate host. The OPMN component in each application server instance notifies both Oracle HTTP Servers of the OC4J processes when they are initialized.
The OC4J instance is the entity to which J2EE applications are deployed and configured. It defines how many OC4J processes exist within the application server and the configuration for these OC4J processes. The OC4J process is what executes the J2EE applications for the OC4J instance.
The OC4J instance has the following features:
Within the application server instance, you can configure multiple OC4J instances, each with its own number of OC4J processes. This is an advantage for configuration management and application deployment across separate OC4J processes in your cluster.
Figure 9-7 demonstrates the OC4J_home default OC4J instance. In the context of a cluster, the OC4J instance configuration is part of the cluster-wide configuration. Thus, the OC4J_home instance, configured on the first application server instance, is replicated on all other application server instances.
The number of processes in each OC4J_home instance is an instance-specific parameter, so you must configure the OC4J_home instance separately on each application server instance for the number of OC4J processes that exist on that instance. Figure 9-7 shows that the OC4J_home instance on application server instance 1 contains two OC4J processes, while the OC4J_home instance on application server instance 2 contains only one OC4J process. Each OC4J instance defaults to one OC4J process.
The OC4J process is the JVM process that executes J2EE applications. Each OC4J process is contained in an OC4J instance and inherits its configuration from the OC4J instance. All applications deployed to an OC4J instance are deployed to all OC4J processes in the OC4J instance.
You can define one or more OC4J processes within an OC4J instance, so that J2EE requests can be load balanced and have failover capabilities.
The configuration for the number of OC4J processes is instance-specific. Thus, you must configure each OC4J instance in each application server instance with the number of OC4J processes you want to start up for that OC4J instance. The default is one OC4J process.
Each host that you install the application server instances on has different capabilities. To maximize the hardware capabilities, configure the number of OC4J processes in each OC4J instance that will use these capabilities properly. For example, you can configure a single OC4J process on host A and five OC4J processes on host B.
When you define multiple OC4J processes, you enable the following:
The OC4J processes involved in the cluster can replicate application state to all OC4J processes. Once you configure replication, OC4J handles the propagation of the application state for you.
If one OC4J process fails, then another OC4J process--which has had the application state replicated to it--takes over the application request. When an OC4J process fails during a stateful request, the OHS forwards the request in the following order:
There are two types of failure that you want to protect against: software failure and hardware failure.
An island is a logical grouping of OC4J processes that allows you to determine which OC4J processes will replicate state.
In each OC4J instance, you can have more than one OC4J process. If every OC4J process replicated state to every other OC4J process, the CPU load could increase significantly. To avoid this performance degradation, the OC4J instance enables you to subgroup your OC4J processes. The subgroup is called an island.
To ensure that the CPU load is partitioned among the processes, the OC4J processes of an OC4J instance can be partitioned into islands. The state for application requests is replicated only to OC4J processes that are grouped within the same island. All applications are still deployed to all OC4J processes in the OC4J instance. The only difference is that the state for these applications is confined to only a subset of these OC4J processes.
The island configuration is instance-specific. The name of the island must be identical in each OC4J instance where you want the island to exist. When you configure the number of OC4J processes on each application server instance, you can also subgroup them into separate islands. The OC4J processes are grouped across application server instances by the name of the island. Thus, the application state is replicated to all OC4J processes within the island of the same name, spanning application server instances.
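Conceptually, the instance-specific island configuration on two application server instances could look like the following sketch. The element and attribute names (oc4j, island, name, numProcs) are hypothetical illustrations, not the documented 9.0.2 syntax; the point is only that the island name must be identical on each instance for the processes to replicate state to one another.

```xml
<!-- Hypothetical sketch; element and attribute names are illustrative. -->
<!-- On application server instance 1: -->
<oc4j instanceName="OC4J_home">
  <island name="default-island" numProcs="2"/>
</oc4j>

<!-- On application server instance 2: the island name must match exactly -->
<oc4j instanceName="OC4J_home">
  <island name="default-island" numProcs="3"/>
</oc4j>
```

Because the island name default-island is the same on both instances, state for Web applications is replicated across all five of these OC4J processes, spanning both hosts.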
The grouping of OC4J processes for state replication differs between EJB applications and Web applications. Web applications replicate state only within the island subgrouping. EJB applications replicate state among all OC4J processes in the OC4J instance and do not use the island subgrouping.
Figure 9-8 demonstrates OC4J processes in islands within the cluster. Two islands are configured in the OC4J_home instance: default-island and second-island. One OC4J process is configured in each island on each application server instance. The OC4J islands, designated within the shaded area, span application server instances.
J2EE applications are always deployed to the OC4J instance, whether or not the application server instance is included in a cluster. However, when the application is deployed to an OC4J instance that is in a cluster, certain configuration steps must be completed:
Enterprise Manager uses a hierarchical approach for configuring and managing your cluster.
Figure 9-9 demonstrates the configuration tree for a cluster.
The following parameters are not replicated across the cluster.
No matter how many application server instances you add to the cluster, the cluster-wide configuration is replicated within the cluster. You control protection against software and hardware failure through how you configure islands and OC4J processes, which are instance-specific parameters.
If you configure more than one OC4J process within your OC4J instance, then when one of these processes fails, another process can take over its work load. Figure 9-10 shows application server instance 1, which is involved in the cluster. Within this application server instance, there are two OC4J processes defined in the default-island in the OC4J_home instance. If the first OC4J process fails, the other can pick up the work load.
Both of these OC4J processes are on the same host; if the host goes down, both OC4J processes fail and the client cannot continue processing.
To protect against hardware failure, you must configure OC4J processes in the same OC4J instance across hosts. Figure 9-11 shows the OC4J_home instance in application server instances 1 and 2. Within the default-island, two OC4J processes are configured on application server instance 1 and three are configured on application server instance 2. If a client is interacting with one of the OC4J processes in application server instance 1 and that process terminates abnormally, the client is redirected automatically to one of the OC4J processes in the default-island in application server instance 2. Thus, your client is protected against hardware failure.
If the client is a stateful application, then the state is replicated only within the same island. In the previous example, there is only a single island, so the state of the application would be preserved.
To enhance performance, you want to divide state replication among islands. However, you must also protect against hardware and software failure within these islands.
The optimal method of protecting against software and hardware failure, while maintaining state with the least number of OC4J processes, is to configure at least one OC4J process on more than one host in the same island. For example, if you have application server instances 1 and 2, then within the OC4J_home instance, you configure one OC4J process in the default-island on each application server instance. Thus, you are protected against hardware and software failure, and your client maintains state if either failure occurs.
As demand increases, you will configure more OC4J processes. To guard against a performance slowdown, separate your OC4J processes into separate islands. For example, if fifteen OC4J processes utilize the hardware efficiently on the two hosts and serve the client demand appropriately, then you could divide these processes into at least two islands. The following shows the fifteen OC4J processes grouped into three islands:
Island Names | Application Server 1 | Application Server 2
---|---|---
default-island | two | three
second-island | two | three
third-island | three | two
The following sections describe how to create a cluster and add application server instances to this cluster using Enterprise Manager:
Note: As an alternative to using Enterprise Manager, you can create a cluster, add application server instances to the cluster, and manage the cluster using the DCM command-line tool. See Appendix A, "DCM Command-Line Utility (dcmctl)" for information on the DCM command-line tool.
From the Oracle9iAS Farm Home Page, you can view a list of all the application server instances that are part of the farm. These application server instances can be clustered.
For more information, see the following topics:
If you have not already done so during installation, you can associate an application server instance with an infrastructure, as follows:
Use the Oracle9iAS Farm Home Page to create a new cluster. The Farm Home Page appears when you open the Enterprise Manager Web site on a host computer that contains an application server instance that is part of a farm.
To create a cluster:
Figure 9-12 shows the Farm Home Page with a single application server instance.
Oracle9iAS displays the Create Cluster page. Figure 9-13 shows this page.
A confirmation message appears.
The new cluster is listed in the Clusters table.
Figure 9-14 shows the Farm Home Page after a cluster is created.
The following sections discuss how you can manage application server instances in a cluster:
To add an application server instance to a cluster:
In this example, the inst1 application server instance is selected.
In this example, the test cluster is selected.
Oracle9iAS adds the application server instance to the selected cluster and then displays a confirmation page.
You will notice that the application server instance disappears from the Standalone Instances section. Also, the number of application server instances displayed for the cluster increases by one. If you display the cluster, you will see that the application server instance was moved into the cluster. Thus, the Standalone Instances section displays only those application server instances that are not a part of any cluster.
Repeat these steps for each additional standalone application server instance you want to add to the cluster.
To remove the application server instance from the cluster, do the following:
When you add or remove an application server instance to or from a cluster, the application server instance is stopped.
The Oracle9iAS Containers for J2EE User's Guide describes how to configure an OC4J Instance. The following sections describe how to configure your OC4J Instance for clustering:
To modify the islands and the number of processes each island contains, do the following:
Figure 9-16 displays the Multiple VM Configuration section.
Configuring state replication for stateful applications is different for Web applications than for EJB applications. To configure state replication for Web applications, do the following:
Add the <distributable/> tag to all web.xml files in all Web applications. If the Web application is serializable, you must add this tag to the web.xml file.
The following shows an example of this tag added to web.xml:

<web-app>
  <distributable/>
  <servlet>
    ...
  </servlet>
</web-app>
The concepts for understanding how EJB object state is replicated within a cluster are described in the Oracle9iAS Containers for J2EE Enterprise JavaBeans Developer's Guide and Reference. To configure EJB replication, you must do the following:
Configure the replication in the orion-ejb-jar.xml file within the JAR file. The type of configuration depends on the type of the bean. See "EJB Replication Configuration in the Application JAR" for full details. You can configure these settings within the orion-ejb-jar.xml file before deployment, or add them through the Enterprise Manager screens after deployment. If you add them after deployment, drill down to the JAR file from the application page.
Modify the orion-ejb-jar.xml file to add the configuration that stateful session beans and entity beans require for state replication. The following sections offer more details:
You configure the replication type for the stateful session bean within the bean deployment descriptor. Thus, each bean can use a different type of replication.
Set the replication attribute of the <session-deployment> tag in the orion-ejb-jar.xml file to "VMTermination". This is shown below:

<session-deployment replication="VMTermination" .../>
Set the replication attribute of the <session-deployment> tag in the orion-ejb-jar.xml file to "endOfCall". This is shown below:

<session-deployment replication="endOfCall" .../>
No static configuration is necessary when using the stateful session context to replicate information across the clustered hosts. To replicate the desired state, set the information that you want replicated and execute the setAttribute method of the StatefulSessionContext class in the server code. This enables you to designate what information is replicated and when it is replicated. The state indicated in the parameters of this method is replicated to all hosts in the cluster that share the same multicast address, username, and password.
Configure the clustering for the entity bean within its bean deployment descriptor.
Modify the orion-ejb-jar.xml file to add the clustering-schema attribute to the <entity-deployment> tag, as follows:

<entity-deployment ... clustering-schema="asynchronous-cache" .../>
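Taken together, the session and entity settings above can appear in a single deployment descriptor. The sketch below is illustrative only: the bean names (CartBean, OrderBean) are hypothetical, and the other attributes each tag requires are elided and must come from your actual deployment.

```xml
<!-- Illustrative sketch; bean names are hypothetical, and the other
     required attributes of each tag are elided for brevity. -->
<orion-ejb-jar>
  <enterprise-beans>
    <!-- Stateful session bean: replicate state at the end of each call -->
    <session-deployment name="CartBean" replication="endOfCall"/>
    <!-- Entity bean: replicate state using the asynchronous-cache schema -->
    <entity-deployment name="OrderBean"
                       clustering-schema="asynchronous-cache"/>
  </enterprise-beans>
</orion-ejb-jar>
```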
In order to participate in Single Sign-On functionality, all Oracle HTTP Server instances in a cluster must have an identical Single Sign-On registration.
As with all cluster-wide configuration, the Single Sign-On configuration is propagated among all Oracle HTTP Server instances in the cluster. However, the initial configuration must be performed manually. On one of the application server instances, define the configuration with the ossoreg.jar tool. Then, DCM propagates the configuration to all other Oracle HTTP Servers in the cluster.
If you do not use a network load balancer, then the Single Sign-on configuration must originate with whatever you use as the incoming load balancer--Web Cache, Oracle HTTP Server, and so on.
To configure a cluster for Single Sign-On, execute the ossoreg.jar command against one of the application server instances in the cluster. This tool registers the Single Sign-On server and the redirect URLs with all Oracle HTTP Servers in the cluster.
Run the ossoreg.jar command with all of the options as follows, substituting your own information for the italicized portions of the parameter values.
The values are described fully in Table 9-1.
Specify the success_url, logout_url, cancel_url, and home_url parameters. These should be HTTP or HTTPS URLs, depending on the site security policy regarding SSL access to Single Sign-On protected resources.
Specify the user with the u option.
java -jar ORACLE_HOME/sso/lib/ossoreg.jar
  -oracle_home_path ORACLE_HOME
  -host sso_database_host_name
  -port sso_database_port_number
  -sid sso_database_SID
  -site_name site_name
  -success_url http://host.domain:port/osso_login_success
  -logout_url http://host.domain:port/osso_logout_success
  -cancel_url http://host.domain:port/
  -home_url http://host.domain:port/
  -admin_id admin_id
  -admin_info admin_info
  -config_mod_osso TRUE
  -u root
  -sso_server_version v1.2
The SSORegistrar tool establishes all information necessary to facilitate secure communication between the Oracle HTTP Servers in the cluster and the Single Sign-On server.
When using Single Sign-On with the Oracle HTTP Servers in the cluster, the KeepAlive directive must be set to OFF because the Oracle HTTP Servers are behind a network load balancer. If the KeepAlive directive is set to ON, the network load balancer maintains state with a single Oracle HTTP Server for the same connection, which results in an HTTP 503 error. Set KeepAlive Off in the Oracle HTTP Server configuration; this directive is located in the httpd.conf file of the Oracle HTTP Server.
The manageability feature of the cluster causes the configuration to be replicated across all application server instances in the cluster; this replicated configuration is the cluster-wide configuration. However, certain parameters must be configured separately on each instance. These parameters are referred to as instance-specific parameters.
The following parameters are instance-specific parameters, which are not replicated across the cluster. You must modify these parameters on each application server instance.
The following are instance-specific parameters within each OC4J instance:
All other parameters are part of the cluster-wide parameters, which are replicated across the cluster.
Figure 9-19 shows the sections where these parameters are modified. These sections are located in the Server Properties off the OC4J Home Page.
In the Command Line Options section, you can add debugging options to the OC4J Options line. For more information about debugging in the OC4J process, see http://otn.oracle.com/tech/java/oc4j.
The following are instance-specific parameters in the Oracle HTTP Server.
The HTTP Server ports and listening addresses are modified on the Server Properties page off the HTTP Server Home Page. The virtual host information is modified by selecting a virtual host from the Virtual Hosts section off the HTTP Server Home Page.
Copyright © 2002 Oracle Corporation. All Rights Reserved.