
5
Optimizing Oracle HTTP Server

This chapter discusses the techniques for optimizing Oracle HTTP Server performance in Oracle9i Application Server.

This chapter contains:

TCP Tuning Parameters (for UNIX)
Network Tuning (for Windows)
Configuring Oracle HTTP Server Directives
Logging
Secure Sockets Layer
Oracle HTTP Server Performance Tips

TCP Tuning Parameters (for UNIX)

Correctly tuned TCP parameters can improve performance dramatically. This section contains recommendations for TCP tuning and a brief explanation of each parameter.

Table 5-1 contains recommended TCP parameter settings and includes references to discussions of each parameter.

Table 5-1  Recommended TCP Parameter Settings for Solaris

Parameter                Setting  Comments
tcp_conn_hash_size       32768    See "Increasing TCP Connection Table Access Speed".
tcp_conn_req_max_q       1024     See "Increasing the Handshake Queue Length".
tcp_conn_req_max_q0      1024     See "Increasing the Handshake Queue Length".
tcp_recv_hiwat           32768    See "Changing the Data Transfer Window Size".
tcp_slow_start_initial   2        See "Changing the Data Transmission Rate".
tcp_close_wait_interval  60000    Parameter name in Solaris release 2.6. See "Specifying Retention Time for Connection Table Entries".
tcp_time_wait_interval   60000    Parameter name in Solaris release 2.7 or later. See "Specifying Retention Time for Connection Table Entries".
tcp_xmit_hiwat           32768    See "Changing the Data Transfer Window Size".

Table 5-2  TCP Parameter Settings for HP-UX

Parameter                    Scope         Default Value  Tuned Value  Comments
tcp_time_wait_interval       ndd /dev/tcp  60,000         60,000       See "Specifying Retention Time for Connection Table Entries".
tcp_conn_req_max             ndd /dev/tcp  20             1,024        See "Increasing the Handshake Queue Length".
tcp_ip_abort_interval        ndd /dev/tcp  600,000        60,000
tcp_keepalive_interval       ndd /dev/tcp  7,200,000      900,000
tcp_rexmit_interval_initial  ndd /dev/tcp  1,500          1,500
tcp_rexmit_interval_max      ndd /dev/tcp  60,000         60,000
tcp_rexmit_interval_min      ndd /dev/tcp  500            500
tcp_xmit_hiwater_def         ndd /dev/tcp  32,768         32,768       See "Changing the Data Transfer Window Size".
tcp_recv_hiwater_def         ndd /dev/tcp  32,768         32,768       See "Changing the Data Transfer Window Size".

Table 5-3  TCP Parameter Settings for Tru64

Parameter              Module               Default Value  Tuned Value
tcbhashsize            sysconfig -r inet    512            16,384 (See "Increasing TCP Connection Table Access Speed".)
tcbhashnum             sysconfig -r inet    1              16 (as of 5.0)
tcp_keepalive_default  sysconfig -r inet    0              1
tcp_sendspace          sysconfig -r inet    16,384         65,535
tcp_recvspace          sysconfig -r inet    16,384         65,535
somaxconn              sysconfig -r socket  1,024          65,535
sominconn              sysconfig -r socket  0              65,535
sbcompress_threshold   sysconfig -r socket  0              600

Table 5-4  TCP Parameter Settings for AIX

Parameter      Module       Default Value  Recommended Value
rfc1323        /etc/rc.net  0              1
sb_max         /etc/rc.net  65,536         131,072
tcp_mssdflt    /etc/rc.net  512            1,024
ipqmaxlen      /etc/rc.net  50             100
tcp_sendspace  /etc/rc.net  16,384         65,536
tcp_recvspace  /etc/rc.net  16,384         65,536
xmt_que_size   /etc/rc.net  30             150

Tuning Linux

Raising Network Limits on Linux Systems for Kernels 2.1.100 or Greater

Linux allows you to use only 15 bits of the TCP window field. This means that you must either multiply all window values by 2, or recompile the kernel without this limitation.

See Also:

Tuning at Compile Time

Tuning a Running System

There is no sysctl application for changing these kernel values. You can change them by editing the virtual files listed below with a text editor such as vi, or by writing to them with echo.

Tuning the Default and Maximum Window Sizes

Edit the files listed below to change kernel values.

Table 5-5  Linux TCP Parameters

Filename                         Details
/proc/sys/net/core/rmem_default  Default receive window
/proc/sys/net/core/rmem_max      Maximum receive window
/proc/sys/net/core/wmem_default  Default send window
/proc/sys/net/core/wmem_max      Maximum send window

You will find other TCP tuning possibilities in /proc/sys/net/ipv4/. A brief description of these parameters is in Documentation/networking/ip-sysctl.txt in the Linux kernel source tree.
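
For example, you can change these values at run time by writing to the /proc files directly (as root). This is a minimal sketch; the values shown are illustrative, not recommendations:

# echo 65535 > /proc/sys/net/core/rmem_default
# echo 65535 > /proc/sys/net/core/rmem_max
# echo 65535 > /proc/sys/net/core/wmem_default
# echo 65535 > /proc/sys/net/core/wmem_max

The settings take effect immediately, but revert to the defaults at reboot; to make them persistent, set them from a system startup script.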

Tuning at Compile Time

The default values of the TCP parameters above are set by a header file in the Linux kernel source tree, /LINUX-SOURCE-DIR/include/linux/skbuff.h. These are only defaults; the values are configurable at run time, as described above.

#ifdef CONFIG_SKB_LARGE
#define SK_WMEM_MAX 65535
#define SK_RMEM_MAX 65535
#else
#define SK_WMEM_MAX 32767
#define SK_RMEM_MAX 32767
#endif

You can change the MAX_WINDOW value in the Linux kernel source file /LINUX-SOURCE-DIR/include/net/tcp.h.

#define MAX_WINDOW 32767
#define MIN_WINDOW 2048


Note:

Never assign values greater than 32767 to the windows without using window scaling.


The MIN_WINDOW definition limits you to using only 15 bits of the window field in the TCP packet header.

For example, suppose you want a 40 kB window and set rmem_default to 40 kB. The stack recognizes that the value is less than 64 kB and does not negotiate a winshift, but because of the second check, you get only 32 kB. To force winshift=1, you must set rmem_default to a value greater than 64 kB; this lets you express the required 40 kB in only 15 bits.

With the tuned TCP stacks, it was possible to achieve a maximum throughput between 1.5 and 1.8 Mbits per second over a 2 Mbit satellite link, as measured with netperf.

Setting TCP Parameters

To set the connection table hash parameter on Solaris, you must add the following line to your /etc/system file, and then restart the system:

set tcp:tcp_conn_hash_size=32768

On Tru64, set tcbhashsize in the /etc/sysconfigtab file.
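
A minimal sketch of the corresponding /etc/sysconfigtab entry; the exact stanza layout may vary by Tru64 release:

inet:
    tcbhashsize = 16384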

A sample script, tcpset.sh, that changes TCP parameters to the settings recommended here, is included in the $ORACLE_HOME/Apache/Apache/bin/ directory.


Note:

If your system is restarted after you run the script, the default settings will be restored and you will have to run the script again. To make the settings permanent, enter them in your system startup file.


Increasing TCP Connection Table Access Speed

If you have a large user population, you should increase the hash size for the TCP connection table. The hash size is the number of hash buckets used to store the connection data. If the buckets are very full, it takes more time to find a connection. Increasing the hash size reduces the connection lookup time, but increases memory consumption.

Suppose your system performs 100 connections per second. If you set tcp_close_wait_interval to 60000 (60 seconds), then there will be about 6000 entries in your TCP connection table at any time (100 connections per second x 60 seconds). Increasing your hash size to 2048 or 4096 will improve performance significantly.

On a system servicing 300 connections per second, changing the hash size from the default of 256 to a number close to the number of connection table entries decreases the average round trip time by up to three to four seconds. The maximum hash size is 262144. Ensure that you increase memory as needed.

To set the tcp_conn_hash_size on Solaris, add the line shown below to your /etc/system file. The parameter will take effect when the system is restarted.

set tcp:tcp_conn_hash_size=32768

On Tru64, set tcbhashsize in the /etc/sysconfigtab file.

Specifying Retention Time for Connection Table Entries

As described in the previous section, when a connection is established, the data associated with it is maintained in the TCP connection table. On a busy system, much of TCP performance (and by extension web server performance) is governed by the speed with which the entry for a specific TCP connection can be accessed in the connection table. The access speed depends on the number of entries in the table, and on how the table is structured (for example, its hash size). The number of entries in the table depends both on the rate of incoming requests, and on the lifetime of each connection.

For each connection, the server maintains the TCP connection table entry for some period after the connection is closed so it can identify and properly dispose of any leftover incoming packets from the client. The length of time that a TCP connection table entry will be maintained after the connection is closed can be controlled with the tcp_close_wait_interval parameter (renamed tcp_time_wait_interval on Solaris 2.7). The default in Solaris 2.x for this parameter is 240,000 ms in accordance with the TCP standard. The four minute setting on this parameter is intended to prevent congestion on the Internet due to error packets being sent in response to packets which should be ignored. In practice, 60,000 ms is sufficient, and is considered acceptable. This setting will greatly reduce the number of entries in the TCP connection table while keeping the connection long enough to discard most, if not all, leftover packets associated with it. We therefore suggest you set:

On Solaris 2.6:

/usr/sbin/ndd -set /dev/tcp tcp_close_wait_interval 60000 

On HP-UX and Solaris 2.7 and higher:

/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 60000 


Note:

If your user population is widely dispersed with respect to Internet topology, you may want to set this parameter to a higher value. You can improve access time to the TCP connection table with the tcp_conn_hash_size parameter.


Increasing the Handshake Queue Length

During the TCP connection handshake, the server, after receiving a request from a client, sends a reply and waits to hear back from the client. The client responds to the server's message, and the handshake is complete. Upon receiving the first request from the client, the server makes an entry in the listen queue. After the client responds to the server's message, the entry is moved to the queue for connections with completed handshakes, where it waits until the server has resources to service it.

The maximum length of the queue for incomplete handshakes is governed by tcp_conn_req_max_q0, which by default is 1024. The maximum length of the queue for requests with completed handshakes is defined by tcp_conn_req_max_q, which by default is 128.

On most web servers, the defaults will be sufficient, but if you have several hundred concurrent users, these settings may be too low. In that case, connections will be dropped in the handshake state because the queues are full. You can determine whether this is a problem on your system by inspecting the values for tcpListenDrop, tcpListenDropQ0, and tcpHalfOpenDrop with netstat -s. If either of the first two values is nonzero, you should increase the maximums.
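
For example, on Solaris you can filter the output down to just these counters with a command like the following (counter names can vary by release):

% netstat -s | egrep 'tcpListenDrop|tcpHalfOpenDrop'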

The defaults are probably sufficient, but Oracle recommends that you increase the value of tcp_conn_req_max_q to 1024. You can set these parameters with:

On Solaris:

% /usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q 1024
% /usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q0 1024

On HP-UX:

prompt>/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max 1024

Changing the Data Transmission Rate

TCP implements a slow-start data transfer to prevent overloading a busy segment of the Internet. With slow start, one packet is sent, an acknowledgment is received, and then two packets are sent. The number of packets sent doubles after each acknowledgment, until the TCP transfer window limits are reached.

Unfortunately, some operating systems do not immediately acknowledge the receipt of a single packet during connection initiation. By default, Solaris sends only one packet during connection initiation, per the TCP standard. This can increase the connection startup time significantly. We therefore recommend increasing the number of initial packets to two when initiating a data transfer. This can be accomplished using the following command:

% /usr/sbin/ndd -set /dev/tcp tcp_slow_start_initial 2

Changing the Data Transfer Window Size

The sizes of the TCP send and receive windows determine how much data can be sent without waiting for an acknowledgment. The default window size is 8192 bytes. Unless your system is memory constrained, these windows should be increased to the maximum size of 32768. This can speed up large data transfers significantly. Use these commands to enlarge the window:

On Solaris:

% /usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 32768
% /usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 32768

On HP-UX:

prompt>/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwater_def 32768
prompt>/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwater_def 32768

Because the client typically receives the bulk of the data, it would help to enlarge the TCP receive windows on end users' systems, as well.

Network Tuning (for Windows)

On Windows systems, there are a number of things to keep in mind when running Oracle HTTP Server.

  1. Be certain that you have sufficient memory. You can monitor system memory usage on the Performance tab of the Task Manager.

  2. Be certain that only the TCP/IP protocol stack is running. Any other running protocol will appear in the list under the Protocols tab of the Control Panel/Network dialog box. To remove it, select it with the mouse and click Remove. If you close the Network dialog box, you will be prompted to restart the system. It is easier, however, to first continue with step 3.

  3. Select the "Maximize Throughput for File Sharing" network optimization scheme. Under the Services tab of the Control Panel/Network dialog box, you can examine the Server properties: select "Server" in the list and click Properties. This brings up a dialog box in which you can choose the criteria for which TCP is optimized. The default setting is "Maximize Throughput for File Sharing", and we recommend that you use it. If it has been set otherwise, reset it to the default and click "OK", then close the Control Panel/Network dialog box. If you changed this setting, you will be prompted to restart the system; if you made any changes in step 2 above or in this dialog box, you should do so.


    Note:

Performance is much better when either "Maximize Throughput for File Sharing" or "Maximize Throughput for Network Applications" is chosen than when either of the other options is chosen. We have also seen the response time under load cut in half when maximizing for file sharing rather than for network applications.


In addition to the above, you can adjust individual TCP/IP parameters in the registry. We do not recommend doing so, as it is complex. Unless you have ample time to test the impact in your environment, limit your TCP/IP tuning to the steps above.

Configuring Oracle HTTP Server Directives

Oracle HTTP Server uses directives in httpd.conf to configure the application server. This configuration file specifies the maximum number of HTTP requests that can be processed simultaneously, logging details, and certain timeouts.

Table 5-6 lists directives that may be significant for performance.

Table 5-6  Oracle HTTP Server Configuration Properties

MaxClients

Limit on the total number of servers running, that is, a limit on the number of clients that can connect simultaneously. If this limit is ever reached, clients are locked out, so it should not be set too low. It is intended mainly as a brake to keep a runaway server from taking the system down with it.

MaxRequestsPerChild

The number of requests each child process is allowed to process before it exits. The child exits to avoid problems after prolonged use, when Apache (and perhaps the libraries it uses) leaks memory or other resources. On most systems this is not really needed, but a few (such as Solaris) have notable leaks in their libraries. For those platforms, set it to something like 10000; a setting of 0 means unlimited.

This value does not include KeepAlive requests after the initial request per connection. For example, if a child process handles an initial request and 10 subsequent "kept-alive" requests, it counts as only 1 request toward this limit.

MaxSpareServers, MinSpareServers

Server-pool size regulation. Rather than making you guess how many server processes you need, Oracle HTTP Server dynamically adapts to the load it sees; that is, it tries to maintain enough server processes to handle the current load, plus a few spare servers to handle transient load spikes (for example, multiple simultaneous requests from a single Netscape browser).

It does this by periodically checking how many servers are waiting for a request. If there are fewer than MinSpareServers, it creates a new spare. If there are more than MaxSpareServers, some of the spares die off.

The default values are suitable for most sites.

Default Values: MaxSpareServers: 10; MinSpareServers: 5

StartServers

The number of servers to start initially. This should be a reasonable ballpark figure; if you expect a sudden load after restart, set this value based on the number of child servers required.

Default Value: 5

Timeout

The number of seconds before incoming receives and outgoing sends time out.

Default Value: 300

KeepAlive

Whether or not to allow persistent connections (more than one request per connection). Set to Off to deactivate.

Default Value: On

MaxKeepAliveRequests

The maximum number of requests to allow during a persistent connection. Set to 0 to allow an unlimited number. If you have long client sessions, you might want to increase this value.

Default Value: 100

KeepAliveTimeout

The number of seconds to wait for the next request from the same client on the same connection.

Default Value: 15 seconds
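
These directives appear in httpd.conf as simple name-value lines. The following sketch restates the default values discussed in this chapter (using 150 for MaxClients, its usual Apache default); it is illustrative, not a tuned configuration:

MaxClients           150
MaxRequestsPerChild  0
MinSpareServers      5
MaxSpareServers      10
StartServers         5
Timeout              300
KeepAlive            On
MaxKeepAliveRequests 100
KeepAliveTimeout     15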

Configuring the MaxClients Directive

The MaxClients directive limits the number of clients that can simultaneously connect to your web server, and thus the number of httpd processes. You can configure this parameter in the httpd.conf file up to a maximum of 8K. If the MaxClients setting is too low, and the limit is reached, clients will be unable to connect.

Tests on a previous release, with static page requests (average size 20K) on a 2-processor, 168 MHz Sun UltraSPARC on a 100 Mbps network, showed the following:

On the system described above, and on 4- and 6-processor, 336 MHz systems, there was no significant performance improvement in increasing the MaxClients setting from 150 to 256, based on static page and servlet tests with up to 1000 users.

Increasing MaxClients when system resources are saturated does not improve performance. When there are no httpd processes available, connection requests are queued in the TCP/IP system until a process becomes available, and eventually clients terminate connections.

If you are using persistent connections, you may require more concurrent httpd server processes.

For dynamic requests, if the system is heavily loaded, it might be better to allow the requests to queue in the network (thereby keeping the load on the system manageable). The question for the system administrator is whether a timeout error and retry is better than a long response time. In this case, the MaxClients setting could be reduced, to act as a throttle on the number of concurrent requests on the server.

How Persistent Connections Can Reduce httpd Process Availability

There are some serious drawbacks to using persistent connections with Oracle HTTP Server. In particular, because httpd processes are single threaded, one client can keep a process tied up for a significant period of time (the amount of time depends on your KeepAlive settings). If you have a large user population and you set your KeepAlive limits too high, clients could be turned away because of insufficient httpd daemons.

The default settings for the KeepAlive directives are:

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15

These settings allow enough requests per connection and time between requests to reap the benefits of the persistent connections, while minimizing the drawbacks. You should consider the size and behavior of your own user population in setting these values on your system. For example, if you have a large user population and the users make small infrequent requests, you may want to reduce the above settings, or even set KeepAlive to off. If you have a small population of users that return to your site frequently, you may want to increase the settings.

Configuring the ThreadsPerChild Parameter (for Windows)

The ThreadsPerChild parameter in the httpd.conf file specifies the number of requests that can be handled concurrently by the HTTP server. Requests in excess of the ThreadsPerChild parameter value wait in the TCP/IP queue. Allowing the requests to wait in the TCP/IP queue often results in the best response time and throughput.

Configuring ThreadsPerChild for Static Page Requests

The more concurrent threads you make available to handle requests, the more requests your server can process. But be aware that with too many threads, under high load, requests will be handled more slowly and the server will consume more system resources.

In in-house tests of static page requests, a setting of 20 ThreadsPerChild per CPU produced good response time and throughput results. For example, if you have four CPUs, set ThreadsPerChild to 80. If, with this setting, CPU utilization does not exceed 85%, you can increase ThreadsPerChild, but ensure that the available threads are in use.
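
A minimal httpd.conf sketch for the four-CPU example above; the value follows the 20-threads-per-CPU guideline and is not a universal recommendation:

ThreadsPerChild 80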

Logging

This section discusses types of logging, log levels, and the performance implications for using logging.

Access Logging

For static page requests, access logging of the default fields results in a 2-3% performance cost.

Configuring the HostNameLookups Directive

By default, the HostNameLookups directive is set to Off, and the server writes the IP addresses of incoming requests to the log files. When HostNameLookups is set to On, the server queries the DNS system on the Internet to find the host name associated with the IP address of each request, then writes the host names to the log.

Performance degraded by about 3% (best case) in Oracle in-house tests with HostNameLookups set to On. Depending on the server load and the network connectivity to your DNS server, the performance cost of the DNS lookup could be higher. Unless you really need to have host names in your logs in real time, it is best to log IP addresses.

On UNIX systems, you can resolve IP addresses to host names off-line, with the logresolve utility found in the $ORACLE_HOME/Apache/Apache/bin/ directory.
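
For example, logresolve reads an access log on standard input and writes the same log, with IP addresses replaced by host names, on standard output. The file names below are illustrative:

% $ORACLE_HOME/Apache/Apache/bin/logresolve < access_log > access_log.resolved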

Error Logging

The server notes unusual activity in an error log. The ErrorLog and LogLevel directives identify the log file and the level of detail of the messages recorded. The default level is warn. There was no difference in static page performance on a loaded system between the warn, info, and debug levels.

Secure Sockets Layer

The Oracle HTTP Server caches a client's Secure Sockets Layer (SSL) session information by default. With session caching, only the first connection to the server incurs high latency. For example, in a simple test connecting to and disconnecting from an SSL-enabled server, the elapsed time for 5 connections was 11.4 seconds without SSL session caching; with SSL session caching enabled, the elapsed time for 5 round trips was 1.9 seconds.

The SSLSessionCacheTimeout directive in httpd.conf determines how long the server keeps a session alive (the default is 300 seconds). The session information is kept in a file. You can specify where to keep the session information using the SSLSessionCache directive; the default location is the $ORACLE_HOME/Apache/Apache/logs/ directory or on Windows systems, %ORACLE_HOME%\Apache\Apache\logs\. The file can be used by multiple Oracle HTTP Server processes.
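
A minimal httpd.conf sketch showing the timeout directive at its default value (the session cache file location is left at its default here; use SSLSessionCache to change it):

SSLSessionCacheTimeout 300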

The duration of an SSL session is unrelated to the use of HTTP persistent connections.

Oracle HTTP Server Performance Tips

The following tips can enable you to avoid or debug potential Oracle HTTP Server (OHS) performance problems:

Analyze Static Versus Dynamic Requests

It is important to understand where your server is spending resources so you can focus your tuning efforts in the areas where the most stands to be gained. In configuring your system, it can be useful to know what percentage of your requests are static and what percentage are dynamic. This is because static pages can be cached by Web Cache. Generally speaking, you want to concentrate your tuning effort on dynamic pages because they are normally more costly to generate. Also, by monitoring and tuning your application, you may find that much of the dynamically generated content, such as catalog data, can be cached, sparing significant resource usage.

Analyze Time Differences Between Oracle HTTP Server and OC4J Servers

In some cases, you may notice a large discrepancy between the average time to process a request in Oracle9iAS Containers for J2EE (OC4J) and the average response time experienced by the user. If the time is not being spent actually doing the work, then it is probably being spent in transport. If you notice a large discrepancy, consider the performance guidelines in the section "Configuring Oracle HTTP Server Directives".

Beware of a Single Data Point Yielding Misleading Results

You can get unrepresentative results when data outliers appear. This can sometimes occur at start-up. To simulate a simple example, assume that you ran a PL/SQL "Hello, World" application for about 30 seconds. Examining the results, you can see that the work was all done in mod_plsql.c:

 /ohs_server/ohs_module/mod_plsql.c
   handle.maxTime:     859330
   handle.minTime:      17099
   handle.avg:          19531
   handle.active:           0
   handle.time:      24023499
   handle.completed:     1230

Note that handle.maxTime is much higher than handle.avg for this module. This is probably because the first request must open a database connection; later requests can use the established connection. To get a better estimate of the average service time for a PL/SQL module, discount the first request and recalculate the average as follows:

(time - maxTime)/(completed - 1)

The values would be:

(24023499 - 859330)/(1230 - 1) = 18847.98

