Oracle9iAS TopLink CMP for Users of BEA WebLogic Guide Release 2 (9.0.3) Part Number B10065-01
A key feature provided by BEA WebLogic Server is the ability to integrate multiple server instances into what clients can view as a single server entity, referred to as a cluster. Once formed, the cluster supports deployment of EJBs and other J2EE components and provides load-balancing and a measure of failover for those components. For more information on BEA WebLogic Server clustering, consult the BEA WebLogic Server documentation.
This chapter discusses how TopLink for BEA WebLogic may be used within a clustered BEA WebLogic Server environment. Certain issues affect how a TopLink application should be configured to ensure that it executes correctly and consistently on a cluster. This chapter discusses those issues, explains the TopLink features that help resolve them, and offers best practices for clustered applications.
A BEA WebLogic Server instance is an operating system process in which a single Java Virtual Machine runs with the BEA WebLogic Server class as its main program. Server instances may be distributed across multiple host machines or run together on the same machine.
When an entity bean is invoked through its remote interface, it must be loaded into a server instance. Once the bean has been loaded, it is said to be pinned to that server: all subsequent invocations of business logic on the same remote interface stub are directed to the bean instance on the server into which it was loaded. This does not preclude the bean from being instantiated on other servers when another remote interface to it is acquired through a finder or through a reference from another bean.
When two or more beans are pinned to the same server, they are said to be co-located. Co-location allows the server to employ optimizations of locality, such as call-by-reference inter-bean invocations.
Other terms and concepts that relate to BEA WebLogic Server clustering are further explained in the BEA WebLogic Server documentation.
When using TopLink for BEA WebLogic in a cluster, the TopLink run-time JARs must be available to all servers in the cluster, or at least to all servers in which TopLink CMP beans are deployed. The beans may be deployed on any number or subset of servers in the cluster, subject to the conditions discussed below in the Static Partitioning section.
Related beans (beans that are associated using a TopLink relationship mapping) require special consideration to provide acceptable performance and correctness. The issues surrounding related beans are discussed in the Relationships section.
Each server in the cluster manages its own cache independently, with additional capabilities as described in "Cache Synchronization". If cache synchronization is not used, the caches must be refreshed manually.
To define relationships between beans, all of the related beans and objects must be co-located. Source and target objects should also be retrieved on the same server.
Co-location of related beans may be achieved in BEA WebLogic Server 6.1 and 7.0 by making use of one or more of the following observations:
- Beans can be deployed to particular servers only.
- Once a bean is created or found, it is pinned to a particular server.
- BEA WebLogic Server attempts to keep all beans accessed within a user transaction on the same server.
- Entity beans accessed through a session bean are instantiated on the same server as the session bean.
The first point leads to partitioning the beans across the cluster as a statically-defined means of co-location. The second point introduces the relevance of pinning to co-location. The last two points convey that using user transactions or session beans can cause the desired co-location to occur.
Beans can be deployed to particular servers only, allowing for static partitioning of beans. Statically partitioning the beans across the cluster provides the required co-location conditions as long as all related beans are deployed in the same server. Other, unrelated beans may be deployed in the same or a different server and, depending upon the amount of predicted access traffic, could be deployed in more than one server. No application code need be modified. Failover is limited, and load-balancing is statically determined. Cache inconsistency is not an issue in this configuration, because beans are only ever loaded on the server in which they were deployed. Such systems may, however, suffer from bottlenecks and overhead costs, because all access to a given bean is directed to the single server on which it is deployed.
Once a bean is created or found, it is pinned to a particular server. BEA WebLogic Server attempts to keep all beans accessed in a given transaction on the same server in an effort to localize the transaction. A transaction cannot be localized if it involves beans that were previously pinned to different servers.
To ensure that all beans in a transaction are local, each bean used should be looked up again in the context of that transaction. Beans that were created or found in previous transactions should be discarded.
There are two common methods of using pinning to dynamically co-locate beans: user transactions and session beans.
Ensuring that all bean invocations occur within an enclosing transaction is one way of influencing where beans get instantiated in the cluster. If the beans are deployed in multiple servers, the user transaction may be initiated on any one of the server instances. It does not matter which server is chosen, because an attempt is made to pin all accessed beans to that server for the duration of the transaction. In this way load-balancing can occur while the co-location demands are still satisfied.
For example, the following code is a portion of a client program that uses a user transaction to co-locate related beans.
UserTransaction transaction = lookupUserTransaction();

// Enclose all construction of relationships in the same transaction
transaction.begin();

// Look up the home interface and the bean even if they have already been looked up previously
Employee emp = lookupEmployeeHome().findByPrimaryKey(new EmployeePK(EMP_ID));
Address address = new Address(EMP_ID, "99 Bank", "Ottawa", "Ontario", "Canada", "K2P 4A1");
emp.setAddress(address);

Project project = lookupProjectHome().findByPrimaryKey(new ProjectPK(PROJ_ID));
emp.addProject(project);

transaction.commit();
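The lookupUserTransaction(), lookupEmployeeHome(), and lookupProjectHome() helpers are not shown in the fragment above. The following is a minimal sketch of one conventional way to implement them with standard JNDI and RMI-IIOP APIs; the JNDI names used here (java:comp/UserTransaction, ejb/EmployeeHome, ejb/ProjectHome) are illustrative assumptions and must match the names under which the application actually binds these objects.

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;
import javax.transaction.UserTransaction;

// Sketch of the helper lookups assumed by the fragment above. The JNDI
// names used here are illustrative and must match the names under which
// the application binds its transaction and home objects.
public class ClientLookups {

    public static UserTransaction lookupUserTransaction() throws NamingException {
        Context ctx = new InitialContext();
        return (UserTransaction) ctx.lookup("java:comp/UserTransaction");
    }

    public static EmployeeHome lookupEmployeeHome() throws NamingException {
        Context ctx = new InitialContext();
        Object home = ctx.lookup("ejb/EmployeeHome");
        // Narrow the remote home reference as required by RMI-IIOP
        return (EmployeeHome) PortableRemoteObject.narrow(home, EmployeeHome.class);
    }

    public static ProjectHome lookupProjectHome() throws NamingException {
        Context ctx = new InitialContext();
        Object home = ctx.lookup("ejb/ProjectHome");
        return (ProjectHome) PortableRemoteObject.narrow(home, ProjectHome.class);
    }
}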
Entity beans accessed through a session bean are instantiated on the same server as the session bean. By moving the application logic from the client into a session bean, the optimization of locality can be exploited so that the bean code runs in the same VM. The client need only invoke a single method on the session bean, and the bean performs all of the required logic on the same server. If the session bean is deployed in every server in the cluster, scalability as well as failover (which must still be handled by the client) can be achieved.
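As an illustration of this pattern, the following is a minimal sketch of a stateless session bean that performs the same relationship construction as the client fragment above, but entirely on the server. The bean and method names are hypothetical; with container-managed transactions the enclosing transaction is supplied by the container, so no UserTransaction code is required.

// Hypothetical session facade (stateless session bean). With container-
// managed transactions, assignAddressAndProject() runs in one transaction
// on a single server, so every bean it touches is co-located there.
public class EmployeeManagerBean implements javax.ejb.SessionBean {

    // Assumed to be deployed with transaction attribute "Required"
    public void assignAddressAndProject(int empId, int projId) throws Exception {
        // Look up the beans in the context of the current transaction
        Employee emp = ClientLookups.lookupEmployeeHome().findByPrimaryKey(new EmployeePK(empId));
        Address address = new Address(empId, "99 Bank", "Ottawa", "Ontario", "Canada", "K2P 4A1");
        emp.setAddress(address);

        Project project = ClientLookups.lookupProjectHome().findByPrimaryKey(new ProjectPK(projId));
        emp.addProject(project);
        // The container commits the transaction when this method returns
    }

    // Standard SessionBean life-cycle callbacks
    public void ejbCreate() {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void setSessionContext(javax.ejb.SessionContext ctx) {}
}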
Whether to use session beans, user transactions, or static partitioning of the beans as a means of achieving co-location depends upon the application, since some of these techniques may not be appropriate for certain models. Regardless of the method, co-location is required in order to define relationships between beans.
Another issue that must be considered when running in a clustered configuration is cache consistency. Under normal conditions a TopLink session in a BEA WebLogic Server is an independent and autonomous object. Changes made to a bean in one server are not reflected in the caches of other servers. This situation can lead to objects in different states existing in multiple servers, resulting in phantom reads or updates to stale data (causing previous changes to be lost), among other incorrect behaviors.
One solution, which may make sense if only some objects must be fresh, is to refresh those objects from the database whenever appropriate. This involves configuring certain finders to refresh and then invoking those queries when the situation warrants. There are two facets to making a query refresh the object: the refresh policy and the cache usage.
Whenever a query is issued there is a possibility that the result from the database is more recent than the cached version. By default, however, if objects are already in the cache they are used instead of the database results, even when a database query was issued. This is a TopLink optimization that reduces the number of objects that have to be built from database results. It may be overridden by setting a refresh policy, which ensures that the objects from the database replace the ones in the cache, if such objects exist there. In this way the cache is always updated with the latest copies of the objects from the database whenever a query completes. Refreshing can be enabled on each TopLink descriptor or only on certain queries, depending upon the nature of the data that can change.
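As a sketch of the descriptor-level approach, the following amendment method enables refreshing on the Employee descriptor. The class name and the use of a static amendment method are illustrative assumptions, and the descriptor API shown should be verified against this release.

import oracle.toplink.publicinterface.Descriptor;

// Hypothetical descriptor amendment method registered with the Employee
// descriptor. alwaysRefreshCache() makes every query's database results
// replace the cached copies of the objects they map to.
public class EmployeeRefreshAmendment {
    public static void addToDescriptor(Descriptor descriptor) {
        descriptor.alwaysRefreshCache();
        // To refresh only on particular queries instead, configure the query:
        //   ReadObjectQuery query = new ReadObjectQuery(Employee.class);
        //   query.refreshIdentityMapResult();
    }
}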
Note: Refreshing does not prevent phantom reads from occurring. See "Refreshing finder results".
When a findByPrimaryKey finder is invoked, the object in the cache is returned if it exists there. The refresh policy is not applied because no database query is issued. In this case, disabling cache hits is required to prevent the finder from using the cached object in case it was deleted or modified by another server. This can be achieved by setting the caching option element to DoNotCheckCache in the bean deployment descriptor. For more information, see "Caching options".
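For comparison, a minimal sketch of the analogous programmatic descriptor settings follows. The amendment class is hypothetical, and for CMP beans the DoNotCheckCache deployment descriptor option described above remains the supported approach.

import oracle.toplink.publicinterface.Descriptor;

// Hypothetical amendment: disableCacheHits() forces primary key finders
// to query the database instead of returning the cached object, so
// deletions and updates made by other servers are observed.
public class EmployeeCacheHitsAmendment {
    public static void addToDescriptor(Descriptor descriptor) {
        descriptor.disableCacheHits();
        // Pair with alwaysRefreshCache() so the fetched row replaces any
        // stale cached copy instead of being discarded in its favor.
        descriptor.alwaysRefreshCache();
    }
}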
Cache synchronization automatically causes updates made to one TopLink cache to be propagated to all other server caches. This obviates the need for manual refreshing and can provide a consistent view of cached data across the cluster. This feature is enabled by supplying a value for the cache-synchronization element in the TopLink deployment descriptor.
Cache synchronization is currently supported at the project level. This means that all updates to beans and dependent objects in a given project marked for propagation will be propagated to the caches on all other servers. Propagation at a finer granularity, such as individual beans or objects, may be available in future releases.
Propagation of changes can be configured to function in one of two modes: synchronous or asynchronous. Users should choose the mode that best meets their requirements. For many applications synchronous mode is more appropriate, because it provides a tighter data consistency model.
Once a transaction has been committed, whether through an explicit client commit call or through a bean method invocation that triggered a transaction begin and subsequent commit, the changes to objects in the transaction must be merged into the TopLink cache. When cache synchronization is in effect, a remote merge process is also initiated.
Remote merging involves merging the changes into all other remote TopLink caches after the local merge has completed. If problems occur during update propagation, the remote merge process is typically their root cause. Each server must be able to merge the changes into its local cache and end up with a consistent version of the object. As mentioned above, TopLink does not begin the merge or update process until the database transaction has already been committed. This is beneficial because it keeps uncommitted data out of the shared cache, but it must be kept in mind wherever transactional synchronization is a consideration: if a merge fails, there is no way to roll back the changes already made to the database (and it is questionable whether doing so would be a good idea in any case). As a consequence, failures during remote merging can leave a cache in an inconsistent state. It is therefore important to handle any errors that occur by performing cache normalization actions, such as resetting the cache, or even the server.
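As a sketch of such a normalization action, the handler below discards the session's identity maps so that subsequent queries rebuild the cache from the database. The initializeAllIdentityMaps() call is assumed to be available on the session in this release, and resetting the cache should only be done when no transactions are in flight, since it discards all cached objects.

import oracle.toplink.sessions.Session;

// Sketch of a cache normalization action after a failed remote merge.
// Assumes the session exposes initializeAllIdentityMaps(); discarding the
// identity maps forces later queries to re-read from the database.
public class MergeFailureHandler {
    public void handleMergeFailure(Session session, Exception error) {
        System.err.println("Remote merge failed; resetting cache: " + error);
        session.initializeAllIdentityMaps();
    }
}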
When updates are synchronously propagated, the committing client is blocked until the remote merge process is complete. This provides the client with the assurance that its changes have either successfully reached all remote servers, or that an error occurred and was already handled by its server-side handler. Thus, when the client gets control back, it can invoke another business method and rely upon the receiving server having already incorporated the changes of the client's previous transaction.
Depending upon the requirements of the application, asynchronous operation may be a more efficient approach to updating the distributed caches. When a transaction commits, the updates are sent to the remote servers while control returns immediately to the committing client. Although there are no guarantees of delivery, errors resulting from merging the updates can be caught and handled by server-side handlers installed by the application, just as in the synchronous case. However, because the client has already been unblocked, there is no opportunity to take any action that would affect the calling client, and the client may already have gone on to invoke other business methods against the server whose cache merge failed.
The asynchronous mode of operation is particularly appropriate when freshness constraints are soft or less of an issue. This includes applications where it is acceptable to occasionally read stale data immediately after an update, as long as the cached data is updated within a reasonable period of time.
Cache synchronization is configured using the toplink-ejb-jar.xml deployment descriptor (see Chapter 4, "EJB Entity Bean Deployment"). It is invoked using the optional cache-synchronization element and configured using a number of optional sub-elements:
cache-synchronization: When provided, indicates that changes made to one TopLink cache in a cluster should be automatically propagated to all other server caches. The following sub-elements may also be provided:
- is-asynchronous: When True, changes are propagated asynchronously; when False, propagation is synchronous.
- should-remove-connection-on-error: When True, the connection to a remote server is removed when an error occurs while propagating changes to it.
Following is an example TopLink descriptor that specifies cache synchronization:
<toplink-ejb-jar>
   <session>
      <name>ejb20_AccountDemo</name>
      <project-class>
         oracle.toplink.demos.ejb20.cmp.account.AccountProject
      </project-class>
      <login>
         <connection-pool>ejbPool</connection-pool>
      </login>
      <cache-synchronization>
         <is-asynchronous>True</is-asynchronous>
         <should-remove-connection-on-error>True</should-remove-connection-on-error>
      </cache-synchronization>
   </session>
</toplink-ejb-jar>
To protect objects from being written by more than one client at a time, and to protect against updates based on stale data, optimistic locking causes a check to occur at transaction commit time. This check ensures that no other client has modified the data since it was last read by the client making the update. If a stale write is detected, an OptimisticLockException is thrown and the commit fails. This strategy should be used regardless of whether updates are refreshed manually or propagated automatically.
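On the client, the failed commit typically surfaces when the transaction completes. The following is an illustrative retry pattern; the business methods shown and the exact form in which the OptimisticLockException reaches the client (it may arrive wrapped in a rollback exception) are assumptions.

import javax.transaction.UserTransaction;

// Illustrative client-side retry for an optimistic lock conflict. The
// setSalary()/getSalary() business methods are hypothetical.
public class SalaryUpdater {
    public void updateSalaryWithRetry(int empId, double raise) throws Exception {
        for (int attempt = 0; attempt < 3; attempt++) {
            UserTransaction transaction = ClientLookups.lookupUserTransaction();
            transaction.begin();
            try {
                // Re-find the bean inside the new transaction so current
                // data (including the lock version) is used for the update.
                Employee emp = ClientLookups.lookupEmployeeHome()
                        .findByPrimaryKey(new EmployeePK(empId));
                emp.setSalary(emp.getSalary() + raise);
                transaction.commit();
                return; // success
            } catch (Exception e) {
                // A stale write is reported at commit time; roll back (if
                // the container has not already done so) and retry.
                try { transaction.rollback(); } catch (Exception alreadyRolledBack) {}
            }
        }
        throw new Exception("Update failed after repeated optimistic lock conflicts");
    }
}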
Note: Using optimistic locking by itself does not protect against phantom reads or against different copies of the same object existing on multiple nodes. See "Optimistic Locking" in the Oracle9iAS TopLink Mapping Workbench Reference Guide.
Even when synchronous mode is active and optimistic locking is in place, there is no guarantee that all clients will always read the freshest data; that is not possible without pessimistically locking the data being read. Update propagation is designed as a convenient and efficient trade-off: it minimizes the optimistic locking conflicts that occur and satisfies the consistency requirements of many clients.
When update propagation is in effect, the remote merging process increases the number of updates to each cache substantially, because each cache is updated once for every transaction in the system. The default cache locking policy allows concurrent reading and writing in order to optimize cache access, but this may be changed to ensure safer cache updates during propagation. To change the cache isolation level so that the cache is locked during updates, supply a customization class that sets the cache isolation level on the login. For the available isolation options, refer to "Cache Isolation" in the Oracle9iAS TopLink Foundation Library Guide. For example:
public void afterLoginCustomization(Session session) throws Exception {
    // Lock the cache during updates so that concurrent reads see consistent state
    session.getLogin().setCacheTransactionIsolation(DatabaseLogin.SYNCHRONIZED_READ_ON_WRITE);
}
Copyright © 2002 Oracle Corporation. All Rights Reserved.