Oracle9iAS TopLink Foundation Library Guide
Release 2 (9.0.3)

Part Number B10064-01

2
Developing Enterprise Applications

An enterprise application is an application that is designed to provide services to a broad range of users across an entire business. This chapter describes how to develop enterprise applications using TopLink, and discusses the TopLink sessions and services that support three-tier and enterprise architectures.

This chapter also illustrates some of the TopLink features that enable it to integrate with industry-leading enterprise application servers.

Three-tier and enterprise applications

Three-tier applications are an extension of the client/server paradigm that separates an application into three tiers instead of two: the client, the application server, and the database server. This model allows application logic to be performed on both the server and client tiers, and is scalable to Internet deployment.

An enterprise application is one that integrates multiple heterogeneous systems. An enterprise application may need to integrate with multiple database servers, legacy applications, or mainframe applications. It may also be required to support multiple heterogeneous clients, such as RMI, HTML, XML, CORBA, DCOM, or telephony. The three-tier model allows complex enterprise applications to be built by integrating with other systems in the application server tier. There are many different types of enterprise architectures.

TopLink can be used in any enterprise architecture that makes use of Java. TopLink has direct support for many different enterprise architectures and application server features. TopLink is not an application server, but it provides application server components. TopLink can also be used in a Java client and in a Java-enabled database server.

TopLink is certified 100% pure Java and can be used in any Java VM.

Table 2-1 lists the features that TopLink supports for various enterprise architectures. This table can be used to determine the relevant TopLink features for your application's architecture.

Table 2-1 TopLink's features for enterprise architectures  
Architecture TopLink Features

HTML, servlets, JSPs

  • Client Server sessions

  • Session Manager

RMI

  • Client Server sessions

  • Remote sessions

  • Session Manager

CORBA

  • Client Server sessions

  • Remote sessions

  • Session Manager

EJB Session Beans

  • Client Server sessions

  • Remote sessions

  • Session Manager

  • JTS and external connection pooling support

EJB Entity Beans

  • TopLink for Java Foundation Library bean-managed persistence

  • JTS and external connection pooling support

  • TopLink CMP for BEA WebLogic Server

  • TopLink CMP for IBM WebSphere Server

Java Transaction Service (JTS)

  • JTS and external connection pooling support

Multiple databases

  • Session Broker

  • JTS support

Multiple Application Servers (clustering)

  • Distributed cache synchronization

Java supporting databases

  • Oracle9i support

XML

  • TopLink SDK for XML

Enterprise Information System (EIS) access

(non-relational/legacy databases)

  • TopLink SDK for EIS

Client and server sessions

Client and server sessions provide the ability for multiple clients to share persistent resources. They provide a shared live object cache, read and write connection pooling, parameterized named queries, and shared descriptor metadata. Client and server sessions should be used in any application server architecture that supports shared memory and is required to support multiple clients.

Both the client and server sessions reside on the server. Clients can communicate through any communication mechanism to the application server. On the application server the client always communicates with a client session that in turn communicates to the database through the server session. Figure 2-1 shows how the client and server sessions are used. Client and server sessions are independent of the communications mechanism and should be used in architectures including HTML, Servlet, JSP, RMI, CORBA, DCOM and EJB.

Figure 2-1 Client and Server session usage


For a client to read objects from the database, it must acquire a ClientSession from the ServerSession or Server interface. This allows all client sessions to use the same shared object cache of the server session.

For a client to write objects to the database, it must acquire a ClientSession from the ServerSession or Server interface, and then acquire a UnitOfWork within that client session. The unit of work acts as an exclusive transactional object space. The unit of work ensures that any changes committed to the database through the unit of work are reflected in the server session's shared cache.

The server session or Server acts as the session manager for the three-tiered clients. The client session acts as a normal database session that is exclusive to each client or request.

For the most part, client sessions are not used any differently than a normal TopLink database session. The client session supports all of the query protocols that the database session supports.

Client sessions have two restrictions that are required to allow a shared object cache.

Users who have special security access (such as managers accessing salary information) cannot share the same cache as users who do not have access to that information. If multiple security levels exist, then a different server session must be used for each security level. Alternatively, non-shared database sessions could be used for each user with special security access.

Client sessions

A client session represents the dialog of one client with the server. The client session's lifecycle should mirror the lifecycle of the client. In a stateful three-tier model, the client session should exist until the client disconnects from the application server. In a stateless three-tier model, the client session should exist for the duration of one client request to the server. The client has exclusive access to the client session and should call the release() method on the client session object when it disconnects from the server. If notification of a disconnect cannot be guaranteed, the application server should time-out the connection to the client and force the client session to be released. If the client session is garbage collected, it releases itself automatically.
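In a stateless model, the acquire-and-release pattern might look like the following sketch, where serverSession is assumed to be the application's ServerSession and Employee is an illustrative domain class:

// Acquire a client session for this request and release it when the work is done.
ClientSession clientSession = serverSession.acquireClientSession();
try {
   // perform the read work for this request
   Vector employees = clientSession.readAllObjects(Employee.class);
   // ... build the response from the objects read ...
} finally {
   // release the client session when the request completes
   clientSession.release();
}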

Client sessions have many of the same properties as normal database sessions, but cannot use the following session properties:

transactions

Client sessions should not explicitly begin transactions, but instead should leverage the TopLink unit of work.

schema creation

Client sessions should not use the SchemaManager.

adding descriptors

Client sessions cannot add descriptors.

write or delete

Client sessions should not explicitly write or delete from the database. The client must acquire a unit of work (see Chapter 6, "Performance Optimization") to be able to modify the database.

Server sessions

The server session manages the client sessions, the shared cache, and the connection pools. Although the server session is a TopLink session, it should be used only to manage the server's client sessions. For this purpose, the Server interface is provided. The Server interface does not implement the session API, but only the public API required of the server session, such as configuring connection pools and acquiring client sessions.

Servers can create new client sessions using the acquireClientSession() method.

Caching database information on the server

The data returned when a client reads an object is automatically cached on the server. This allows all client sessions to share a single cache stored in the server session's identity maps.

Ideally the SoftCacheWeakIdentityMap should be used. This identity map guarantees object identity. Because it uses weak references, it does not in itself impose memory requirements on the server. The SoftCacheWeakIdentityMap is available only if your VM supports the Java 2 API.

If the virtual machine (VM) being used does not implement the Java 2 API, then both the FullIdentityMap and CacheIdentityMap could be used. When using a full identity map, a reference is kept for all of the objects read in by all of the clients even after the reference is no longer needed. This imposes memory requirements on the server.

A possible solution to this problem is for the server system to periodically instruct TopLink to flush the cache. This can be done on a per instance or class basis, or for the identity map as a whole.

Another solution would be to use a cache identity map with a very large cache size. Objects that have been in the cache for a long period of time are eventually discarded. Note that this may lead to a loss of object identity. It is the responsibility of the server application to make sure that this does not occur by removing unnecessary references to objects in memory. Optimistic locking can also be used with a cache identity map to ensure that objects written to the database are not in an invalid state.
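The following sketch shows how an identity map might be configured on a descriptor and how the cache for a class might be flushed. The Employee class is illustrative, the identity map settings are normally made before the session is logged in, and the exact method names should be verified against the TopLink JavaDocs:

Descriptor employeeDescriptor = session.getDescriptor(Employee.class);

// Use the soft cache weak identity map on Java 2 VMs...
employeeDescriptor.useSoftCacheWeakIdentityMap();
// ...or a bounded cache identity map on older VMs.
// employeeDescriptor.useCacheIdentityMap();
employeeDescriptor.setIdentityMapSize(500);

// Periodically flush the cached Employee instances if memory becomes an issue.
session.initializeIdentityMap(Employee.class);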

Providing client read access

Once the client acquires a client session, it can send read requests to the server. If the server can satisfy the read request with information from its object cache, it returns the information to the client. If the server cannot satisfy the request from its cache, it reads from the database and stores the information in its cache. Subsequent requests for that information return information from the fast object cache instead of performing resource-intensive database operations.

This server structure allows for all clients and client sessions to share the same object cache and the same database connection pool for reading. The server should deal with each client request in a separate thread so that the database connection pool can be used concurrently by multiple clients.

Figure 2-2 illustrates how multiple clients can read from the database using the server session.

Figure 2-2 Multiple client sessions reading the database using the server session


To read objects from the database:
  1. Create a ServerSession object and call login() on it. This should be done only once, when the application server starts.

  2. Acquire a ClientSession from the ServerSession by calling acquireClientSession().

  3. Execute read operations on the ClientSession object.

You should never use the ServerSession object for reading objects from the database.
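A minimal sketch of these steps follows; the EmployeeProject and Employee classes are illustrative only:

// Done once, when the application server starts.
Server serverSession = new EmployeeProject().createServerSession();
serverSession.login();

// Done for each client or request.
ClientSession clientSession = serverSession.acquireClientSession();

// Execute read operations against the client session.
Vector employees = clientSession.readAllObjects(Employee.class);
Employee smith = (Employee) clientSession.readObject(Employee.class,
   new ExpressionBuilder().get("lastName").equal("Smith"));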

Providing client write access

When the client wants to write to the database, it must acquire its own object transaction space. This is because the client and server sessions allow all clients to share the same object cache and the same objects (see Figure 2-3).

The client session disables all database modification methods so that objects cannot be written or deleted. The client must obtain a unit of work from the client session to perform database modification.


Caution:

While client sessions are thread-safe, they should not be used to write across multiple threads. Multi-thread writes from the same client session can result in errors being thrown and a loss of data.


The unit of work ensures that objects are edited under a separate object transaction space. This allows clients to perform object transactions in parallel. Once completed, the unit of work performs the changes in the database and then merges all of the changes into the shared TopLink cache in the session to make the changes available to all other users. Refer to Chapter 1, "Working with Database Sessions" for more information on unit of work.

Figure 2-3 Writing with client sessions and server sessions


To write to the database:
  1. Create a ServerSession object and call login() on it (this should be done only once, when the application server starts).

  2. Call acquireClientSession()to acquire a ClientSession object from the ServerSession.

  3. Acquire a UnitOfWork object from the ClientSession object. Refer to Chapter 6 for more information on unit of work.

  4. Perform any updates that are required, then commit the UnitOfWork.
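Continuing the reading sketch, the write sequence might look like the following; the Employee class and its salary accessors are illustrative only:

// Acquire a client session and a unit of work for this request.
ClientSession clientSession = serverSession.acquireClientSession();
UnitOfWork uow = clientSession.acquireUnitOfWork();

// Register the object to obtain a working clone, modify the clone, then commit.
Employee employee = (Employee) clientSession.readObject(Employee.class);
Employee employeeClone = (Employee) uow.registerObject(employee);
employeeClone.setSalary(employeeClone.getSalary() + 1000);
uow.commit();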

Concurrency

To have concurrent clients logged in at the same time, the server must spawn a dedicated thread of execution for each client. The RMI and CORBA application servers do this automatically. Dedicated threads enable each client to perform its desired work without having to wait for the completion of other clients. TopLink ensures that these threads do not interfere with each other when they make changes to the identity map or perform database transactions.

TopLink addresses thread safety issues by using a concurrency manager for all of its critical components. The concurrency manager ensures that no two threads interfere with each other when altering critical data. Concurrency management is applied to crucial functions that include updating the cache when creating new objects, performing a transaction in the database, and accessing value holders.

Connection pooling

Connection pooling allows for the number of connections used by the server and client sessions to be managed and shared among multiple clients. This reduces the number of connections required by the application server, allowing for a larger number of clients to be supported.

Multiple connections can also be allocated for reading. Although a single connection can support multiple threads reading asynchronously, some JDBC drivers may perform better when multiple connections are allocated. If multiple connections are used for reading, TopLink balances the load across all of the connections using a least-busy algorithm.

By default, TopLink uses connection pools to manage the connections between client and server sessions.

The default number of connections is fairly low to maintain compatibility with JDBC drivers that do not support many connections. A larger number of connections should be used for both reading and writing if supported by the JDBC driver.

Some JDBC drivers do not support concurrent access, so a thread may require exclusive access to a JDBC connection when reading. In these cases, the server session should be configured to use exclusive read connection pooling.

The server session also supports multiple write connection pools and non-pooled connections. If your application server or JDBC driver also supports connection pooling, the server session can be configured to integrate with this connection pooling.
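The following sketch shows one way the pools might be sized when the server session is created. The pool sizes are arbitrary, and the configuration methods shown (such as createServerSession(min, max) and useExclusiveReadConnectionPool) should be verified against the ServerSession JavaDocs for your release:

// Create the server session with a write connection pool of 5 to 10 connections.
ServerSession serverSession = (ServerSession) project.createServerSession(5, 10);

// Use an exclusive read connection pool for JDBC drivers that do not support
// concurrent reads on a single connection.
serverSession.useExclusiveReadConnectionPool(3, 6);

serverSession.login();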

ServerSession connection options

The server session contains a pool of read connections and a pool of write connections that the client sessions may use. The number and behavior of each can be customized using methods on ServerSession; see the TopLink JavaDocs for the complete list.

Connection options

TopLink provides a connection policy object that allows the application to customize the way connections are used within a server session object.

ClientSession connection options

There are four ways of getting connections from within a ClientSession object; these correspond to the four acquireClientSession() methods on the ServerSession.

These methods use a lazy database connection by default, which means that the connection is not allocated until a UnitOfWork is committed to the database. If you do not want a lazy database connection, but instead require that the database connection be established immediately, you must acquire a ClientSession by passing a ConnectionPolicy object as an argument to acquireClientSession(). This allows you to use any of the connection options (by setting up the ConnectionPolicy object appropriately) and also lets you specify whether a lazy connection is used.
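For example, a non-lazy connection might be requested as in the following sketch; the setLazy method name is an assumption, so check the ConnectionPolicy JavaDocs for the exact API:

// Acquire a client session whose write connection is allocated immediately
// rather than lazily (setLazy is an assumed ConnectionPolicy method).
ConnectionPolicy policy = new ConnectionPolicy();
policy.setLazy(false);
ClientSession clientSession = serverSession.acquireClientSession(policy);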

Connection policies

The ConnectionPolicy class provides methods for configuring how a client connection is acquired and used.

Reference

Table 2-2 and Table 2-3 summarize the most common public methods for client and server session:

For a complete description of all available methods for client and server session, see the TopLink JavaDocs.

Table 2-2 Elements for ClientSession  
Element Default Method Names

Executing a query object *

executeQuery(DatabaseQuery query)

Reading from the database *

not applicable

readAllObjects(Class domainClass, Expression expression)
readObject(Class domainClass, Expression expression)

Release *

release()

Unit of work *

not applicable

acquireUnitOfWork()
* Required property
Table 2-3 Elements for ServerSession  
Element Default Method Names

Acquire Client Sessions

not applicable

acquireClientSession()

Logging

no logging

logMessages()

Login / logout

not applicable

login()
logout()

Remote sessions

A remote session is a session that, unlike other sessions, resides on the client and communicates with a server session on the server. Remote sessions handle object identity, proxies, and the communication between the client and server layers.

Figure 2-4 shows the TopLink client/server split. Much of the application logic runs on the client. The middle, dotted layer is implemented by TopLink and the application interacts with the remote session.

Figure 2-4 A model of a remote session for a three-tier application


The remote session can interact with a database session or a client session (see Figure 2-5). This set-up is done on the server side, by the user. Interaction between the remote session and a database session is not very useful in a distributed environment, because only a single user can interact with the database. However, if the remote session interacts with a client session, then multiple remote sessions can interact with a single database. The remote session can also reap the benefits of connection pooling.

Figure 2-5 The remote session and a database or a client session


Architectural overview

The model consists of the application layer, the transport layer, and the server layer (see Figure 2-6).

The request from the client application to the server travels down through the layers of the distributed system. A client making a request to the server session actually makes use of the remote session as a conduit to the server session. The client holds a reference to a remote session. If necessary, the remote session forwards a request to the server session via the transport and server layers.

Figure 2-6 An architectural overview of the remote session


Application layer

The application layer consists of the application and the remote session. The remote session is a subclass of the session. The remote session handles all the public protocols of the session, giving the appearance of working with the local database session.

The remote session maintains its own identity map and a hash table of all the descriptors read from the server. If the remote session is able to handle a request by itself, the request is not passed to the server. For example, a request to read an object that has already been read is processed by the remote session. However, if the object is being read for the first time, the request is passed to the server session.

The remote session interacts with the transport layer through a remote connection.

Transport layer

The transport layer is responsible for carrying the semantics of the invocation. It is a broker-dependent layer that hides all of the broker-related dependencies from the application and server layers.

It consists of a remote connection, which is an abstract entity. All requests to the server are forwarded through the remote connection. Each remote session holds a single remote connection. The remote connection marshals and unmarshals all requests and responses on the client side.

In an RMI system, the remote connection interacts with an RMI stub/skeleton layer to talk to the server layer.

The remote session supports communication over RMI and CORBA. It includes deployment classes and stubs for RMI, WebLogic RMI, VisiBroker, OrbixWeb, WebLogic EJB, and Oracle9i EJB.

Server layer

The server layer consists of a remote session controller dispatcher, a remote session controller, and a session. The remote session controller dispatcher marshals and unmarshals all responses and requests from the server side. This is a server side component.

The remote session controller dispatcher is an interface between the session and transport layers. It hides the broker-specific transport layer from the session.

Accessibility issues

The accessibility of a server running on a remote machine is a sensitive issue because the security of the server is very important. In such an environment, registering a remote session controller dispatcher as a service can be detrimental, because anyone can get access to the service and therefore to the entire database. The recommended set-up is to run a server manager as a service that holds the remote session controller dispatcher. All clients talk to the server manager, and it implements the security model for accessing the remote session controller dispatcher.

On the client side, the user can get access to the server manager because it is a public service running on the server. Once the client gets access to the server manager, it can ask for the remote session controller dispatcher. The manager returns one if the client satisfies the security model built into the server manager.

A remote connection is then created using the remote session controller dispatcher on the client side. Once the connection is created, the remote session is acquired from the remote connection. The API for the remote session is the same as for the session. For the user, there is no difference between working with a session or a remote session.

The remote session performs much of its processing locally to minimize its interaction with the server session. It maintains an identity map to preserve the identity of objects. At runtime, the remote session builds its knowledge base by reading descriptors and mappings from the server side only when they are needed. These descriptors and mappings are lightweight because not all of the information is passed on to the remote session. Only the information needed to traverse an object tree and to extract primary keys from a given object is passed with the mappings and descriptors.

Queries

Only read queries are publicly available on the client side. Object modification is done only through the unit of work.

Refreshing

Normal refresh calls on the remote session force database hits and possible cache updates, provided that the data was previously modified in the database. This can lead to poor performance, and may force refreshes on queries when they are not desired; for example, when the server session cache is known to be synchronized with the database.

Refresh operations against the server session cache are supported on the remote session. The descriptor can be configured to always remotely refresh the objects in the cache on all queries. This ensures that all queries against the remote session refresh the objects from the server session cache, without accessing the database.

Cache hits on remote sessions still occur on read object queries based on the primary keys. If these are not desired, the remote session cache hits on read object queries based on the primary key can be disabled.

Example 2-1 Remote session refreshes on the server session cache

// Remote session begin transaction
remoteSession.beginTransaction();

// Get the PolicyHolder descriptor
Descriptor holderDescriptor = remoteSession.getDescriptor(PolicyHolder.class);

// Set refresh on the server session cache
holderDescriptor.alwaysRefreshCacheOnRemote();

// Disable remote cache hits, ensure all queries go to the server session cache
holderDescriptor.disableCacheHitsOnRemote();

Indirection

Indirection objects are supported on the remote session. This is a special kind of value holder that can be invoked remotely on the client side. When invoked, the value holder first checks to see if the requested object exists on the remote session. If not, then the associated value holder on the server is instantiated to get the value that is then passed back to the client. Remote value holders are used automatically; the application's code does not change.

Cursored streams

Cursored streams are supported remotely and are used in the same way as on the server.


Note:

Scrollable cursors are not currently supported for remote sessions.


Unit of work

All object modifications must be done through the unit of work that is acquired from the remote session. For the user, this unit of work is the same as a normal unit of work acquired from the client session or the database session.
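For example, a minimal sketch, where Employee is an illustrative domain class:

// Acquire a unit of work from the remote session and commit changes through it.
UnitOfWork uow = remoteSession.acquireUnitOfWork();
Employee employee = (Employee) remoteSession.readObject(Employee.class);
Employee employeeClone = (Employee) uow.registerObject(employee);
// ... modify the clone ...
uow.commit();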

Creating a remote connection using RMIConnection

The goal of the following example is to create a remote TopLink session on a client that communicates with a remote session controller on a server using RMI. Once the connection has been created, the client application can use the remote session as it would any other TopLink session.

We will assume we have created an object on the server called RMIServerManager (not part of TopLink). This class has a method that instantiates and returns an RMIRemoteSessionController (a TopLink server-side interface).

The following client-side code gets a reference to our RMIServerManager and then uses this to get the RMIRemoteSessionController running on the server. The reference to the session controller is then used in creating our RMIConnection from which we get a remote session.

Example 2-2 Client acquiring RMIRemoteSessionController from Server

RMIServerManager serverManager = null;

// Set the client security manager
try {
   System.setSecurityManager(new RMISecurityManager());
} catch (Exception exception) {
   System.out.println("Security violation " + exception.toString());
}

// Get the remote factory object from the Registry
try {
   serverManager = (RMIServerManager) Naming.lookup("SERVER-MANAGER");
} catch (Exception exception) {
   System.out.println("Lookup failed " + exception.toString());
}

// Start RMIRemoteSession on the server and create an RMIConnection
RMIConnection rmiConnection = null;
try {
   rmiConnection = new RMIConnection(serverManager.createRemoteSessionController());
} catch (RemoteException exception) {
   System.out.println("Error in invocation " + exception.toString());
}

// Create a remote session which we can then use as a normal TopLink Session
Session session = rmiConnection.createRemoteSession();

The following code is used by RMIServerManager to create and return an instance of an RMIRemoteSessionController to the client. The controller sits between the remote client and the local TopLink session.

Example 2-3 Server creating RMIRemoteSessionController for Client

RMIRemoteSessionController controller = null;
try {
   // Create an instance of RMIRemoteSessionControllerDispatcher, which implements
   // RMIRemoteSessionController. The constructor takes a TopLink session as a parameter.
   controller = new RMIRemoteSessionControllerDispatcher(localTOPLinkSession);
} catch (RemoteException exception) {
   System.out.println("Error in invocation " + exception.toString());
}
return controller;

Session broker

The session broker is the mechanism provided by TopLink for multiple database access. Using the session broker, you can store the objects within an application on multiple databases.

The session broker:

Two-phase/two-stage commits

A two-phase commit is supported through integration with a compliant JTS driver (refer to the section "Java Transaction Service (JTS)" for more details). A true two-phase commit is guaranteed to entirely pass or entirely fail even if a failure occurs during the commit.

If there is no integration with a JTS driver, the broker uses a two-stage commit algorithm. A two-stage commit differs slightly from a two-phase commit. The two-stage commit performed by the session broker is guaranteed except for failure during the final commit of the transaction, after the SQL statement has been successfully executed.

Using the session broker

After the session broker is set up and logged in, you interact with it just as you would with a regular session, making the multiple database access transparent. However, creating and configuring a SessionBroker is slightly more involved than creating a regular DatabaseSession.

Before using the SessionBroker, the sessions must be registered with it. To register a session with a SessionBroker, use the registerSession(String name, Session session) method. Before registration, all of the session's descriptors must have already been added to the session but not yet initialized. The sessions should not yet be logged in, as the session broker logs them in.

Example 2-4 Setting up two sessions with a session broker

Project p1 = ProjectReader.read("C:\\Test\\Test1.project");
Project p2 = ProjectReader.read("C:\\Test\\Test2.project");

// modify the user name and password if they are not correct in the .project file
p1.getLogin().setUserName("User1");
p1.getLogin().setPassword("password1");
p2.getLogin().setUserName("User2");
p2.getLogin().setPassword("password2");
DatabaseSession session1 = p1.createDatabaseSession();
DatabaseSession session2 = p2.createDatabaseSession();

SessionBroker broker = new SessionBroker();
broker.registerSession("broker1", session1);
broker.registerSession("broker2", session2);

broker.login();

When the login method is performed on the session broker, both sessions are logged in and the descriptors in both sessions are initialized. After login, the session broker is treated like a regular session. TopLink handles the multiple database access transparently.

Example 2-5 Writing to the database using the session broker

UnitOfWork uow = broker.acquireUnitOfWork();
Test test = (Test) broker.readObject(Test.class);
Test testClone = (Test) uow.registerObject(test);
. . .
//change and manipulate the clone and any of its references
. . .
uow.commit();

//log out when finished
broker.logout();

Using the session broker in a three-tier architecture

Using the session broker in a three-tier architecture is very similar to the way it is used in two-tier. However, the client sessions must also be registered with a SessionBroker. The ServerSessions are set up in a similar way.

Example 2-6 Setting up the session broker in a three-tier architecture

Project p1 = ProjectReader.read("C:\\Test\\Test1.project");
Project p2 = ProjectReader.read("C:\\Test\\Test2.project");

Server sSession1 = p1.createServerSession();
Server sSession2 = p2.createServerSession();

SessionBroker broker = new SessionBroker();
broker.registerSession("broker1", sSession1);
broker.registerSession("broker2", sSession2);
broker.login();

A client session can then be acquired from the server session broker, through the acquireClientSessionBroker() method.

Example 2-7 A sample client request code

Session clientBroker = broker.acquireClientSessionBroker();
return clientBroker;

Creating multiple projects in the Mapping Workbench

The session broker is designed to work with a project assigned to each session within the broker. There are a few ways to accomplish this in TopLink, but the following steps show the recommended approach.

  1. Map your entire object model as you would normally in one single project.

  2. Make a copy of the entire project, either by using the Save as menu option or by making a file copy.

  3. In one of the projects, deactivate all the descriptors that do not reside on the database for which this project is being built.


    Note:

    Some of the items may show up as having errors; you can ignore these errors.


  4. Repeat Step 3 for the other project. You should now have one project split into two.


    Note:

This example assumes that only two projects are used; however, the technique is identical for more than two projects.


Limitations

Using the session broker is not the same thing as linking databases at the database level. If your database allows linking, that is the recommended approach to providing multiple database access.

The session broker has the following limitations:

Advanced use

Many-to-many join tables and direct collection tables must be on the same database as the source object, because a join across both databases would be required on a read. However, it is possible to get around this by using the setSessionName(String sessionName) method on ManyToManyMapping and DirectCollectionMapping.

This method can be used to tell TopLink that the join table or direct collection table is on the same database as the target table.

Descriptor desc = session1.getDescriptor(Employee.class);
((ManyToManyMapping) desc.getObjectBuilder().getMappingForAttributeName("projects")).setSessionName("broker2");

A similar method exists on DatabaseQuery that is used mostly for data queries (that is, non-object queries).

Reference

Table 2-4 summarizes the most common public methods for SessionBroker:

For a complete description of all available methods for SessionBroker, see the TopLink JavaDocs.

Table 2-4 Elements for the SessionBroker  
Element Default Method Names

Writing objects

not applicable

acquireUnitOfWork()

Acquiring client sessions

not applicable

acquireClientSessionBroker()

Database connection

not applicable

login()
logout()

Java Transaction Service (JTS)

This section describes how TopLink for Java can be integrated with a transaction service satisfying the Java Transaction Service (JTS) API to participate in distributed transactions.

Review of transactions and transaction management

One of the important properties of databases is that transactions are atomic: a transaction either succeeds completely, or does not take effect at all. We get this automatically from most databases, but problems arise when we need to talk to more than one database at a time.

Consider the situation where we have bank accounts in two different databases. To transfer money from a checking account to a savings account, we want to withdraw money from an account in database A, and deposit it in an account in database B. We can use separate transactions for each database, but if a failure occurs on one database but not the other, then the balances will be incorrect. We need a single, unifying transaction that spans both databases.

Because updating information takes time, there is always a period during the transaction when the information is inconsistent; without a single coordinating transaction, updating multiple databases can leave the stored information inconsistent. A transaction can be described in more formal terms as a related set of operations with four properties, known by the acronym ACID:

Atomicity

All operations are considered as a unit, that is either all the operations complete, leaving the information in its consistent amended state (known as committing), or all the operations are undone, leaving the information in its original consistent state (known as rollback).

Consistency

The operations take the information held from one consistent state to another in a predictable fashion.

Isolation

The partially updated states of the information are not visible outside the transaction itself.

Durability

The outcome of the transaction is not reversed (partially or completely) after the transaction is completed.

Distributed transactions

When we described the banking transaction in the "Review of transactions and transaction management" , it was assumed that all of the information necessary to complete the transaction was available locally. However, there are many valid business reasons why information must be stored on different machines. Information may be distributed according to geography. For example, the Sales database may be divided into 'Northern Region' and 'Southern Region'. The information may be divided along departmental lines, with the Accounting department holding billing information while the Stock department holds inventory details. Whatever the reasons for distributing the information, the business user still requires that all of the ACID properties of 'regular' transactions are also true of distributed transactions.

Transaction managers

In a non-distributed transaction, it is up to the single database to ensure the ACID properties of a transaction. In a distributed transaction there has to be careful co-operation between the various resources; thus The Open Group (formerly X/Open) has defined a formal model for Distributed Transaction Processing (DTP) known as the three box model. This model recognizes that there are three distinct components in a distributed transaction.

Figure 2-7 The Open Group DTP processing model for distributed transactions


The application implements the business logic and does not have direct access to a database. Rather, it interacts with resource managers via a programming interface, typically SQL for relational databases. In addition, the application interacts with a transaction manager to begin and end a transaction. This sets up the transaction context within which all the components operate.

The resource managers have direct access to information and other database-specific resources. Typically a resource manager is a database, but it can be anything that is capable of transactional work (for example, a secure printer). The interface to the resource manager does not reveal any transaction details; rather, the resource manager interacts with the transaction manager to determine the current transaction context.

The transaction manager is dedicated to coordinating the activities of all of the participants in the transaction. It provides the TX interface so that applications can initiate transactional work. It co-ordinates the resource managers via the XA interface. The prime responsibility of the transaction manager is to guide the two-phase commit process that allows outstanding changes held by all the resource managers to be properly written to backing-store.

Two-phase commit with presumed rollback

The two-phase commit with presumed rollback model (2-PC) allows resource managers to make temporary changes during the transaction so that they can be applied at the end of the transaction (committed) or undone (rolled back). During the transaction there is no ambiguity if a failure occurs -- all temporary changes are undone. When the transaction is committed by the application, then the temporary changes are made permanent in two phases.

In phase one, each resource (represented by a resource manager) is told to prepare. At this stage it must store in a secure way the changes it is about to make together with a secure record of its action. If it fails to do this, then it must vote rollback. If it succeeds in securing its records, then it must vote commit and wait for the final decision of the transaction manager. Once a resource has voted to commit, it gives up the right to rollback.

When all resources have voted on the outcome of the transaction or a failure has occurred, the transaction manager decides the final outcome of the transaction.

In phase two, each resource is told to commit. The resources must then make their temporary changes permanent and forget the record of their action made when they voted to commit. Once they have forgotten the secure record of the transaction, they can report done to the transaction manager. When the transaction manager has received done from all participants, it can forget the secure record of the transaction in its turn, and the transaction is complete.

Relationship between OMG Object Transaction Service (OTS) and Java Transaction Service (JTS)

The OMG Object Transaction Service defines interfaces that allow multiple, distributed objects to provide and participate in distributed ACID transactions. It is upon this specification that the Java Transaction Service (JTS) is based.

JTS transaction synchronization

Transaction synchronization allows interested parties to get notification from the transaction manager about the progress of the commit. For each transaction started, an application may register a javax.transaction.Synchronization callback object, which implements the beforeCompletion() and afterCompletion(int status) methods.

The Synchronization interface described can be thought of as a lightweight 'listener' to the lifecycle of the global external transaction. It is through this interface that TopLink can participate in a global external transaction by registering a Synchronization callback object for a unit of work.

TopLink unit of work and the synchronization interface

The TopLink Session must be configured with an instance of a class that implements the ExternalTransactionController interface (from package oracle.toplink.sessions).

TopLink includes an external transaction controller for JTS 0.95. This controller is also compatible with JTS and JTA up to and including the JTA 1.0.1 specification. The controllers included with TopLink are found in the oracle.toplink.jts and oracle.toplink.jts.wls packages. These packages include generic JTS Listener and Manager classes, as well as classes that specifically support a number of databases and application servers.

If your JTS driver is not compatible with these versions, you can build your own implementation of the ExternalTransactionController interface.

When using the JTS transaction controller, the transaction manager must be set in the JTSSynchronizationListener class. The transaction manager is required to give TopLink access to the global JTS transaction. Unfortunately, there is no standard way to access the transaction manager, so you must consult your JTS driver documentation to determine how to access it. When using the WebLogic JTS controller, this is not required.

Example 2-8 Configuring ExternalTransactionController on the TopLink session

... (appropriate import stmts)
Project project = Project.read("C:\\myDir\\myProj.project");
// login specifics (database URL, etc.) come from the project
DatabaseLogin login = project.getLogin();
/* Set external behaviours: connection pooling, transaction management,
   and the transaction controller. Must be done before the Session is created. */
login.useExternalTransactionController();
login.useExternalConnectionPooling();
ServerSession session = project.createServerSession();
// The transaction manager must be set
JTSSynchronizationListener.setTransactionManager(jtsTransactionManager);
session.setExternalTransactionController(new JTSExternalTransactionController());
...

Writing to a database in a three-tier environment

Use a Unit of Work to write to a database that uses JTS externally-controlled transactions. To do this successfully, however, you must ensure that there is only one unit of work associated with a given transaction. To do so, check for a transaction and associated unit of work as follows:

UnitOfWork uow = serverSession.getActiveUnitOfWork(); 

getActiveUnitOfWork() returns the unit of work associated with the active external transaction, or null if there is none; in that case, a new unit of work should be acquired with acquireUnitOfWork().
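A minimal sketch of this check, assuming serverSession is the session configured with the external transaction controller:

UnitOfWork uow = serverSession.getActiveUnitOfWork();
if (uow == null) {
   // no unit of work is associated with a global transaction; acquire one normally
   uow = serverSession.acquireUnitOfWork();
}
// register and modify clones through the unit of work, then commit as usual
uow.commit();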

External connection pools and external transaction control

From Example 2-8, we can see that in addition to providing an ExternalTransactionController for the Session, the DatabaseLogin needs two additional properties configured: useExternalConnectionPooling() and useExternalTransactionController().

A user acquires a UnitOfWork from the TopLink session using the standard API acquireUnitOfWork(). Within acquireUnitOfWork(), registration of a Synchronization object with the current transaction is delegated to the external transaction controller. If no global external transaction exists, the unit of work begins its own JTS transaction. In this case, if the unit of work is committed, it also commits the JTS transaction that it began.

The user manipulates the UnitOfWork in the usual fashion, registering objects and altering clone copies (see "Using units of work" ). At this point, there are two scenarios to consider.

Scenario 1

The user calls uow.commit() before the completion of the global external transaction; that is, neither Synchronization callback has yet occurred (see Figure 2-8).

Figure 2-8 External transaction exists when UnitOfWork is called


At uow.commit() time, a flag is set in the UnitOfWork indicating that a merge is pending. In the beforeCompletion callback, the appropriate SQL is sent to the database; if during this operation an OptimisticLockException (or some other RuntimeException) is thrown, the UnitOfWork is marked 'dead' and the global external transaction is rolled back using the standard JTS APIs.

If the afterCompletion callback indicates success, the clones are merged with the TopLink Session. If the afterCompletion callback indicates failure (and possibly the beforeCompletion callback is not even invoked), the merge is not done and the UnitOfWork is released.

Scenario 2

No global external transaction exists when the user acquires a unit of work (see Figure 2-9).

Figure 2-9 No external transaction exists when UnitOfWork is called


In this case, the beforeCompletion callback or the afterCompletion callback causes the unit of work to commit, and if successful, the afterCompletion callback causes the unit of work to merge its changes into the session cache. If the JTS transaction fails or is rolled back, the unit of work is released.

Table 2-5 Public API for JTS  
Class or Interface API
oracle.toplink.sessions.DatabaseSession (I) 

ExternalTransactionController 
getExternalTransactionController() 
setExternalTransactionController( 
    ExternalTransactionController etc)
oracle.toplink.sessions.DatabaseLogin (C) 

dontUseExternalConnectionPooling()
useExternalConnectionPooling()
usesExternalConnectionPooling()
dontUseExternalTransactionController()
useExternalTransactionController() 
usesExternalTransactionController()

Extending TopLink's JTS capabilities

Because the JTS specification is relatively new, vendors have implemented their JTS services against a changing backdrop: the JTS specification itself. To accommodate this, TopLink's JTS integration is flexible enough to allow for local modifications.

An example of an implementation of a JTS external transaction controller is found in the package oracle.toplink.jts. Different concrete implementations of AbstractSynchronizationListener are needed because the JTS specification has been changing. A vendor-specific implementation suitable for BEA WebLogic's JTS implementation is found in the package oracle.toplink.jts.wls.

In the package oracle.toplink.jts, two abstract classes form the basis of any local modifications:

Table 2-6 Public API for JTS local modifications  
Class or Interface API
oracle.toplink.jts.AbstractExternalTransactionController (A) 

register(UnitOfWork uow, 
  Session session) throws Exception
oracle.toplink.jts.AbstractSynchronizationListener (A) 

rollbackGlobalTransaction()
boolean wasTransactionCommitted(
  int status)

Extensions to TopLink's JTS capabilities are thus always a pair of concrete classes that extend the classes named above. A subclass of AbstractExternalTransactionController must implement the abstract methods from Table 2-6.

The register method performs a simple function: it delegates the call. It must invoke the static register method on the specific subclass of AbstractSynchronizationListener that is paired with the controller class. For example, the JTSExternalTransactionController implements register as follows:

public void register(UnitOfWork uow, SynchronizationListener sl, Session session) throws Exception {
   JTSSynchronizationListener.register(uow, sl, session);
}

A subclass of AbstractSynchronizationListener must implement the two abstract methods from Table 2-6 as well as the static register method mentioned above.

Abstract methods of AbstractSynchronizationListener requiring concrete implementation for local JTS modifications

/** This method must be re-written for the concrete implementations of
 *  XXXSynchronizationListener, as the various revisions of JTS that vendors have
 *  written their JTS implementations against have different ways of referring
 *  to/dealing with the 'Transaction' object.
 */
public abstract void rollbackGlobalTransaction();

/** Examine the status flag to see if the transaction committed. This method
 *  must be re-written for the concrete implementations of
 *  XXXSynchronizationListener, as the various revisions of JTS that vendors have
 *  written their JTS implementations against have different status codes.
 */
public abstract boolean wasTransactionCommited(int status);

For example, the JTSSynchronizationListener implements register as follows:

Prototypical implementation of register for JTS service

...
import javax.transaction.*;
...
public static void register(UnitOfWork uow, SynchronizationListener sl, Session session) throws Exception {
   Transaction tx = tm.getTransaction();
   JTSSynchronizationListener jsl = new JTSSynchronizationListener(uow, sl, session, tx);
   tx.registerSynchronization(jsl);
}

In the previous example implementation, the current global transaction is acquired from tm, a static variable local to JTSSynchronizationListener that must be set to an instance of a class that implements the javax.transaction.TransactionManager interface.

For the abstract methods, the JTSSynchronizationListener implements rollbackGlobalTransaction and wasTransactionCommited as follows:

Example 2-9 Example implementation of rollbackGlobalTransaction and wasTransactionCommited for JTS service

public void rollbackGlobalTransaction() {
   try {
      ((Transaction) globalTx).setRollbackOnly();
   } catch (SystemException se) {
   }
}

public boolean wasTransactionCommited(int status) {
   if (status == Status.STATUS_COMMITTED) return true;
   else return false;
}

In contrast, the WebLogicJTSSynchronization implements these methods as follows:

Example 2-10 Concrete implementation of register for WebLogic's JTS service

...
import weblogic.jts.common.*;
import weblogic.jts.internal.*;
import weblogic.jndi.*;
import javax.naming.*;
import java.util.*;
import java.io.*;
...
public static void register(UnitOfWork uow, Session session) throws Exception {
   Context ctx = null;
   Hashtable env = new Hashtable();
   env.put(Context.INITIAL_CONTEXT_FACTORY, WEBLOGIC_FACTORY);
   env.put(Context.PROVIDER_URL, providerUrl);
   // these statics are null by default; check if someone set them
   if (principal != null) env.put(Context.SECURITY_PRINCIPAL, principal);
   if (credentials != null) env.put(Context.SECURITY_CREDENTIALS, credentials);
   if (authentication != null) env.put(Context.SECURITY_AUTHENTICATION, authentication);
   ctx = new InitialContext(env);
   Current current = (Current) ctx.lookup("javax.jts.UserTransaction");
   WebLogicJTSSynchronization wjs = new WebLogicJTSSynchronization(uow, session, current);
   current.getControl().getCoordinator().registerSynchronization(wjs);
}

public void rollbackGlobalTransaction() {
   ((Current) globalTx).setRollbackOnly();
}

public boolean wasTransactionCommited(int status) {
   if (status == Synchronization.COMPLETION_COMMITTED) return true;
   else return false;
}

TopLink support for Java Data Objects (JDO)

TopLink provides an enterprise-proven architecture for the persistence of Java objects and JavaBeans to relational databases, object-relational databases and enterprise information systems. The TopLink architecture and API have evolved through over a decade of development and usage across many vertical markets, countries and applications. Included in this persistence architecture is support for Java Data Objects (JDO).

JDO is an API for transparent database access. The JDO architecture defines a standard API for data contained in local storage systems and heterogeneous enterprise information systems, such as ERP, mainframe transaction processing, and database systems. JDO enables programmers to create code in Java that transparently accesses the underlying data store without using database-specific code.

TopLink provides basic JDO support based on the JDO Proposed Final Draft 1.0 specification (for information on the specification, see the Sun Microsystems web site at java.sun.com).

TopLink's support for JDO includes much of the JDO API, but does not require classes to be enhanced or modified by a JDO reference enhancer, as some aspects of the JDO specification and other JDO products do.


Caution:

JDO is a persistence specification that is in the proposal stage. As such, it may undergo major changes in future editions, or even be abandoned altogether. As a result, a thorough and proper evaluation of JDO compared to other architectures supported by TopLink is strongly recommended before putting a JDO-based architecture into production.


Understanding the JDO API

The JDO API consists of four main interfaces:

PersistenceManagerFactory

A factory that generates PersistenceManagers. It has configuration and login API.

PersistenceManager

The main point of contact from the application. It provides API for accessing the transaction, executing queries, and managing the object life cycle (makePersistent, makeTransactional, deletePersistent).

Transaction

Defines basic begin, commit, rollback API.

Query

Defines API for configuring the query (filter, ordering, parameters, and variables) and for executing the query.

Figure 2-10 Understanding the JDO API


JDO implementation

TopLink implements the main JDO interfaces PersistenceManagerFactory, PersistenceManager, and Transaction. It extends the query functionality to include the complete TopLink query framework. The supported APIs are listed in the reference tables of their respective implementation class. JDO APIs that are not listed in the reference tables are not supported.

For more information on the TopLink query framework, see "Using the query framework" .

JDOPersistenceManagerFactory

The JDOPersistenceManagerFactory class implements the JDO PersistenceManagerFactory interface. This factory creates PersistenceManagers.

To create a JDOPersistenceManagerFactory, the constructor takes a session name string, a TopLink session, or a project. If the factory is constructed from a project, a new DatabaseSession is created and attached to the PersistenceManager each time one is obtained through the getPersistenceManager method.


Note:

DatabaseSession is typically used for single-threaded applications. ServerSession should be used for multi-threaded applications.


The PersistenceManager is not multi-threaded. For multi-threaded applications, each thread should have its own PersistenceManager. The JDOPersistenceManagerFactory should be constructed from a ServerSession, not a DatabaseSession or Project, to make use of the lighter-weight client sessions and more scalable connection pooling.

Creating a JDOPersistenceManagerFactory

The following code creates a factory from a TopLink session named "jdoSession" that is managed by the SessionManager. The SessionManager manages a singleton instance of a TopLink ServerSession or DatabaseSession named "jdoSession". Refer to the SessionManager documentation for more information.

JDOPersistenceManagerFactory factory = new JDOPersistenceManagerFactory("jdoSession");

// Create a persistence manager factory from an instance of TopLink ServerSession
// or DatabaseSession that is managed by the user.
ServerSession session = (ServerSession) project.createServerSession();
JDOPersistenceManagerFactory factory = new JDOPersistenceManagerFactory(session);

// Create a persistence manager factory with ties to a DatabaseSession that is
// created from a TopLink project.
JDOPersistenceManagerFactory factory = new JDOPersistenceManagerFactory(new EmployeeProject());
Obtaining PersistenceManager

New PersistenceManagers are created by calling the getPersistenceManager method. If the factory is constructed from a Project instance, it can also configure the user id and password using getPersistenceManager(String userid, String password).

Reference

Table 2-7 summarizes the most common public methods for PersistenceManagerFactory:

For a complete description of all available methods for PersistenceManagerFactory, see the TopLink JavaDocs.

Table 2-7 Elements for PersistenceManagerFactory

Element: Construct a factory from a session named "default" that is managed by SessionManager
Methods:
  JDOPersistenceManagerFactory()

Element: Construct a factory from a session name that is managed by SessionManager
Methods:
  JDOPersistenceManagerFactory(String sessionName)

Element: Construct a factory from a user session
Methods:
  JDOPersistenceManagerFactory(Session session)

Element: Construct a factory from a project
Methods:
  JDOPersistenceManagerFactory(Project project)

Element: Query mode that specifies whether cached instances are considered when evaluating the filter expression
Default: false
Methods:
  getIgnoreCache()
  setIgnoreCache(boolean ignoreCache)

Element: Transaction mode that allows instances to be read outside a transaction
Default: true
Methods:
  getNontransactionalRead()
  setNontransactionalRead(boolean nontransactionalRead)

Element: Connection settings; these settings are enabled only if the factory is constructed from a TopLink Project
Default: user name, password, URL, and driver from the project login
Methods:
  getConnectionUserName()
  setConnectionUserName(String userName)
  getConnectionPassword()
  setConnectionPassword(String password)
  getConnectionURL()
  setConnectionURL(String URL)
  getConnectionDriverName()
  setConnectionDriverName(String driverName)

Element: Access a PersistenceManager. The user id and password can be set only if the factory is constructed from a TopLink Project; otherwise, the default values are used.
Default: user id and password from the session login or project login
Methods:
  getPersistenceManager()
  getPersistenceManager(String userid, String password)

Element: Non-configurable properties
Methods:
  getProperties()

Element: Collection of supported option Strings
Methods:
  supportedOptions()

JDOPersistenceManager

The JDOPersistenceManager class implements the JDO PersistenceManager interface, the primary interface for JDO-aware application components. The JDOPersistenceManager is the factory for the Query interface and contains methods for accessing transactions and managing the life cycle of persistent instances. A JDOPersistenceManager instance can be obtained from the JDOPersistenceManagerFactory.

Inserting JDO objects

New JDO objects are made persistent using the makePersistent() or makePersistentAll() methods. If the user does not manually begin the transaction, TopLink will begin and commit the transaction when either makePersistent() or makePersistentAll() is invoked. Note that if the object is already persisted, calling these methods has no effect.

Example 2-11 Persist a new employee named Bob Smith

Server serverSession = new EmployeeProject().createServerSession();
PersistenceManagerFactory factory = new JDOPersistenceManagerFactory(serverSession);
PersistenceManager manager = factory.getPersistenceManager();
Employee employee = new Employee();
employee.setFirstName("Bob");
employee.setLastName("Smith");
manager.makePersistent(employee);

Updating JDO Objects

JDO objects are modified using a transactional instance. The object is modified within a transaction context by manually beginning and committing the transaction.

A transactional object is an object that is subject to the transaction boundary. Transactional objects can be obtained in several ways, including through getObjectById() (as shown in Example 2-12) and the TopLink extended getTransactionalObject() method.

The transactional-read query is a query that is executed when the nontransactionalRead flag of the current transaction is false. The current transaction is obtained from the PersistenceManager by calling currentTransaction().

Example 2-12 Update an employee

The following example illustrates how to add a new phone number to an employee object, modify its address and increase its salary by 10%.

Transaction transaction = manager.currentTransaction();
if (!transaction.isActive()) {
    transaction.begin();
}
// Get the transactional instance of the employee
Object id = manager.getTransactionalObjectId(employee);
Employee transactionalEmployee = (Employee) manager.getObjectById(id, false);
transactionalEmployee.getAddress().setCity("Ottawa");
transactionalEmployee.setSalary((int) (employee.getSalary() * 1.1));
transactionalEmployee.addPhoneNumber(new PhoneNumber("fax", "613", "3213452"));

transaction.commit();

Deleting Persistent Objects

JDO objects are deleted using either deletePersistent() or deletePersistentAll(). The objects can be transactional or non-transactional. If the user does not manually begin the transaction, TopLink will begin and commit the transaction when deletePersistent() or deletePersistentAll() is invoked.

It is important to understand that deleting objects using deletePersistent() or deletePersistentAll() is similar to deleting objects using UnitOfWork. When an object is deleted, its privately-owned parts are also deleted, because privately-owned parts cannot exist without their owner. At commit time, SQL is generated to delete the objects, taking database constraints into account. If an object is deleted, then the object model must take the deletion of that object into account. References to the object being deleted must be set to null or removed from the collection. Modifying references to the object is done through its transactional instance.

Example 2-13 Deleting a team leader from a project

Transaction transaction = manager.currentTransaction();
if (!transaction.isActive()) {
    transaction.begin();
}
Object id = manager.getTransactionalObjectId(project);
Project transactionalProject = (Project) manager.getObjectById(id, false);
Employee transactionalEmployee = transactionalProject.getTeamLeader();
// Remove the team leader from the project
transactionalProject.setTeamLeader(null);
// Remove the owner (the team leader) from the phone numbers
for (Enumeration enum = transactionalEmployee.getPhoneNumbers().elements(); enum.hasMoreElements();) {
    ((PhoneNumber) enum.nextElement()).setOwner(null);
}
manager.deletePersistent(transactionalEmployee);
transaction.commit();

Example 2-14 Deleting a Phone Number

Transaction transaction = manager.currentTransaction();
if (!transaction.isActive()) {
    transaction.begin();
}
Object id = manager.getTransactionalObjectId(phoneNumber);
PhoneNumber transactionalPhoneNumber = (PhoneNumber) manager.getObjectById(id, false);
transactionalPhoneNumber.getOwner().getPhoneNumbers().remove(transactionalPhoneNumber);
manager.deletePersistent(phoneNumber);
transaction.commit();

Obtaining Query

TopLink does not support the JDO query language but instead includes support within JDO for the more advanced TopLink query framework (for information on the TopLink query framework, see "Using the query framework"). A key difference is that, while the JDO query language requires results to be returned as a collection of candidate JDO instances (either a java.util.Collection or an Extent), the result type returned by the TopLink query framework depends on the type of query used. For example, if a ReadAllQuery is used, the result is a Vector.

The query factory is supported through the newQuery() methods listed in Table 2-8 and Table 2-9.

A ReadAllQuery is created with the Query instance by default, as the following sketch illustrates.
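The following sketch (assuming the Employee class from the demo model and an existing PersistenceManager named manager) shows that a query obtained without an expression defaults to a ReadAllQuery whose result is a Vector:

// Obtain a query for the Employee class; by default it wraps a TopLink ReadAllQuery
JDOQuery jdoQuery = (JDOQuery) manager.newQuery(Employee.class);
// Executing the default ReadAllQuery returns all employees as a Vector
Vector employees = (Vector) jdoQuery.execute();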

Reference

Table 2-8 and Table 2-9 summarize the most common public methods for the PersistenceManager API and the TopLink extended API:

For a complete description of all available methods for the PersistenceManager API and the TopLink extended API, see the TopLink JavaDocs.

Table 2-8 Elements for PersistenceManager API

Element: Release resources to allow garbage collection
Methods:
  close()

Element: Transaction
Methods:
  currentTransaction()

Element: Delete objects
Methods:
  deletePersistent(Object object)
  deletePersistentAll(Collection objects)
  deletePersistentAll(java.lang.Object[] objects)

Element: Mark objects as no longer needed in the cache
Methods:
  evict(Object object)
  evictAll()
  evictAll(Collection objects)
  evictAll(Object[] objects)

Element: Extent
Methods:
  getExtent(Class queryClass, boolean readSubclasses)

Element: Cache mode for queries
Default: the ignoreCache setting from the persistence manager factory
Methods:
  getIgnoreCache()
  setIgnoreCache(boolean ignoreCache)

Element: Obtain the transactional state of an object
Methods:
  getObjectById(Object object, boolean validate)
  getTransactionalObjectId(Object object)

Element: A PersistenceManager instance can be used until it is closed
Methods:
  isClosed()

Element: Insert objects
Methods:
  makePersistent(Object object)
  makePersistentAll(Collection objects)
  makePersistentAll(Object[] objects)

Element: Make objects subject to transactional boundaries by registering them with the UnitOfWork
Methods:
  makeTransactional(Object object)
  makeTransactionalAll(Collection objects)
  makeTransactionalAll(Object[] objects)

Element: Query factory
Methods:
  newQuery()
  newQuery(Class queryClass)

Element: Refresh objects
Methods:
  refresh(Object object)
  refreshAll()
  refreshAll(Collection objects)
  refreshAll(Object[] objects)

Table 2-9 Elements for TopLink extended API

Element: Obtain a transactional object
Methods:
  getTransactionalObject(Object object)

Element: Query factory
Methods:
  newQuery(Class queryClass, Expression expression)

Element: Read objects
Methods:
  readAllObjects(Class domainClass)
  readObject(Class domainClass, Expression expression)

JDOQuery

The JDOQuery class implements the JDO Query interface. It defines API for configuring the query (filter, ordering, parameters, and variables) and for executing the query. TopLink extends the query functionality to include the full TopLink query framework (for information on the TopLink query framework, see "Using the query framework" ). Users can customize the query to use advanced features such as batch reading, stored procedure calls, partial object reading, query by example, and so on. TopLink currently does not support the JDO query language, but users can use either SQL or EJBQL in the JDO Query interface. For more information on EJBQL support, see Chapter 4, "EJBQL Support".

Each JDOQuery instance is associated with a TopLink query. When the JDO Query is obtained from the PersistenceManager by calling a supported newQuery method, a new ReadAllQuery is created and associated with the query. The JDOQuery can reset its TopLink query to a specific type by calling asReadObjectQuery(), asReadAllQuery(), or asReportQuery().

Customizing the query using the TopLink Query Framework

Much of the TopLink query framework functionality is provided through the public API. In addition, users can build complex functionality into their queries by customizing them. Users can create a customized TopLink query and associate it with the JDOQuery by calling setQuery().

Using a customized TopLink query gives users the complete functionality of the TopLink query framework. One example of a customized query is a DirectReadQuery with custom SQL that reads the id column of the EMPLOYEE table (see Example 2-18).


Note:

TopLink extended APIs are configured for a specific TopLink query type. An exception may be thrown if methods are used with the wrong query type. See Table 2-11 for correct usage.


Example 2-15 Use a ReadAllQuery to read all employees who live in New York

Expression expression = new ExpressionBuilder().get("address").get("city").equal("New York");
Query query = manager.newQuery(Employee.class, expression);
Vector employees = (Vector) query.execute();

Example 2-16 Use a ReadObjectQuery to read the employee named Bob Smith

Expression exp1 = new ExpressionBuilder().get("firstName").equal("Bob");
Expression exp2 = new ExpressionBuilder().get("lastName").equal("Smith");
JDOQuery jdoQuery = (JDOQuery) manager.newQuery(Employee.class);
jdoQuery.asReadObjectQuery();
jdoQuery.setFilter(exp1.and(exp2));
Employee employee = (Employee) jdoQuery.execute();

Example 2-17 Use a ReportQuery to report employee's salary

JDOQuery jdoQuery = (JDOQuery) manager.newQuery(Employee.class);
jdoQuery.asReportQuery();
jdoQuery.addCount();
jdoQuery.addMinimum("min_salary", jdoQuery.getExpressionBuilder().get("salary"));
jdoQuery.addMaximum("max_salary", jdoQuery.getExpressionBuilder().get("salary"));
jdoQuery.addAverage("average_salary", jdoQuery.getExpressionBuilder().get("salary"));
// Return a vector of one DatabaseRow that contains the reported information
Vector reportQueryResults = (Vector) jdoQuery.execute();

Example 2-18 Use a customized DirectReadQuery to read the employee id column

DirectReadQuery topLinkQuery = new DirectReadQuery();
topLinkQuery.setSQLString("SELECT EMP_ID FROM EMPLOYEE");
JDOQuery jdoQuery = (JDOQuery) manager.newQuery();
jdoQuery.setQuery(topLinkQuery);
// Return a Vector containing the id values
Vector ids = (Vector) jdoQuery.execute();

Reference

Table 2-10 and Table 2-11 summarize the most common public methods for the JDO Query API and TopLink extended API:

For a complete description of all available methods for the JDO Query API and TopLink extended API, see the TopLink JavaDocs.

Table 2-10 Elements for JDO Query API

Element: Close a cursored result
Methods:
  close(Object queryResult)

Element: Declare query parameters
Methods:
  declareParameters(String parameters)

Element: Execute the query
Methods:
  execute()
  execute(Object arg1)
  execute(Object arg1, Object arg2)
  execute(Object arg1, Object arg2, Object arg3)
  executeWithArray(java.lang.Object[] arg1)
  executeWithMap(Map arg1)

Element: Cache mode for query results
Default: the ignoreCache setting from the persistence manager
Methods:
  getIgnoreCache()
  setIgnoreCache(boolean ignoreCache)

Element: PersistenceManager
Methods:
  getPersistenceManager()

Element: ReadObjectQuery, ReadAllQuery, ReportQuery
Methods:
  setClass(Class queryClass)

Element: ReadAllQuery
Methods:
  setOrdering(String ordering)

Table 2-11 Elements for TopLink Extended API

Element: Convert the query
Methods:
  asReadAllQuery()
  asReadObjectQuery()
  asReportQuery()

Element: Access the TopLink query
Default: ReadAllQuery
Methods:
  getQuery()
  setQuery(DatabaseQuery newQuery)

Element: ReadObjectQuery, ReadAllQuery, ReportQuery
Methods:
  acquireLocks()
  acquireLocksWithoutWaiting()
  addJoinedAttribute(String attributeName)
  addJoinedAttribute(Expression attributeExpression)
  addPartialAttribute(String attributeName)
  addPartialAttribute(Expression attributeExpression)
  checkCacheOnly()
  dontAcquireLocks()
  dontRefreshIdentityMapResult()
  dontRefreshRemoteIdentityMapResult()
  getExampleObject()
  getExpressionBuilder()
  setQueryByExampleFilter(Object exampleObject)
  setQueryByExamplePolicy(QueryByExamplePolicy newPolicy)
  setShouldRefreshIdentityMapResult(boolean shouldRefreshIdentityMapResult)
  shouldRefreshIdentityMapResult()

Element: ReadObjectQuery
Methods:
  checkCacheByExactPrimaryKey()
  checkCacheByPrimaryKey()
  checkCacheThenDatabase()
  conformResultsInUnitOfWork()
  getReadObjectQuery()

Element: ReadAllQuery
Methods:
  addAscendingOrdering(String queryKeyName)
  addDescendingOrdering(String queryKeyName)
  addOrdering(Expression orderingExpression)
  addBatchReadAttribute(String attributeName)
  addBatchReadAttribute(Expression attributeExpression)
  addStandardDeviation(String itemName)
  addStandardDeviation(String itemName, Expression attributeExpression)
  addSum(String itemName)
  addSum(String itemName, Expression attributeExpression)
  addVariance(String itemName)
  addVariance(String itemName, Expression attributeExpression)
  getReadAllQuery()
  useCollectionClass(Class concreteClass)
  useCursoredStream()
  useCursoredStream(int initialReadSize, int pageSize)
  useCursoredStream(int initialReadSize, int pageSize, ValueReadQuery sizeQuery)
  useDistinct()
  useMapClass(Class concreteClass, String methodName)
  useScrollableCursor()
  useScrollableCursor(int pageSize)

Element: ReportQuery
Methods:
  addAttribute(String itemName)
  addAttribute(String itemName, Expression attributeExpression)
  addAverage(String itemName)
  addAverage(String itemName, Expression attributeExpression)
  addCount()
  addCount(String itemName)
  addCount(String itemName, Expression attributeExpression)
  addGrouping(String attributeName)
  addGrouping(Expression expression)
  addItem(String itemName, Expression attributeExpression)
  addMaximum(String itemName)
  addMaximum(String itemName, Expression attributeExpression)
  addMinimum(String itemName)
  addMinimum(String itemName, Expression attributeExpression)
  getReportQuery()

Element: DatabaseQuery
Methods:
  addArgument(String argumentName)
  bindAllParameters()
  cacheStatement()
  cascadeAllParts()
  cascadePrivateParts()
  dontBindAllParameters()
  dontCacheStatement()
  dontCascadeParts()
  dontCheckCache()
  dontMaintainCache()
  dontUseDistinct()
  getQueryTimeout()
  getReferenceClass()
  getSelectionCriteria()
  refreshIdentityMapResult()
  setCall(Call call)
  setEJBQLString(String ejbqlString)
  setFilter(Expression selectionCriteria)
  setQueryTimeout(int queryTimeout)
  setSQLString(String sqlString)
  setShouldBindAllParameters(boolean shouldBindAllParameters)
  setShouldCacheStatement(boolean shouldCacheStatement)
  setShouldMaintainCache(boolean shouldMaintainCache)
  shouldBindAllParameters()
  shouldCacheStatement()
  shouldCascadeAllParts()
  shouldCascadeParts()
  shouldCascadePrivateParts()
  shouldMaintainCache()
JDOTransaction

The JDOTransaction class implements the JDO Transaction interface. It defines the basic begin, commit, and rollback APIs, as well as synchronization callbacks within the UnitOfWork, and supports the optional non-transactional read JDO feature.

Read Modes

The read mode of a JDO transaction is set by calling the setNontransactionalRead() method.


Note:

If the transaction is active when changing read mode, an exception will be thrown.


The read modes are:

Non-Transactional Read

Non-transactional reads provide data from the database, but do not attempt to update the database with any changes made to the data when the transaction is committed. This is the default transaction mode from PersistenceManagerFactory. Non-transactional reads support nested Units of Work.

When queries are executed in non-transactional read mode, their results are not subject to the transactional boundary. To update objects from the queries' results, users must modify objects through their transactional instances.

To enable non-transactional read mode, set the non-transactional read flag to true.

Transactional Read

Transactional reads provide data from the database and write any changes to the objects back to the database when the transaction commits. When transactional read is used, TopLink uses the same UnitOfWork for all data store interaction (begin, commit, rollback). This can cause the cache to grow very large over time, so this mode should only be used with short-lived PersistenceManager instances to allow the UnitOfWork to be garbage collected.

When queries are executed in transactional read mode, their results are transactional instances and they are subject to the transactional boundary. Objects can be updated from the result of a query that is executed in transactional mode.

Because the same UnitOfWork is used in this mode, the transaction is always active and must be released when the read mode is changed from transactional read to non-transactional read.


Caution:

It is important to ensure that all changes are committed before calling the TopLink extended release() API to release the transaction and its UnitOfWork, and before setting the non-transactional read flag to true. Failure to do so can result in the loss of the transaction.


To enable transactional read mode, set the non-transactional read flag to false.
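The following sketch illustrates switching between the two read modes. It assumes an existing JDOPersistenceManager named manager, and that the release() method mentioned in the caution above is available on the JDOTransaction:

JDOTransaction transaction = (JDOTransaction) manager.currentTransaction();
// Switch to transactional read mode; query results are now transactional instances
transaction.setNontransactionalRead(false);
// ... execute queries and modify the transactional results ...
transaction.commit();
// Commit all changes, then release the transaction's UnitOfWork before switching back
transaction.release();
transaction.setNontransactionalRead(true);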

Synchronization

A Synchronization listener can be registered with the transaction to be notified at transaction completion. The beforeCompletion and afterCompletion methods are called when the pre-commit and post-commit events of the UnitOfWork are triggered, respectively.
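A minimal sketch of registering such a listener is shown below; it assumes the standard javax.transaction.Synchronization interface, the JDO Transaction.setSynchronization() registration method, and an existing PersistenceManager named manager:

javax.transaction.Synchronization listener = new javax.transaction.Synchronization() {
    public void beforeCompletion() {
        // Invoked when the UnitOfWork pre-commit event is triggered
    }
    public void afterCompletion(int status) {
        // Invoked when the UnitOfWork post-commit event is triggered
    }
};
manager.currentTransaction().setSynchronization(listener);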

Running the TopLink JDO demo

TopLink includes a demo that illustrates some of the JDO functionality. oracle.toplink.demos.employee.jdo.JDODemo is based on the project oracle.toplink.demos.employee.relational.EmployeeProject and is configured to connect to a Microsoft Access database. The database connection code is in the applyLogin() method of the EmployeeProject class. You may have to modify this method if you do not have a Microsoft Access database, or if the connection information for your database is different from what is specified in this code. When the database connection is set up properly, you can run the JDO demo.

Distributed Cache Synchronization

Within a distributed application environment, the correctness of the data that is available to clients is very important. This issue increases in complexity as the number of servers within an environment increases. To reduce the occurrences of incorrect data ("stale" data) being delivered to clients, TopLink provides a cache synchronization feature. This feature ensures that any client connecting to a cluster of servers is able to retrieve its changes, made through a UnitOfWork, from any other server in the cluster (provided that no changes have been made in the interim).

When enabled in a distributed application, changes made in one transaction on a particular node of the application are broadcast to all other nodes within the distributed application. This prevents stale data from spanning transactions and greatly reduces the chance that a transaction will begin with stale data.

Cache synchronization in no way eliminates the need for an effective locking policy, but it does reduce the number of optimistic lock exceptions and can therefore dramatically decrease the amount of work that the application must repeat.

Cache synchronization complements the implemented locking policies and can propagate changes synchronously or asynchronously.

Controlling the sessions: the Cache Synchronization Manager

The Cache Synchronization Manager offers several options for controlling the synchronized sessions:

Table 2-12 Properties for CacheSynchronizationManager

Use this code fragment: setIsAsynchronous(boolean isAsynchronous)
To: Set the propagation mode. See "Synchronous versus asynchronous updates".

Use this code fragment: setShouldRemoveConnectionOnError(boolean removeConnection)
To: Drop connections in the event of a communication error. See "Error handling".

Use this code fragment: addRemoteConnection(RemoteConnection connection)
To: Add new connections to the synchronized cache. See "Advanced options: Managing connections".

Use this code fragment: getRemoteConnections()
To: Get remote connections. See "Advanced options: Managing connections".

Use this code fragment: removeAllRemoteConnections()
To: Remove all remote connections from the cache synchronization service. See "Advanced options: Managing connections".

Use this code fragment: removeRemoteConnection(RemoteConnection connection)
To: Remove a specific remote connection. See "Advanced options: Managing connections".

Use this code fragment: connectToAllRemoteServers()
To: Connect to all servers participating in cache synchronization. See "Deprecated options: Connecting to all remote servers".

With the Session properties set, the session automatically connects to all other sessions on the same network when it logs in. Any changes made through the session are then broadcast to all other servers on the same network.

Using Cache Synchronization Manager options

The Cache Synchronization Manager enables you to specify two important functions: the update method for the servers in the caching service, and the error handling method used to control communication errors.

Synchronous versus asynchronous updates

The CacheSynchronizationManager enables you to specify how other sessions are updated when changes are made on a given node: changes can be propagated either synchronously or asynchronously through the setIsAsynchronous() method listed in Table 2-12, as the sketch below shows.
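For example (a sketch, assuming a session whose CacheSynchronizationManager has already been configured as in Example 2-19):

// Propagate changes to the other sessions synchronously
session.getCacheSynchronizationManager().setIsAsynchronous(false);
// Or propagate them asynchronously
session.getCacheSynchronizationManager().setIsAsynchronous(true);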

Error handling

The Cache Synchronization Manager offers very simple error handling: you can set it to drop connections in the event of a communications error, as shown below.
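For example (a sketch, again assuming a session with a configured CacheSynchronizationManager):

// Drop a remote connection when a communication error occurs
session.getCacheSynchronizationManager().setShouldRemoveConnectionOnError(true);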

Advanced options: Managing connections

The Cache Synchronization Manager includes several advanced API options for managing connections. These options are listed in Table 2-12, and enable you to get or add connections, as well as remove specific connections or all connections from a cache synchronization service. Note that these options are considered advanced functionality that is not typically required to run a cache synchronization pool.

Deprecated options: Connecting to all remote servers

The Cache Synchronization Manager continues to support the connectToAllRemoteServers functionality. However, this support should be considered only as a service to legacy applications, and should not be used in new ones.

Using a clustering service

The clustering services for cache synchronization have the following attributes:

Multicast Group IP

The IP address used by the sessions for multicast communication. All sessions that share the same multicast group IP and port send changes to each other.

Multicast Port

The port used for multicast communication.

Time To Live

The number of 'hops' that a multicast packet makes on the network before stopping. This has more to do with network configuration than with the number of nodes connected.

announcementDelay

This setting is used by the ClusteringService to determine how long to wait between making the Remote Service available and announcing its existence. This is required in systems where there is a propagation delay when binding the services into JNDI.

Implementing a custom Clustering Service

You can implement a custom Clustering Service to support cache synchronization. This advanced option must respond to, and be usable in the same way as, a Clustering Service supplied with the Foundation Library. Custom Clustering Services include all of the components of a regular clustering service (multicast group IP, multicast port, and Time To Live setting).

Configuring cache synchronization

To configure a TopLink Session to use cache synchronization, set the Session to use a Cache Synchronization Manager with a particular Clustering Service in the Session properties. This class controls the interaction with other Sessions, including accepting changes and connections from Sessions and sending information to all other Sessions.

The Cache Synchronization Manager requires the URL of the naming service. The Clustering Services are organized by communication framework and then by naming service. The implementations shipped with the Foundation Library are as follows:

Table 2-13 Cache Synchronization implementations shipped with the Foundation Library

Name                        | Naming Service Type | Framework
RMIClusteringService        | RMI registry        | RMI
RMIJNDIClusteringService    | JNDI                | RMI
CORBAJNDIClusteringService  | JNDI                | CORBA (one for each of Sun, Orbix, and VisiBroker)
JMSClusteringService        | JNDI                | JMS

If you need to implement your own proprietary communications protocol, consult the RMIRemoteSessionControllerDispatcher and RMIRemoteConnection classes shipped with TopLink.

The Session may also be configured through code.

Example 2-19 Using a simple URL for RMI registry

session.setCacheSynchronizationManager(new oracle.toplink.remote.CacheSynchronizationManager());
// Simple URL used for the RMI registry
session.getCacheSynchronizationManager().setLocalHostURL("localhost:1099");
session.getCacheSynchronizationManager().setClusteringServiceClassType(oracle.toplink.remote.rmi.RMIClusteringService.class);

Example 2-20 Setting up clustering with a non-default multicast group

session.setCacheSynchronizationManager(new oracle.toplink.remote.CacheSynchronizationManager());
// Simple URL used for the RMI registry
session.getCacheSynchronizationManager().setLocalHostURL("localhost:1099");
// Set up the Clustering Service with a non-default multicast group. Note that the
// multicast group must start with 226.x.x.x and cannot be 226.0.0.1. The port can
// be any value. Set the same multicast IP and port number for all sessions that
// you wish to synchronize.
RMIClusteringService clusteringService = new RMIClusteringService("226.3.4.5", 3456, session);
session.getCacheSynchronizationManager().setClusteringService(clusteringService);

Connecting the sessions

To start the framework for synchronizing the sessions:
  1. Instantiate a CacheSynchronizationManager.

  2. Add this manager to the current ServerSession.

  3. Instantiate a RemoteSessionDispatcher, and add this dispatcher to the CacheSynchronizationManager.

  4. Make the RemoteSessionDispatcher available in a global space such as the RMI registry.

    This session is now able to receive synchronization updates and new connections from other servers.

To connect the servers:
  1. Create a RemoteConnection for the communications framework.

  2. Retrieve the RemoteSessionDispatcher of a session that is to be synchronized with.

  3. Add this RemoteConnection to the CacheSynchronizationManager.

    The current server connects to the owner session of the dispatcher, and adds that server to the list of servers to synchronize with.

    The distributed session is automatically notified of this session's existence, and adds this session to its list of synchronization participants.

Example 2-21 Adding the RemoteSessionDispatcher to the current Session

CacheSynchronizationManager synchManager = new CacheSynchronizationManager();
getSession().setCacheSynchronizationManager(synchManager);
RMIRemoteSessionControllerDispatcher controller = new oracle.toplink.remote.rmi.RMIRemoteSessionControllerDispatcher(getSession());
synchManager.setSessionRemoteController(controller);
// Look up and connect to another session
RemoteConnection connection = new RMIConnection((RMIRemoteSessionController) registry.lookup("Server2"));
// Connect to the distributed session and notify that server of this session's existence
getSession().getCacheSynchronizationManager().addRemoteConnection(connection);
// Make the current server available in the registry; your implementation of
// distributing the RemoteSessionDispatcher may differ
registry.rebind("Server1", controller);

Using Java Messaging Service

Java Messaging Service (JMS) is a specification that provides developers with a standard API and pre-built implementations of many common messaging protocols. JMS can also be used to build a more scalable cache synchronization implementation.

TopLink integrates with the JMS publish/subscribe mechanism. For more information on this mechanism, consult the JMS specification available on the Sun web site (http://www.sun.com).

Preparing to use JMS

A JMS service must be set up outside of TopLink before TopLink can leverage the service. To set up the service, the developer must create the JMS topic and the topic connection factory that TopLink will use for cache synchronization.

These steps are completed in the software that provides the JMS service. For more information on completing these steps, see the documentation provided with that software.

Setting up JMS in the session configuration file

JMS messaging is typically established in the session configuration file (for example, sessions.xml), although it can also be set up in code.

The following example illustrates the use of all JMS options in a typical sessions.xml file:

<cache-synchronization-manager>
   <clustering-service>oracle.toplink.remote.jms.JMSClusteringService</clustering-service>
   <should-remove-connection-on-error>false</should-remove-connection-on-error>
   <!-- Both of the following tags are user-specified and must correspond to the
        settings that the user has made, manually, to the JMS service -->
   <jms-topic-connection-factory-name>TopicConectionFactory</jms-topic-connection-factory-name>
   <jms-topic-name>TopLinkCacheSynchTopic</jms-topic-name>
   <!-- Both of the following tags may be required if TopLink is not running in the
        same VM as the JNDI service -->
   <naming-service-url>t3://localhost:7001</naming-service-url>
   <naming-service-initial-context-factory>weblogic.jndi.WLInitialContextFactory</naming-service-initial-context-factory>
</cache-synchronization-manager>

Setting up JMS in Java

JMS support includes new API calls. The following API is required to implement JMS in Java:

public void setTopicConnectionFactoryName(String jndiName);
public void setTopicName(String topicName);

If the JMS is not running on the same virtual machine as the JNDI service, you may also have to include the following:

public void setLocalHostURL(String jndiServiceURL);
public void setInitialContextFactoryName(String initialContextFactoryName);

The following example illustrates a typical implementation of JMS:

this.session.setCacheSynchronizationManager(new oracle.toplink.remote.CacheSynchronizationManager());
JMSClusteringService clusteringService = new oracle.toplink.remote.jms.JMSClusteringService(this.session);
clusteringService.setLocalHostURL("t3://localhost:7001");
clusteringService.setTopicConnectionFactoryName("TopicConectionFactory");
clusteringService.setTopicName("TopLinkCacheSynchTopic");
this.session.getCacheSynchronizationManager().setClusteringService(clusteringService);

