Oracle9iAS TopLink Foundation Library Guide Release 2 (9.0.3) Part Number B10064-01
A software development kit (SDK) is a programming package that enables a programmer to develop applications for a specific platform. TopLink provides an SDK for non-relational database access and eXtensible Markup Language (XML) support. This chapter discusses the SDK.
The TopLink SDK allows you to extend TopLink to access objects stored on non-relational data stores. To take advantage of the SDK, you need to develop a number of classes that TopLink can use to access your particular data store. You also need to take advantage of a number of new TopLink Mappings, as well as many customization hooks provided by TopLink that are not used by a typical application accessing objects stored on a relational database.
There are four major steps to taking advantage of the TopLink SDK:

1. If necessary, develop an Accessor that maintains a connection to your data store.
2. Develop the Calls that read and write your data.
3. Develop the code that converts your data to and from the DatabaseRows used by the Calls.
4. Define the Descriptors and Mappings that TopLink uses to read and write your objects.
If necessary, TopLink uses your implementation of the interface oracle.toplink.internal.databaseaccess.Accessor to maintain a "connection" to your non-relational data store. Developing an Accessor is optional; whether you need one is determined by how you decide to gain visibility to a given connection to your data store. For example, instead of using a TopLink Accessor, you could store the connection in a well-known Singleton and have your Calls use that Singleton to gain access to your data store.
If you do not define your own Accessor, the TopLink SDK simply creates an instance of oracle.toplink.sdk.SDKAccessor and uses it during execution. The SDKAccessor implements the Accessor interface, but provides little or no behavior behind the protocol the interface requires.
If you do define your own Accessor, you must implement the Accessor interface. You can accomplish this by subclassing SDKAccessor and implementing those methods that are supported by your data store, ignoring the others. This is particularly useful if you want TopLink to take advantage of any support for transaction processing offered by your data store.
When logging in, a TopLink Session uses your Accessor to establish a connection to your data store by calling the method connect(DatabaseLogin, Session).
The DatabaseLogin passed in holds a number of settings, including the user ID and password set by your application. See the documentation on DatabaseLogin for information on other settings. Other, user-defined properties can be stored in the DatabaseLogin by your application and used by your Accessor to configure its connection.
TopLink occasionally queries the status of your Accessor's connection to your data store by calling the method isConnected(). This method returns true if the Accessor still has a connection. Whether the Accessor actually "pings" your data store to verify the viability of the connection is optional, as pinging can cause serious performance degradation.
If your Accessor's connection has timed out or been temporarily disconnected, your application can attempt to reconnect by calling the method reestablishConnection(Session). TopLink does not call this method directly; it is called by your application whenever it makes sense for the application to attempt a reconnect.
When logging out, a TopLink Session uses your Accessor to disconnect from your data store by calling the method disconnect(Session).
During execution of your application, the TopLink Session holds on to your Accessor and uses it whenever a Call needs to be executed by calling the method executeCall(Call, DatabaseRow, Session). Typically, the implementation of this executeCall method simply logs the activity back to the Session, if necessary, and delegates the actual interaction with the data store to the Call by calling the method Call.execute(DatabaseRow, Accessor), passing itself in as a parameter.
If any of your Calls need to be executed together, within the context of a transaction, TopLink indicates to your Accessor that your connection should begin a transaction by calling the method beginTransaction(Session). If any Exceptions occur during the execution of the Calls contained within the transaction, TopLink rolls back the transaction by calling rollbackTransaction(Session). If all the Calls execute successfully, TopLink commits the transaction by calling commitTransaction(Session).
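The Accessor life cycle described above might be sketched as follows. This is a minimal sketch, assuming a hypothetical FlatFileConnection class standing in for whatever connection object your data store provides; only the methods your data store actually supports need real implementations, and the remainder can be inherited from SDKAccessor.

```java
public class FlatFileAccessor extends SDKAccessor {

    // Hypothetical connection object for the data store.
    private FlatFileConnection connection;

    public void connect(DatabaseLogin login, Session session) {
        // Use the standard settings (and any user-defined properties
        // stored in the DatabaseLogin) to open the connection.
        this.connection =
            FlatFileConnection.open(login.getUserName(), login.getPassword());
    }

    public boolean isConnected() {
        // Report cached state; avoid "pinging" the data store here,
        // since that can cause serious performance degradation.
        return this.connection != null && this.connection.isOpen();
    }

    public void disconnect(Session session) {
        this.connection.close();
        this.connection = null;
    }

    // Delegate transaction demarcation to the data store, if it
    // supports transaction processing.
    public void beginTransaction(Session session) {
        this.connection.beginTransaction();
    }

    public void commitTransaction(Session session) {
        this.connection.commit();
    }

    public void rollbackTransaction(Session session) {
        this.connection.rollback();
    }
}
```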
TopLink Calls are the hooks where TopLink calls out to your code for reading and writing your non-relational data. To write a Call for the TopLink SDK, develop a class that implements the interface oracle.toplink.sdk.SDKCall (which extends the interface oracle.toplink.queryframework.Call). This requires you to implement a number of methods. Alternatively, you can subclass oracle.toplink.sdk.AbstractSDKCall and, at least initially, implement a single method, execute(DatabaseRow, Accessor). Most of your development effort will be concentrated on implementing this method.
The following outline of the various Calls contains little sample code, because the code for Calls is specific to your particular data store.
If you would like to see an example implementation of these Calls, review the code for the XML Calls in the package oracle.toplink.xml. These are also discussed in the following section, "Using TopLink XML support".
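As a rough skeleton, a Read Object Call built by subclassing AbstractSDKCall might look like the following. The FlatFileAccessor and its readRecord method are hypothetical stand-ins for your own data store access code:

```java
public class EmployeeReadCall extends AbstractSDKCall {

    public Object execute(DatabaseRow translationRow, Accessor accessor)
            throws SDKDataStoreException {
        // The translation row holds the primary key field(s) for the
        // object to be read.
        Object id = translationRow.get("employee.id");

        // Hypothetical data store access via a custom Accessor; this
        // returns a DatabaseRow built from the record's data, or null
        // if no record is found. On a data store failure, it should
        // throw an SDKDataStoreException.
        return ((FlatFileAccessor) accessor).readRecord("employee", id);
    }
}
```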
At a minimum, you must implement the following calls for every persistent Class that is stored in the non-relational data store:
Depending on the capabilities of your data store, you may need to implement any number of the following custom Calls:
If you want to use TopLink relationship Mappings (for example, oracle.toplink.mappings.OneToOneMapping or oracle.toplink.sdk.SDKObjectCollectionMapping), you must also implement the appropriate Calls for reading the reference object(s) for each of the Mappings.
If appropriate, any of these Calls can be divided into multiple Calls and combined within a single query.
A Read Object Call reads the data required to build a single object for a specified primary key. The DatabaseRow passed into a Read Object Call is populated with values for the primary key fields for the object to be read from the data store. The Call returns a single DatabaseRow for the object specified by the primary key.
A Read All Call reads the data required to build a collection of all the objects (instances) for a particular Class. The DatabaseRow passed into a simple, Class-level Read All Call is empty. The Call returns a collection of all the DatabaseRows for the appropriate Class.
An Insert Call takes a DatabaseRow of the data for a newly created object and inserts it on the appropriate data store. The DatabaseRow passed into an Insert Call contains values for all the mapped fields for the object to be inserted on the data store. The Call returns a count of the number of rows inserted, typically one.
An Update Call takes a DatabaseRow of the data for a recently modified object and writes it to the appropriate data store. The DatabaseRow passed into an Update Call is populated with values for the primary key fields for the object to be updated on the data store. The Call's associated ModifyQuery contains another DatabaseRow that contains values for all the mapped fields for the object to be updated on the data store. The Call returns a count of the number of rows updated, typically one.
A Delete Call deletes the data from the appropriate data store for a specified primary key. The DatabaseRow passed into a Delete Call is populated with values for the primary key fields for the object to be deleted from the data store. The Call returns a count of the number of rows deleted, typically one.
A Does Exist Call simply checks for the existence of data for a specified primary key. This allows TopLink to determine whether an Insert or an Update should be performed for that primary key. The DatabaseRow passed into a Does Exist Call is populated with values for the primary key fields for the object to be inserted or updated on the data store. The Call returns null if the object does not exist on the data store and a DatabaseRow if it does.
A custom Call can be written for any other capabilities of your non-relational data store. Like a normal TopLink Call, a custom Call can be parameterized. Custom Calls can be stored in named queries in the TopLink DatabaseSession or in any TopLink Descriptor. The DatabaseRow passed into a custom Call is populated with values for the parameters defined for the query. The Call returns whatever is appropriate for the containing query.
The DatabaseRows that are passed into your Calls and returned by your Calls are like the normal DatabaseRows used by TopLink for relational database activity (these are very similar to hash tables, containing simple key/value pairs), with the additional capability of holding nested DatabaseRows or nested direct values. This allows TopLink to manipulate non-normalized, hierarchical data.
Nested DatabaseRows and direct values are manipulated via an oracle.toplink.sdk.SDKFieldValue. Within the TopLink SDK, any field in a DatabaseRow can have a value that is an instance of SDKFieldValue. An SDKFieldValue can hold one or more nested DatabaseRows or direct values. ("Direct values" are objects that do not have TopLink Descriptors and are typically placed directly into the containing object without any mapping; for example, Strings, Dates, and Numbers.) An SDKFieldValue can also have a data type name indicating the "type" of elements held in the nested collection. Whether this data type name is required is determined by the data store's requirements for nested data elements.
Nested DatabaseRows, themselves, can also contain nested DatabaseRows, and so on. There is no limit to the nesting.
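For example, a doubly nested structure might be built as follows. This is a sketch only; the field and data type names are hypothetical, and the SDKFieldValue factory method shown is the same one used in the SDKAggregateObjectMapping examples later in this chapter:

```java
// A row for a country, nested inside a row for an address,
// nested inside the base row for an employee.
DatabaseRow countryRow = new DatabaseRow();
countryRow.put("country.name", "Canada");

DatabaseRow addressRow = new DatabaseRow();
addressRow.put("address.city", "Ottawa");
Vector countryRows = new Vector();
countryRows.addElement(countryRow);
addressRow.put("address.country",
    SDKFieldValue.forDatabaseRows(countryRows, "country"));

DatabaseRow row = new DatabaseRow();
Vector addressRows = new Vector();
addressRows.addElement(addressRow);
row.put("employee.address",
    SDKFieldValue.forDatabaseRows(addressRows, "address"));
```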
There may be times when the names of fields expected by your TopLink Descriptors and DatabaseMappings are different from those generated by your data store, and vice versa. This is particularly true when dealing with aggregate objects: the aggregate Descriptor is defined in terms of a single set of field names, but a number of different AggregateMappings may reference the same aggregate Descriptor, each expecting a different set of field names for the aggregate data. If this is the case, and you are subclassing oracle.toplink.sdk.AbstractSDKCall, you can take advantage of the SDK FieldTranslator to handle this situation. If you are not subclassing AbstractSDKCall, you can still take advantage of the SDK FieldTranslators by building them into your own Calls. Alternatively, you can create your own mechanism for translating field names between TopLink and your data store on a per-Call basis.
The interface oracle.toplink.sdk.FieldTranslator defines a simple read and write protocol for translating the field names in a DatabaseRow. The default implementation of this interface, appropriately named oracle.toplink.sdk.DefaultFieldTranslator, performs no translations at all.
Another implementation, oracle.toplink.sdk.SimpleFieldTranslator, provides a mechanism for translating the field names in a DatabaseRow, either before the row is written to the data store or after the row is read from the data store. SimpleFieldTranslator also allows you to wrap another FieldTranslator and have the read and write translations processed by the wrapped FieldTranslator as well.
A SimpleFieldTranslator also translates the field names of any nested DatabaseRows contained in SDKFieldValues.
Building a SimpleFieldTranslator is straightforward.
/* Add translations for the first and last name field names.
   F_NAME on the data store will be converted to FIRST_NAME for TopLink,
   and vice versa. Likewise for L_NAME and LAST_NAME. */
AbstractSDKCall call = new EmployeeCall();
SimpleFieldTranslator translator = new SimpleFieldTranslator();
translator.addReadTranslation("F_NAME", "FIRST_NAME");
translator.addReadTranslation("L_NAME", "LAST_NAME");
call.setFieldTranslator(translator);
AbstractSDKCall has some convenience methods that allow you to perform the same operation, without building your own translator.
AbstractSDKCall call = new EmployeeCall();
call.addReadTranslation("F_NAME", "FIRST_NAME");
call.addReadTranslation("L_NAME", "LAST_NAME");
If your Calls are all subclasses of AbstractSDKCall, you can take advantage of the convenience methods in SDKDescriptor that set the same field translations for all the Calls in the DescriptorQueryManager.
descriptor.addReadTranslation("F_NAME", "FIRST_NAME");
descriptor.addReadTranslation("L_NAME", "LAST_NAME");
If your Call encounters a problem while accessing your non-relational data store, it should throw an oracle.toplink.sdk.SDKDataStoreException (or a subclass of your own creation). This Exception has state for holding an error code, a Session, an internal Exception, a DatabaseQuery, and an Accessor. An Exception handler can use this state to recover from the thrown Exception or to provide useful information to the user or developer concerning the cause of the Exception.
Once you have developed your Calls, you can use them to define the Descriptors and Mappings that TopLink will use to read and write your objects. Instead of using the normal TopLink Descriptors, you will need to use a subclass of Descriptor, oracle.toplink.sdk.SDKDescriptor, that provides support for the new Mappings supplied by the SDK. Along with the new Mappings that allow non-normalized data to be accessed, most of the typical TopLink Mappings are supported by the SDK.
The TopLink SDK supports most of the properties of the standard Descriptor:
For more information on other supported and unsupported properties, see "Other supported properties" and "Unsupported properties" .
The code needed to build a basic SDKDescriptor is nearly identical to that used to build a normal Descriptor.

SDKDescriptor descriptor = new SDKDescriptor();
descriptor.setJavaClass(Employee.class);
descriptor.setTableName("employee");
descriptor.setPrimaryKeyFieldName("id");
The Java class is required. The table name is usually required; whether you use or allow multiple table names is determined by how the data is stored on your data store and translated by your Calls. It is probably easiest to map the Descriptor to a single table and use your Calls to merge data that might be spread across multiple tables into a single table (somewhat analogous to a relational "view"). The primary key field name is also required; it is used by TopLink to maintain object identity.
The major difference between building an SDKDescriptor and building a standard Descriptor is that you need to define all the custom Queries for the Descriptor's QueryManager. Typically, to do this, you would build a TopLink DatabaseQuery and put it in the Descriptor's QueryManager.
ReadObjectQuery query = new ReadObjectQuery();
query.setCall(new EmployeeReadCall());
descriptor.getQueryManager().setReadObjectQuery(query);
But SDKDescriptor has a number of convenience methods that simplify setting all these Calls.
descriptor.setReadObjectCall(new EmployeeReadCall());
descriptor.setReadAllCall(new EmployeeReadAllCall());
descriptor.setInsertCall(new EmployeeInsertCall());
descriptor.setUpdateCall(new EmployeeUpdateCall());
descriptor.setDeleteCall(new EmployeeDeleteCall());
descriptor.setDoesExistCall(new EmployeeDoesExistCall());
These Calls are instances of the Calls described in the section "Calls" . In addition to the standard CRUD (Create-Read-Update-Delete) operations represented by these Calls, you can also add custom Calls to an SDKDescriptor that allow your application to query your data store using selection criteria that can be set dynamically.
descriptor.addReadAllCall("readByLastName", new EmployeesByLastNameCall(), "lastName");
descriptor.addReadObjectCall("readByID", new EmployeeByIDCall(), "employeeID");
Custom Calls can be invoked by your application at run time with a parameter value that will be passed into the Call via a DatabaseRow. The Call is expected to communicate with your data store and return a DatabaseRow with the appropriate data to build an instance of the appropriate object (in this example, an Employee), as defined by the Mappings in the Descriptor.
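For example, the "readByLastName" query defined above might be invoked with something like the following; the exact executeQuery variants available are listed in the Session JavaDocs, and the argument value here is illustrative:

```java
Vector employees = (Vector) session.executeQuery(
    "readByLastName", Employee.class, "Hopper");
```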
If your data store provides support for sequence numbers, you can configure your Descriptor to use sequence numbers.
descriptor.setSequenceNumberName("employee");
descriptor.setSequenceNumberFieldName("id");
To take advantage of sequence numbers, you will also need to define a number of custom queries to be used by TopLink for querying and updating the sequence numbers. Custom queries are maintained by the TopLink DatabasePlatform. See the TopLink JavaDocs for more information.
The SDKDescriptor supports TopLink inheritance settings.
largeProjectDescriptor.setParentClass(Project.class);
Whether you configure subclass Descriptors to use tables in addition to the table(s) defined in the superclass Descriptor is determined by how your data store can store the data. Initially, you should try to define a single table in the root class Descriptor and not define any additional tables in the subclass Descriptors. Then your Calls can build up DatabaseRows for a single table, simply leaving out the fields that are not required for the particular subclass Descriptor.
The SDKDescriptor supports most other Descriptor properties without any special consideration, namely:
There are a few Descriptor properties that are currently unsupported by the TopLink SDK:
The TopLink SDK provides support for many of the DatabaseMappings in the base TopLink class library. In addition to the standard Mappings, the SDK provides four new Mappings that provide support for non-normalized, hierarchical data. For more information, see "SDK Mappings" .
The TopLink SDK supports all the base TopLink direct mappings:
The only Mapping that warrants special consideration is the SerializedObjectMapping. Any Read Calls that support Descriptors that have a SerializedObjectMapping must return the data for the SerializedObjectMapping as either a byte array (byte[]) or as a hexadecimal string representation of a byte array. Likewise, TopLink will pass the data for the SerializedObjectMapping to any Write Call as a byte array (byte[]).
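If your data store holds the serialized data as text, a helper along these lines can convert between the byte array TopLink uses and its hexadecimal string representation. This is a generic sketch, not part of the TopLink API:

```java
public class HexCodec {

    // Encode a byte array as a hexadecimal string, two digits per byte.
    public static String toHex(byte[] data) {
        StringBuilder hex = new StringBuilder(data.length * 2);
        for (int i = 0; i < data.length; i++) {
            hex.append(String.format("%02x", data[i] & 0xff));
        }
        return hex.toString();
    }

    // Decode a hexadecimal string back into a byte array.
    public static byte[] fromHex(String hex) {
        byte[] data = new byte[hex.length() / 2];
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) Integer.parseInt(
                hex.substring(2 * i, 2 * i + 2), 16);
        }
        return data;
    }
}
```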
The TopLink SDK provides support for a number of the base TopLink relationship mappings. Any functionality offered by unsupported Mappings can be found in alternative Mappings.
The TopLink SDK provides full support for private relationships. Whenever an object is written to the database, its private objects are also written to the database. Likewise, whenever an object is removed from the database, its private objects are also removed.
Your Calls do not need to be aware of private relationships. TopLink will invoke the appropriate Calls to write and delete the private objects when necessary. TopLink determines the appropriate Call for a particular private object by getting it from the object's DescriptorQueryManager.
The TopLink SDK provides full support for TopLink indirection, in all its various forms (basic, indirect container, and proxy). Indirection can be used to improve the performance of TopLink relationship mappings by delaying the reading of reference objects until they are actually needed by the original object or any of its client objects.
Your Calls do not need to be aware of indirection. TopLink will invoke the appropriate Call to read in reference objects when they are needed by the application. TopLink determines the appropriate Call for a particular (indirect) relationship by getting the custom selection query from the relationship's Mapping.
The TopLink SDK also supports TopLink container policies. A container policy allows you to specify which concrete class TopLink should use for storing query results, whether for a DatabaseQuery or for a CollectionMapping.
Calls do not need to be aware of the container policy. For ease of development, and to support JDK 1.1.x, your Calls simply use a java.util.Vector to handle any collection of DatabaseRows. TopLink converts any Vector of DatabaseRows into the appropriate Collection (or Map) of business objects and vice versa. TopLink determines the appropriate concrete container class by getting the container policy from the appropriate DatabaseQuery or DatabaseMapping.
Due to limitations of the AggregateObjectMapping, the TopLink SDK does not support this Mapping. Nearly equivalent behavior is provided with "SDKAggregateObjectMapping" .
The OneToOneMapping is fully supported by the TopLink SDK. You will need to provide the Mapping with a custom selection query.
ReadObjectQuery query = new ReadObjectQuery();
query.setCall(new ReadAddressForEmployeeCall());
mapping.setCustomSelectionQuery(query);
The Read Call used for the custom selection query will need to be aware of whether the mapping uses a source foreign key or a target foreign key. It will also need to know which field(s) hold the primary and/or foreign key value(s). As a result, it may be useful to construct the Call with the Mapping as a parameter (since the Mapping contains this information).
query.setCall(new ReadAddressForEmployeeCall(mapping));
The VariableOneToOneMapping is fully supported by the TopLink SDK. As with the OneToOneMapping, you must provide the Mapping with a custom selection query.
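The code mirrors the OneToOneMapping example above; only the Call changes (ReadContactForEmployeeCall is a hypothetical Call of your own):

```java
ReadObjectQuery query = new ReadObjectQuery();
query.setCall(new ReadContactForEmployeeCall());
mapping.setCustomSelectionQuery(query);
```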
The DirectCollectionMapping is fully supported by the TopLink SDK. You should use a DirectCollectionMapping if your data store requires you to perform an additional query to fetch the direct values related to a given object.
If the direct values are included, in an hierarchical fashion, within the DatabaseRow for a given object, you should use "SDKDirectCollectionMapping" .
Provide the DirectCollectionMapping with several custom queries. Because the objects contained in a direct collection do not have a Descriptor, you need to provide the Mapping with the queries that TopLink uses to insert and delete the reference objects. These queries are required in addition to the custom selection query.
DirectReadQuery readQuery = new DirectReadQuery();
readQuery.setCall(new ReadResponsibilitiesForEmployeeCall());
mapping.setCustomSelectionQuery(readQuery);

DataModifyQuery insertQuery = new DataModifyQuery();
insertQuery.setCall(new InsertResponsibilityForEmployeeCall());
mapping.setCustomInsertQuery(insertQuery);

DataModifyQuery deleteAllQuery = new DataModifyQuery();
deleteAllQuery.setCall(new DeleteResponsibilitiesForEmployeeCall());
mapping.setCustomDeleteAllQuery(deleteAllQuery);
The Mapping does not need a custom update query because, if any of the reference objects change, all of them are simply deleted and re-inserted.
The Read and Delete Calls used for this Mapping will need to be aware of which field(s) hold the primary key value(s). As a result, it may be useful to construct these Calls with the Mapping as a parameter (since the Mapping contains this information).
readQuery.setCall(new ReadResponsibilitiesForEmployeeCall(mapping));
deleteAllQuery.setCall(new DeleteResponsibilitiesForEmployeeCall(mapping));
The OneToManyMapping is fully supported by the TopLink SDK. You should use a OneToManyMapping if, like a typical relational model, the reference objects have foreign keys to the source object (target foreign keys). But if the foreign keys are "forward-pointing" (source foreign keys) and are included, in an hierarchical fashion, within the DatabaseRow for a given object, you should use "SDKObjectCollectionMapping" .
You will need to provide the Mapping with a custom selection query.
ReadAllQuery readQuery = new ReadAllQuery();
readQuery.setCall(new ReadManagedEmployeesForEmployeeCall());
mapping.setCustomSelectionQuery(readQuery);
Optionally, you can provide the Mapping with a custom delete-all query. If this query is present, TopLink will use it as a performance optimization to delete all the components in the relationship with a single query instead of deleting them one-by-one, when appropriate (for example, when the relationship is private to the containing object).
DeleteAllQuery deleteAllQuery = new DeleteAllQuery();
deleteAllQuery.setCall(new DeleteManagedEmployeesForEmployeeCall());
mapping.setCustomDeleteAllQuery(deleteAllQuery);
The Read and Delete Calls used for this Mapping must be aware of which field(s) hold the primary key value(s). As a result, it may be useful to construct these Calls with the Mapping as a parameter (since the Mapping contains this information).
readQuery.setCall(new ReadManagedEmployeesForEmployeeCall(mapping));
deleteAllQuery.setCall(new DeleteManagedEmployeesForEmployeeCall(mapping));
The AggregateCollectionMapping is fully supported by the TopLink SDK. The AggregateCollectionMapping is very similar to the OneToManyMapping, but it does not require a "back reference" Mapping from each of the target objects to the source object.
As with the OneToManyMapping, you need to provide the Mapping with a custom selection query and, optionally, a delete-all query.
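The configuration follows the same pattern as the OneToManyMapping; the Calls here are hypothetical examples of your own Calls:

```java
ReadAllQuery readQuery = new ReadAllQuery();
readQuery.setCall(new ReadPhoneNumbersForEmployeeCall());
mapping.setCustomSelectionQuery(readQuery);

DeleteAllQuery deleteAllQuery = new DeleteAllQuery();
deleteAllQuery.setCall(new DeletePhoneNumbersForEmployeeCall());
mapping.setCustomDeleteAllQuery(deleteAllQuery);
```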
Because the ManyToManyMapping is very closely tied to the relational implementation of many-to-many relationships, it is not supported by the TopLink SDK. A many-to-many relationship can be mapped with the TopLink SDK by using various combinations of the other collection Mappings (OneToManyMapping, SDKObjectCollectionMapping, etc.).
Because the StructureMapping is tied to the object-relational data model, it is not supported by the TopLink SDK. Nearly identical behavior is provided with "SDKAggregateObjectMapping" .
Because the ReferenceMapping is tied to the object-relational data model, it is not supported by the TopLink SDK. Nearly identical behavior can be found in the OneToOneMapping.
Because the ArrayMapping is tied to the object-relational data model, it is not supported by the TopLink SDK. Nearly identical behavior is provided with "SDKDirectCollectionMapping" .
Because the ObjectArrayMapping is tied to the object-relational data model, it is not supported by the TopLink SDK. Nearly identical behavior is provided with "SDKDirectCollectionMapping" .
Because the NestedTableMapping is tied to the object-relational data model, it is not supported by the TopLink SDK. Nearly identical behavior is provided with "SDKObjectCollectionMapping" .
The TopLink SDK provides four new Mappings that provide support for non-normalized, hierarchical data:
The SDKAggregateObjectMapping is similar in most ways to the standard AggregateObjectMapping. But there are several differences:
The SDKAggregateObjectMapping does not have an isNullAllowed flag. Since all the fields used to build the aggregate object are contained in a single field in the base DatabaseRow, there is no need to indicate whether multiple null field values should result in a null object placed in the attribute or a new instance of the aggregate object with all attributes set to null. If the attribute is null, the field value in the base DatabaseRow will be null. If the attribute contains an instance of the aggregate object with all null attributes, the field value in the base DatabaseRow will be an SDKFieldValue with a single, nested DatabaseRow whose field values are all null.
The code for building an SDKAggregateObjectMapping is similar to that for the AggregateObjectMapping. You need to specify an attribute name, a reference class, and a field name.
SDKAggregateObjectMapping mapping = new SDKAggregateObjectMapping();
mapping.setAttributeName("period");
mapping.setReferenceClass(EmploymentPeriod.class);
mapping.setFieldName("period");
descriptor.addMapping(mapping);
Because the data used to build the aggregate object is already nested within the base DatabaseRow (in other words, a separate query is not required to fetch the data for the aggregate object), the SDKAggregateObjectMapping does not require any custom queries. But any Read Call that builds the base DatabaseRow to be returned to TopLink must build the DatabaseRow properly. Likewise, any Write Calls must know what to expect in the DatabaseRows passed in by TopLink. Table 5-2 demonstrates an example of the values that would be contained in a typical DatabaseRow with data for an aggregate object.
In the example, an SDKAggregateObjectMapping maps the attribute period to the field employee.period and specifies the reference class as EmploymentPeriod. The value in the field employee.period is an SDKFieldValue with a single, nested DatabaseRow. This nested row will be used by the EmploymentPeriod Descriptor to build the aggregate object. The names of the fields in the nested DatabaseRow must match those expected by the EmploymentPeriod Descriptor.
The code in your Read Calls that builds the DatabaseRow to be returned to TopLink is straightforward.
DatabaseRow row = new DatabaseRow();
row.put("employee.id", new Integer(1));
row.put("employee.firstName", "Grace");
row.put("employee.lastName", "Hopper");

DatabaseRow nestedRow = new DatabaseRow();
nestedRow.put("employmentPeriod.startDate", "1943-01-01");
nestedRow.put("employmentPeriod.endDate", "1992-01-01");

Vector elements = new Vector();
elements.addElement(nestedRow);
SDKFieldValue value = SDKFieldValue.forDatabaseRows(elements, "employmentPeriod");
row.put("employee.period", value);
The code in your Write Calls that deconstructs the DatabaseRow generated by TopLink is also straightforward.
Integer id = (Integer) row.get("employee.id");
String firstName = (String) row.get("employee.firstName");
String lastName = (String) row.get("employee.lastName");

SDKFieldValue value = (SDKFieldValue) row.get("employee.period");
DatabaseRow nestedRow = (DatabaseRow) value.getElements().firstElement();
String startDate = (String) nestedRow.get("employmentPeriod.startDate");
String endDate = (String) nestedRow.get("employmentPeriod.endDate");
The SDKDirectCollectionMapping is similar to the standard DirectCollectionMapping in that it represents a collection of objects that are not TopLink-enabled (the objects are not associated with any TopLink Descriptors; for example, Strings). But an SDKDirectCollectionMapping is different from the standard DirectCollectionMapping in that the data representing the collection of objects is nested within the base DatabaseRow - a separate query to the data store is not required to gather up the data, the way it is for a standard DirectCollectionMapping.
The code for building an SDKDirectCollectionMapping is straightforward. You need to specify the attribute and field names. Optionally, you can specify the element data type name. Whether the element data type name is required is determined by your data store: if your data store needs something to indicate the "type" of each element in the direct collection, this setting can be used. Alternatively, this information can be determined by your Call.
SDKDirectCollectionMapping mapping = new SDKDirectCollectionMapping();
mapping.setAttributeName("responsibilitiesList");
mapping.setFieldName("responsibilities");
mapping.setElementDataTypeName("responsibility"); // optional
descriptor.addMapping(mapping);
The SDKDirectCollectionMapping also has a container policy that allows you to specify the concrete implementation of the Collection interface that holds the direct collection.
mapping.useCollectionClass(Stack.class);
The SDKDirectCollectionMapping also allows you to specify the Class of objects to be placed in the direct collection or the DatabaseRow. If possible, TopLink will convert the objects contained by the direct collection before setting the attribute in the object or before passing the collection to your Call.
// Strings stored on the data store will be converted to Classes and vice versa
mapping.setAttributeElementClass(Class.class);
mapping.setFieldElementClass(String.class);
Because the data used to build the direct collection is already nested within the base DatabaseRow (in other words, a separate query is not required to fetch the data for the direct collection), the SDKDirectCollectionMapping does not require any custom queries. But any Read Call that builds the base DatabaseRow to be returned to TopLink must build the DatabaseRow properly. Likewise, any Write Calls must know what to expect in the DatabaseRows passed in by TopLink.
Table 5-3 demonstrates examples of the values that would be contained in a typical DatabaseRow with data for a direct collection.
In the example, an SDKDirectCollectionMapping maps the attribute responsibilitiesList
to the field employee.responsibilities
. The value in the field employee.responsibilities
is an SDKFieldValue that contains a collection of Strings that make up the direct collection.
The code in your Read Calls that builds the DatabaseRow to be returned to TopLink is straightforward.
DatabaseRow row = new DatabaseRow();
row.put("employee.id", new Integer(1));
row.put("employee.firstName", "Grace");
row.put("employee.lastName", "Hopper");
Vector responsibilities = new Vector();
responsibilities.addElement("find bugs");
responsibilities.addElement("develop compilers");
SDKFieldValue value = SDKFieldValue.forDirectValues(responsibilities, "responsibility");
row.put("employee.responsibilities", value);
The code in your Write Calls that deconstructs the DatabaseRow generated by TopLink is also straightforward.
Integer id = (Integer) row.get("employee.id");
String firstName = (String) row.get("employee.firstName");
String lastName = (String) row.get("employee.lastName");
SDKFieldValue value = (SDKFieldValue) row.get("employee.responsibilities");
Vector responsibilities = value.getElements();
The SDKAggregateCollectionMapping is more akin to the SDKAggregateObjectMapping than the standard AggregateCollectionMapping
. The SDKAggregateCollectionMapping
is used for an attribute that is a collection of aggregate objects that are all constructed from data contained in the base DatabaseRow. (The standard AggregateCollectionMapping
is more like a OneToManyMapping for a private relationship.)
All the data used by the reference (aggregate) Descriptor to build the aggregate collection is contained in a collection of nested DatabaseRows, not in the base DatabaseRow. The base DatabaseRow has a single field mapped to the aggregate collection attribute that contains an SDKFieldValue. This SDKFieldValue holds the nested DatabaseRows, and these nested DatabaseRows each contain all the fields needed by the reference Descriptor to build a single element in the aggregate collection.
The code for building an SDKAggregateCollectionMapping is similar to that for the SDKAggregateObjectMapping. You need to specify an attribute name, a reference class, and a field name.
SDKAggregateCollectionMapping mapping = new SDKAggregateCollectionMapping();
mapping.setAttributeName("phoneNumbers");
mapping.setReferenceClass(PhoneNumber.class);
mapping.setFieldName("phoneNumbers");
descriptor.addMapping(mapping);
The SDKAggregateCollectionMapping also has a container policy that allows you to specify the concrete implementation of the Collection interface that holds the aggregate collection.
mapping.useCollectionClass(Stack.class);
Because the data used to build the aggregate collection is already nested within the base DatabaseRow (in other words, a separate query is not required to fetch the data for the aggregate collection), the SDKAggregateCollectionMapping does not require any custom queries. But any Read Call that builds the base DatabaseRow to be returned to TopLink must build the DatabaseRow properly. Likewise, any Write Calls must know what to expect in the DatabaseRows passed in by TopLink.
Table 5-4 demonstrates examples of the values that would be contained in a typical DatabaseRow with data for an aggregate collection.
In the example, an SDKAggregateCollectionMapping maps the attribute phoneNumbers
to the field employee.phoneNumbers
and specifies the reference class as PhoneNumber
. The value in the field employee.phoneNumbers
is an SDKFieldValue with a collection of nested DatabaseRows. These nested rows are used by the PhoneNumber Descriptor to build the elements of the aggregate collection. The names of the fields in the nested DatabaseRows must match those expected by the PhoneNumber Descriptor.
The code in your Read Calls that builds the DatabaseRow to be returned to TopLink is straightforward.
DatabaseRow row = new DatabaseRow();
row.put("employee.id", new Integer(1));
row.put("employee.firstName", "Grace");
row.put("employee.lastName", "Hopper");
Vector elements = new Vector();
DatabaseRow nestedRow = new DatabaseRow();
nestedRow.put("phone.areaCode", "888");
nestedRow.put("phone.number", "555-1212");
nestedRow.put("phone.type", "work");
elements.addElement(nestedRow);
nestedRow = new DatabaseRow();
nestedRow.put("phone.areaCode", "800");
nestedRow.put("phone.number", "555-1212");
nestedRow.put("phone.type", "work");
elements.addElement(nestedRow);
SDKFieldValue value = SDKFieldValue.forDatabaseRows(elements, "phone");
row.put("employee.phoneNumbers", value);
The code in your Write Calls that deconstructs the DatabaseRow generated by TopLink is also straightforward.
Integer id = (Integer) row.get("employee.id");
String firstName = (String) row.get("employee.firstName");
String lastName = (String) row.get("employee.lastName");
SDKFieldValue value = (SDKFieldValue) row.get("employee.phoneNumbers");
Enumeration e = value.getElements().elements();
while (e.hasMoreElements()) {
    DatabaseRow nestedRow = (DatabaseRow) e.nextElement();
    String areaCode = (String) nestedRow.get("phone.areaCode");
    String number = (String) nestedRow.get("phone.number");
    String type = (String) nestedRow.get("phone.type");
    // do stuff with the values
}
The SDKObjectCollectionMapping is similar to the standard OneToManyMapping, with one important difference. While the standard OneToManyMapping is used to map a collection of target objects that are stored on the database with foreign keys pointing back to the source object's primary key, the SDKObjectCollectionMapping is used to map a collection of target objects that are constructed from a collection of foreign keys contained in the base DatabaseRow that reference the target objects' primary keys. In other words, the foreign keys in a OneToManyMapping are "back-pointing"; the foreign keys in an SDKObjectCollectionMapping are "forward-pointing".
All the foreign keys used by the mapping to reference the target objects are contained in a collection of nested DatabaseRows, not in the base DatabaseRow. The base DatabaseRow has a single field mapped to the object collection attribute that contains an SDKFieldValue. This SDKFieldValue holds the nested DatabaseRows, and these nested DatabaseRows each contain all the fields needed to build a foreign key to an element object's primary key.
The code for building an SDKObjectCollectionMapping is similar to that for the OneToManyMapping. You need to specify an attribute name, a reference class, a field name, and the source foreign key/target key relationships. Optionally, you specify the reference data type name. Whether the reference data type name is required is determined by your data store: if your data store needs something to indicate the "type" of each reference in the collection of foreign keys, this setting can be used. Alternatively, this information can be determined by your Call. Because a separate query is required to read in the reference objects contained in the collection, you must build a custom selection query.
SDKObjectCollectionMapping mapping = new SDKObjectCollectionMapping();
mapping.setAttributeName("projects");
mapping.setReferenceClass(Project.class);
mapping.setFieldName("projects");
mapping.setSourceForeignKeyFieldName("projectId");
mapping.setReferenceDataTypeName("project"); // optional
mapping.setSelectionCall(new ReadProjectsForEmployeeCall());
descriptor.addMapping(mapping);
The SDKObjectCollectionMapping also has a container policy that allows you to specify the concrete implementation of the Collection interface that holds the collection of objects.
mapping.useCollectionClass(Stack.class);
Any Read Call that builds the base DatabaseRow to be returned to TopLink must build the DatabaseRow properly. Likewise, any Write Calls must know what to expect in the DatabaseRows passed in by TopLink. Table 5-5 demonstrates an example of the values that would be contained in a typical DatabaseRow with data for a collection of foreign keys.
In the example, an SDKObjectCollectionMapping maps the attribute projects
to the field employee.projects
and specifies the reference class as Project
. The value in the field employee.projects
is an SDKFieldValue with a collection of nested DatabaseRows.
Nested rows contain foreign keys that will be used by the Mapping's custom selection query to read in the elements of the object collection. The names of the fields in the nested DatabaseRows must match those expected by the custom selection query's Call.
The code in your Read Calls that builds the DatabaseRow to be returned to TopLink is straightforward.
DatabaseRow row = new DatabaseRow();
row.put("employee.id", new Integer(1));
row.put("employee.firstName", "Grace");
row.put("employee.lastName", "Hopper");
Vector elements = new Vector();
DatabaseRow nestedRow = new DatabaseRow();
nestedRow.put("project.projectId", new Integer(42));
elements.addElement(nestedRow);
nestedRow = new DatabaseRow();
nestedRow.put("project.projectId", new Integer(17));
elements.addElement(nestedRow);
SDKFieldValue value = SDKFieldValue.forDatabaseRows(elements, "project");
row.put("employee.projects", value);
The code in your Write Calls that deconstructs the DatabaseRow generated by
TopLink is also straightforward.
Integer id = (Integer) row.get("employee.id");
String firstName = (String) row.get("employee.firstName");
String lastName = (String) row.get("employee.lastName");
SDKFieldValue value = (SDKFieldValue) row.get("employee.projects");
Enumeration e = value.getElements().elements();
while (e.hasMoreElements()) {
    DatabaseRow nestedRow = (DatabaseRow) e.nextElement();
    Object projectId = nestedRow.get("project.projectId");
    // do stuff with the foreign key
}
After you have developed your Accessor and your Calls and have mapped your object model to your data store, you can configure and log in to a DatabaseSession and read and write your objects. There are several steps to configuring and logging in to a DatabaseSession for the TopLink SDK:
If you are using sequence numbers and you would like TopLink to manage them for you, you may need to create your own subclass of oracle.toplink.sdk.SDKPlatform
. If you are not using sequence numbers, you can simply use the default behavior in SDKPlatform and ignore this section.
TopLink uses the Platform classes to isolate the database platform-specific implementations of two major activities:
Because the TopLink SDK is generally unconcerned with SQL generation, the most likely reason to develop your own Platform is that your data store provides a mechanism for generating sequence numbers. If this is the case, you need to create your subclass and override the appropriate methods for building the Calls that read and update sequence numbers.
The sequence number Read Call should be built and returned by the method buildSelectSequenceCall()
. This Call will be invoked by TopLink when TopLink needs to read the value of a specific sequence number. The DatabaseRow passed into the Call will contain one field: the field name will be the sequenceNameFieldName
(as set in the SDKLogin); the field value will be the name of the sequence number whose current value should be returned by the Call.
The sequence number Update Call should be built and returned by the method buildUpdateSequenceCall()
. This Call will be invoked by TopLink when TopLink needs to update the value of a specific sequence number. The DatabaseRow passed into the Call will contain two fields:
sequenceNameFieldName
(as set in the SDKLogin); the field value will be the name of the sequence number whose value should be updated by the Call.
sequenceCounterFieldName
(again, as set in the SDKLogin); the field value will be the new value of the sequence number identified by the first field.
Once you have established whether you need a custom Platform, you can construct and configure your SDKLogin with it.
SDKLogin login = new SDKLogin(new EmployeePlatform());
If you do not need a custom Platform, you can simply use the default constructor for SDKLogin.
SDKLogin login = new SDKLogin();
If you are using a custom Accessor to maintain a connection to your data store, you will need to configure the Login to use it. This will allow TopLink to construct a new instance of your Accessor whenever a connection to the data store is required. If you are not using a custom Accessor, you do not need to set this property, and the Login will be configured to use the SDKAccessor class by default.
login.setAccessorClass(EmployeeAccessor.class);
After these settings are configured, you can configure the values of the more standard Login properties.
login.setUserName("user");
login.setPassword("password");
login.setSequenceTableName("sequence");
login.setSequenceNameFieldName("name");
login.setSequenceCounterFieldName("count");
You can also store other, non-TopLink-related properties in the Login. These properties can be used by your custom Accessor when it connects to the data store.
login.setProperty("foo", aFoo);
Foo anotherFoo = (Foo) login.getProperty("foo");
After you have configured your Login, you can build your TopLink Project. You create an instance of oracle.toplink.sessions.Project
, passing it your Login. Then you add your Descriptors to the Project.
Project project = new Project(login);
project.addDescriptor(buildEmployeeDescriptor());
project.addDescriptor(buildAddressDescriptor());
project.addDescriptor(buildProjectDescriptor());
// etc.
Finally, after you have your TopLink Project built, you can obtain a DatabaseSession (or ServerSession) and log in.
DatabaseSession session = project.createDatabaseSession();
session.login();
Now you can use the Session to query for objects, acquire a UnitOfWork, modify objects, and so on.
Vector employees = session.readAllObjects(Employee.class);
Employee employee = (Employee) employees.firstElement();
UnitOfWork uow = session.acquireUnitOfWork();
Employee employeeClone = (Employee) uow.registerObject(employee);
employeeClone.setSalary(employeeClone.getSalary() + 50);
uow.commit();
When you are finished with the Session, you can log out.
session.logout();
Currently, there are three major Session features that are unsupported by the TopLink SDK:
TopLink enables you to read and write objects from and to XML files. In fact, TopLink itself reflectively uses this capability to store the Descriptors, Mappings, and other objects that make up a TopLink Project. This capability to perform Object-XML (O-X) Mapping allows your applications to deal exclusively with objects instead of having to deal with the intricacies of XML parsing and deconstruction. This can be particularly helpful for applications that deal with exchanging data with other applications (for example, legacy applications or business partner applications).
There are not many differences between configuring your application to use standard TopLink and configuring it to use the XML extension in its default configuration.
XMLFileLogin login = new XMLFileLogin();
login.setBaseDirectoryName("C:\\Employee Database");

// set up the sequences
login.setSequenceRootElementName("sequence");
login.setSequenceNameElementName("name");
login.setSequenceCounterElementName("count");

// create the directories if they don't already exist
login.createDirectoriesAsNeeded();
Project project = new Project(login);
project.addDescriptor(buildEmployeeDescriptor());
project.addDescriptor(buildAddressDescriptor());
project.addDescriptor(buildProjectDescriptor());
// etc.
XMLDescriptor descriptor = new XMLDescriptor();
descriptor.setJavaClass(Employee.class);
descriptor.setRootElementName("employee");
descriptor.setPrimaryKeyElementName("id");
descriptor.setSequenceNumberName("employee");
descriptor.setSequenceNumberElementName("id");
// etc.
Limit yourself, at least initially, to using the standard Direct Mappings and the following relationship Mappings:
For the XML extension, OneToOneMappings and SDKObjectCollectionMappings require custom selection queries:
// 1:1 mapping
OneToOneMapping addressMapping = new OneToOneMapping();
addressMapping.setAttributeName("address");
addressMapping.setReferenceClass(Address.class);
addressMapping.privateOwnedRelationship();
addressMapping.setForeignKeyFieldName("addressId");

// build the custom selection query
ReadObjectQuery addressQuery = new ReadObjectQuery();
addressQuery.setCall(new XMLReadCall(addressMapping));
addressMapping.setCustomSelectionQuery(addressQuery);
descriptor.addMapping(addressMapping);

// 1:n mapping
SDKObjectCollectionMapping projectsMapping = new SDKObjectCollectionMapping();
projectsMapping.setAttributeName("projects");
projectsMapping.setReferenceClass(Project.class);
projectsMapping.setFieldName("projects");
projectsMapping.setSourceForeignKeyFieldName("projectId");
projectsMapping.setReferenceDataTypeName("project");

// use convenience method to build the custom selection query
projectsMapping.setSelectionCall(new XMLReadAllCall(projectsMapping));
descriptor.addMapping(projectsMapping);
DatabaseSession session = project.createDatabaseSession();
session.login();
(new XMLSchemaManager(session)).createSequences();
You can now use the session to query for objects, acquire a UnitOfWork, modify objects, and so on.
Vector employees = session.readAllObjects(Employee.class);
Employee employee = (Employee) employees.firstElement();
UnitOfWork uow = session.acquireUnitOfWork();
Employee employeeClone = (Employee) uow.registerObject(employee);
employeeClone.setSalary(employeeClone.getSalary() + 50);
uow.commit();
When you are finished with the Session, you can log out.
session.logout();
There are two main areas of the XML extension that can be customized in a straightforward fashion:
The classes that implement the support for O-X mapping are in the package oracle.toplink.xml
. These classes actually make up a simple example of how to use the TopLink SDK as described in the previous section. In addition to implementing the various SDK interfaces, the XML package defines its own set of interfaces that you can implement to slightly alter how your objects are mapped to XML documents without re-implementing the entire SDK suite of interfaces and subclasses.
The XML extension has its own set of implementations of the various SDK interfaces and subclasses:
The XML extension also defines its own set of interfaces that allow you to plug in your own implementation classes to easily alter the way your objects are mapped to XML documents:
The XMLFileAccessor is a subclass of the SDKAccessor that defines how XML documents are stored in a native file system. As a subclass of SDKAccessor, the XMLFileAccessor is not required to implement any of the Accessor protocol; and, in fact, it only implements the method connect(DatabaseLogin, Session). The XMLFileAccessor uses the standard SDK method of Call execution, and does not support transaction processing, which is a limitation typical of native file systems.
In addition to the Accessor interface, the XMLFileAccessor implements the XMLAccessor interface. The XMLAccessor interface defines the protocol necessary for fetching Streams of data for reading and writing XML documents. The XMLFileAccessor implements this protocol by wrapping Files in Streams that can be used by the XMLCalls to read or write XML documents.
The XMLAccessor methods defined for fetching a Stream (either a java.io.Reader or a java.io.Writer) typically require three parameters:
The XMLFileAccessor takes the values of these three parameters and resolves them to a File that will be wrapped with a Stream (either a java.io.FileReader
or a java.io.FileWriter
) to be returned to the XMLCall for processing. The File name is calculated in the following fashion:
Start with the base directory name, as configured in the XMLFileLogin (for example, C:\EmployeeDB
).
Append the root element name as a subdirectory (for example, C:\EmployeeDB\employee
).
Append the value of the primary key as the file name (for example, C:\EmployeeDB\employee\1234
).
Append the file extension, as configured in the XMLFileLogin (for example, C:\EmployeeDB\employee\1234.xml
).
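The resolution steps above can be sketched in plain Java. This is an illustration of the algorithm only, not the actual XMLFileAccessor code; the method and parameter names are assumptions chosen to mirror the XMLFileLogin settings.

```java
import java.io.File;

// Illustrative sketch (not TopLink code) of how an XML document is resolved
// to a File: base directory + root element name + primary key + extension.
class XmlFileResolver {

    static File resolve(String baseDirectoryName,
                        String rootElementName,
                        String primaryKeyValue,
                        String fileExtension) {
        // The root element name becomes a subdirectory of the base directory,
        // and the primary key value plus the extension becomes the file name.
        File rootDirectory = new File(baseDirectoryName, rootElementName);
        return new File(rootDirectory, primaryKeyValue + fileExtension);
    }
}
```

For example, resolve("C:\\EmployeeDB", "employee", "1234", ".xml") yields a path ending in employee/1234.xml, matching the steps listed above.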
The XMLFileAccessor has one other setting that is configured via the XMLFileLogin: createsDirectoriesAsNeeded
. If this property is set to true, the Accessor lazily creates directories as they are required, including the base directory. If this property is set to false (the default), the Accessor throws an XMLDataStoreException if it encounters a request for an XML document that resolves to a file in a directory that does not exist.
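This behavior can be sketched in plain Java. The sketch is an illustration only: the class and method names are assumptions, and a plain IOException stands in for the XMLDataStoreException thrown by the real Accessor.

```java
import java.io.File;
import java.io.IOException;

// Sketch (not TopLink code) of the createsDirectoriesAsNeeded policy:
// when enabled, missing directories are created lazily; when disabled,
// a missing directory is reported as an error.
class DirectoryPolicySketch {

    static void ensureDirectory(File directory, boolean createsDirectoriesAsNeeded)
            throws IOException {
        if (directory.isDirectory()) {
            return; // already present, nothing to do
        }
        if (createsDirectoriesAsNeeded) {
            // Lazily create the directory (and any missing parents).
            if (!directory.mkdirs()) {
                throw new IOException("could not create " + directory);
            }
        } else {
            // The real Accessor throws XMLDataStoreException here.
            throw new IOException("missing directory: " + directory);
        }
    }
}
```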
XMLCall and its subclasses are the layer between the Call interface used by TopLink DatabaseQueries and the XML document accessing protocol provided by an XMLAccessor. The XMLFileAccessor implements the XMLAccessor protocol and is used by the XMLCalls; while the XMLCalls implement the Call interface and are used by the standard TopLink DatabaseQueries. The DatabaseQueries are used by your client application and your Descriptors to read and write objects.
All the XMLCalls have two properties in common:
The XMLStreamPolicy is yet another interface that defines a protocol for fetching Streams of data for reading and writing XML documents. The default implementation used by the XMLCalls is XMLAccessorStreamPolicy. This implementation simply delegates every request for a Stream to the XMLAccessor. This policy allows the default behavior to be overridden on a per-Call basis. For example, in certain situations, you might want to specify a specific File that holds an XML document instead of relying on the XMLFileAccessor to resolve which File to use. (In fact, this behavior is already provided by XMLFileStreamPolicy and supported by the methods XMLCall.setFile(File)
and XMLCall.setFileName(String)
.)
The XMLTranslator is the object used by the XMLCalls to translate an XML document into a TopLink DatabaseRow and vice versa. This is another pluggable interface that allows you to modify the behavior of the XMLCalls. The XMLCalls' default implementation of XMLTranslator is DefaultXMLTranslator.
There are a number of subclasses of XMLCall that provide concrete implementations of Call (and SDKCall). The main difference among these classes is their respective implementations of the method Call.execute(DatabaseRow, Accessor)
.
Six of these subclasses are used for manipulating objects:
Four subclasses are used to manipulate un-mapped DatabaseRows (for example, raw data):
With a few exceptions, the following object-level Calls all require an association with a DatabaseQuery to operate successfully. This happens automatically when you build a DatabaseQuery and configure it to use a custom call, which is required when using the TopLink SDK and/or the TopLink XML extension. The Read Calls that are associated with a relationship Mapping do not require an associated DatabaseQuery.
Given an object-level DatabaseQuery or a OneToOneMapping, an XMLReadCall gets the appropriate XML document and converts it to a DatabaseRow to be mapped to the appropriate object. If the XMLReadCall has a reference to a OneToOneMapping, it will extract the foreign key for the Mapping's relationship from the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
. If no Mapping is present, the XMLReadCall extracts the primary key for the Query's associated Descriptor from the DatabaseRow. This key is then used to find the appropriate XML document.
Given an object-level DatabaseQuery or an SDKObjectCollectionMapping, an XMLReadAllCall gets the appropriate XML documents and converts them to a Vector of DatabaseRows to be mapped to the appropriate objects. If the XMLReadAllCall has a reference to an SDKObjectCollectionMapping, it extracts the foreign keys for the Mapping's relationship from the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
. The foreign keys are then used to find the appropriate XML documents. If no Mapping is present, the XMLReadAllCall determines the root element name for the Query's associated Descriptor and returns all the DatabaseRows for that root element name (a true read all).
An XMLInsertCall takes the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
and uses the primary key in it to find the appropriate XML document Stream. It then takes the "modify row" from the associated ModifyQuery, converts it to an XML document, and writes it out.
If the XML document already exists, an XMLDataStoreException is thrown.
Like an XMLInsertCall, an XMLUpdateCall takes the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
and uses the primary key in it to find the appropriate XML document Stream. It then takes the "modify row" from the associated ModifyQuery, converts it to an XML document, and writes it out.
If the XML document does not already exist, an XMLDataStoreException is thrown.
An XMLDeleteCall takes the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
and uses the primary key in it to find the appropriate XML document Stream. It then deletes this Stream.
If the XML document already existed, the Call returns a row count of one; if not, the Call returns a row count of zero.
An XMLDoesExistCall takes the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
and uses the primary key in it to find the appropriate XML document Stream. If the document exists, it is converted to a DatabaseRow that can be used to verify the object's existence; otherwise a null is returned.
Because XMLDataCalls, unlike the "Object-Level Calls", are not associated with a DatabaseQuery, a bit more up-front configuration is required. Every XMLDataCall requires a root element name and a set of ordered primary key element names. At run time, these settings are passed to the XMLStreamPolicy (and, usually, on to the XMLFileAccessor), along with the appropriate DatabaseRow, to determine the appropriate XML document Stream.
XMLDataReadCall call = new XMLDataReadCall();
call.setRootElementName("employee");
call.setPrimaryKeyElementName("id");
An XMLDataReadCall takes the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
and uses the primary key in it to find the appropriate XML document Stream and convert it to a DatabaseRow. To provide a consistent result object, this single DatabaseRow is returned inside a Vector.
If the XMLDataReadCall does not have any primary key element names set, it performs a simple read-all for all the XML documents with the specified root element name. These are converted and returned as a Vector of DatabaseRows.
XMLDataReadCalls can be further configured to specify which fields in the resulting DatabaseRow(s) should be returned and what their types should be.
XMLDataReadCall call = new XMLDataReadCall();
call.setRootElementName("employee");
call.setPrimaryKeyElementName("id");
call.setResultElementName("salary");
call.setResultElementType(java.math.BigDecimal.class);
An XMLDataInsertCall takes the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
and uses the primary key in it to find the appropriate XML document Stream. It then takes that same row, converts it to an XML document, and writes it out.
If the XML document already exists, an XMLDataStoreException is thrown.
An XMLDataUpdateCall takes the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
and uses the primary key in it to find the appropriate XML document Stream. It then takes that same row, converts it to an XML document, and writes it out.
If the XML document does not already exist, an XMLDataStoreException is thrown.
An XMLDataDeleteCall takes the DatabaseRow passed in to the method execute(DatabaseRow, Accessor)
and uses the primary key in it to find the appropriate XML document Stream. It then deletes this Stream.
If the XML document already existed, the Call returns a row count of one; if not, the Call returns a row count of zero.
XMLDescriptor is a subclass of SDKDescriptor that adds two bits of helpful behavior:
setRootElementName(String)
replaces the method setTableName(String)
; setPrimaryKeyElementName(String)
replaces setPrimaryKeyFieldName(String)
; and so on.
XMLPlatform is a subclass of SDKPlatform that implements the methods required to support sequence numbers: buildSelectSequenceCall()
and buildUpdateSequenceCall()
. These methods build and return the XMLDataCalls that allow TopLink to use sequence numbers that are maintained in XML documents.
The root element name for these XML documents and the names of the elements used to hold the sequence name and sequence counter can be set by your application via the XMLFileLogin.
XMLFileLogin is a subclass of SDKLogin that allows for the configuration of the XMLFileAccessor and XMLPlatform. The XMLFileLogin is used to configure the following settings:
login.setBaseDirectoryName("C:\\Employee Database");
login.setFileExtension(".xml");
login.setCreatesDirectoriesAsNeeded(true);
login.setSequenceRootElementName("sequence");
login.setSequenceNameElementName("name");
login.setSequenceCounterElementName("count");
XMLSchemaManager is a subclass of SDKSchemaManager that provides support for building the XML-based sequences required by your TopLink DatabaseSession. After you have built your TopLink Project and used it to create a DatabaseSession, you can log in and create the required sequences with the XMLSchemaManager.
DatabaseSession session = project.createDatabaseSession();
session.login();
SchemaManager manager = new XMLSchemaManager(session);
manager.createSequences();
XMLAccessor is an interface that extends the oracle.toplink.internal.databaseaccess.Accessor
interface and, by default, is used by the XMLCalls to access the appropriate Stream for a given XML document.
You can provide your own implementation of this interface if you want TopLink to read and write your XML documents from and to something other than the native file system. If, for example, your XML documents are accessed via a messaging service such as the Java Message Service (JMS), you could develop an implementation of XMLAccessor that translates the method calls into the appropriate invocations of the messaging service, whether reading, writing, or deleting an XML document. Once you have developed your custom Accessor, you can configure an XMLLogin to use it.
XMLLogin login = new XMLLogin();
login.setAccessorClass(XMLJMSAccessor.class);
login.setUserName("user");
login.setPassword("password");
// etc.
XMLTranslator is an interface that is used by the XMLCalls to convert XML documents to TopLink DatabaseRows and vice versa. Each XMLCall has its own XMLTranslator. By default, this is an instance of DefaultXMLTranslator. This can be overridden by your own custom implementation of XMLTranslator. The protocol defined by XMLTranslator is very simple:
read(java.io.Reader)
takes a Reader that streams over an XML document, converts that document into a DatabaseRow, and returns that DatabaseRow.
write(java.io.Writer, DatabaseRow)
takes a DatabaseRow and converts it into an XML document and writes that document out on the Writer.
The default XMLTranslator used by the XMLCalls, DefaultXMLTranslator, performs a fairly straightforward set of translations to convert a DatabaseRow into an XML document and vice versa. Here is a summary of the translations, expressed in terms of converting a DatabaseRow into an XML document (the reverse translations simply invert these steps):
<?xml version="1.0"?>
<employee>
    <!-- field values will go here -->
</employee>
<?xml version="1.0"?>
<employee>
    <id>1</id>
    <firstName>Grace</firstName>
    <lastName>Hopper</lastName>
</employee>
<managedEmployees null="true"/>
<?xml version="1.0"?>
<employee>
    <id>1</id>
    <firstName>Grace</firstName>
    <lastName>Hopper</lastName>
    <period>
        <employmentPeriod>
            <startDate>1943-01-01</startDate>
            <endDate>1992-01-01</endDate>
        </employmentPeriod>
    </period>
</employee>
<?xml version="1.0"?>
<employee>
    <id>1</id>
    <firstName>Grace</firstName>
    <lastName>Hopper</lastName>
    <responsibilities>
        <responsibility>find bugs</responsibility>
        <responsibility>develop compilers</responsibility>
    </responsibilities>
</employee>
SDKObjectCollectionMapping
<?xml version="1.0"?>
<employee>
   <id>1</id>
   <firstName>Grace</firstName>
   <lastName>Hopper</lastName>
   <phoneNumbers>
      <phone>
         <areaCode>888</areaCode>
         <number>555-1212</number>
         <type>work</type>
      </phone>
      <phone>
         <areaCode>800</areaCode>
         <number>555-1212</number>
         <type>home</type>
      </phone>
   </phoneNumbers>
</employee>
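The write direction of these translations can also be illustrated in plain Java. The sketch below is not TopLink code; it uses a LinkedHashMap as a stand-in for a DatabaseRow and shows how field names become element names, field values become element content, and a null field value becomes an empty element flagged with null="true" (as in the managedEmployees example above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: a LinkedHashMap stands in for a TopLink DatabaseRow.
public class RowToXmlSketch {

    // Convert a "row" to an XML document whose root element is the table name.
    public static String toXml(String rootElement, Map<String, Object> row) {
        StringBuilder xml = new StringBuilder("<?xml version=\"1.0\"?>\n");
        xml.append('<').append(rootElement).append(">\n");
        for (Map.Entry<String, Object> field : row.entrySet()) {
            if (field.getValue() == null) {
                // A null field value becomes an empty element flagged null="true".
                xml.append("   <").append(field.getKey()).append(" null=\"true\"/>\n");
            } else {
                xml.append("   <").append(field.getKey()).append('>')
                   .append(field.getValue())
                   .append("</").append(field.getKey()).append(">\n");
            }
        }
        xml.append("</").append(rootElement).append(">\n");
        return xml.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("id", 1);
        row.put("firstName", "Grace");
        row.put("lastName", "Hopper");
        row.put("managedEmployees", null);
        System.out.println(toXml("employee", row));
    }
}
```

Nested elements for aggregate and collection mappings would require the same treatment applied recursively; the flat case shown here covers the direct field translations.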
The DefaultXMLTranslator delegates the actual translating to two other classes:
The DatabaseRowToXMLTranslator performs the translations previously mentioned, building an XML document from a DatabaseRow and writing it onto a Stream.
The XMLToDatabaseRowTranslator performs the reverse of the translations previously described, reading the XML document from a Stream and building a DatabaseRow. To accomplish this conversion, the XMLToDatabaseRowTranslator uses the Xerces XML parser to parse the XML document.
Because the Xerces parser is used, the parser JAR files must be made available to TopLink; for example:
DatabaseLogin.setXMLParserJARFileNames(new String[] {"xerces.jar", "toplinksdkxerces.jar"});
The XML Zip file extension is an enhancement to the XML implementation of the SDK. This extension adds the flexibility of maintaining the XML data store in a group of archive files rather than in the directory/file structure of the standard XML data store. The format is very similar to that of the standard XML data store; however, the directories, which essentially represent tables, are replaced with archive files whose contents are the XML documents.
Using the XML Zip file extension is straightforward; in most situations it requires the addition of only one line of code. Typically, you need only configure your XMLLogin to use a different Accessor:
XMLLogin login = new XMLLogin();
login.setAccessorClass(XMLZipFileAccessor.class);
There is one other difference that you may encounter if you are configuring XMLCalls to access files directly. To access an XML document within an archive file, the call needs to know both the archive file location and the name of the XML document entry within the archive. Therefore, the setFileName() message sent to an XMLCall needs to include both the archive file and the XML document entry name:
XMLReadCall call = new XMLReadCall();
call.setFileName("C:/Employee DataStore/employee.zip", "1.xml");
Only two classes make up the Zip file extension; both are in the package oracle.toplink.xml.zip.
The XMLZipFileAccessor extends the XMLFileAccessor and performs essentially the same function as its standard XML package counterpart, except that it uses the XMLZipFileStreamPolicy rather than the XMLFileStreamPolicy. It adds no functionality of its own; it simply subclasses XMLFileAccessor so that it can be substituted in the XMLLogin.
The XMLZipFileStreamPolicy is the most significant change from the standard XML package. It handles the XML archive files, returning streams for reading and writing individual archive entries. It provides no additional functionality over its standard XML package counterpart, the XMLFileStreamPolicy; it transparently provides the same behavior while handling the added complication of obtaining read/write streams from within an archive file.
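The mechanics such a policy must hide can be illustrated with the standard java.util.zip API. The sketch below is not TopLink code; it simply shows how a stream is obtained for a single named entry within an archive file, which is the extra step an archive-based stream policy performs relative to reading a plain file:

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class ZipEntryStreamSketch {

    // Write one entry (an XML document) into an archive file.
    public static void writeEntry(File zip, String entryName, String content)
            throws Exception {
        try (ZipOutputStream out =
                new ZipOutputStream(Files.newOutputStream(zip.toPath()))) {
            out.putNextEntry(new ZipEntry(entryName));
            out.write(content.getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
        }
    }

    // Open a read stream on a single named entry within the archive.
    public static String readEntry(File zip, String entryName) throws Exception {
        try (ZipFile archive = new ZipFile(zip);
             InputStream in = archive.getInputStream(archive.getEntry(entryName))) {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            in.transferTo(bytes);
            return bytes.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws Exception {
        File zip = File.createTempFile("employee", ".zip");
        writeEntry(zip, "1.xml", "<employee><id>1</id></employee>");
        System.out.println(readEntry(zip, "1.xml"));
        zip.delete();
    }
}
```

This corresponds to the setFileName("C:/Employee DataStore/employee.zip", "1.xml") usage shown earlier: the archive file locates the "table," and the entry name locates the individual XML document within it.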
Copyright © 2002 Oracle Corporation. All Rights Reserved.