Centralizes control with loosely coupled handlers
Filters provide a central place for handling processing across multiple requests, as does a controller. Filters are better suited to massaging requests and responses for ultimate handling by a target resource, such as a controller. Additionally, a controller often ties together the management of numerous unrelated common services, such as authentication, logging, encryption, and so forth. Filtering allows for much more loosely coupled handlers, which can be combined in various permutations.
Filters promote cleaner application partitioning and encourage reuse. You can transparently add or remove these pluggable interceptors from existing code, and due to their standard interface, they work in any permutations and are reusable for varying presentations.
Declarative and flexible configuration
Numerous services are combined in varying permutations without a single recompile of the core code base.
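The loose coupling described above can be illustrated with a minimal, self-contained filter chain. All types here are hypothetical; a real J2EE implementation would use the Servlet API's Filter and FilterChain interfaces.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of loosely coupled, pluggable filters. All types here are
// hypothetical; a real implementation would use the Servlet API.
public class FilterChainSketch {

    public interface Filter {
        void execute(StringBuilder request, FilterChain chain);
    }

    public static class FilterChain {
        private final List<Filter> filters = new ArrayList<>();
        private int position = 0;

        public FilterChain add(Filter filter) { filters.add(filter); return this; }

        // Each filter does its work and passes the request along the chain.
        public void execute(StringBuilder request) {
            if (position < filters.size()) {
                filters.get(position++).execute(request, this);
            }
        }
    }

    public static String process(String raw) {
        StringBuilder request = new StringBuilder(raw);
        new FilterChain()
            .add((req, chain) -> { req.append(" [authenticated]"); chain.execute(req); })
            .add((req, chain) -> { req.append(" [logged]"); chain.execute(req); })
            .execute(request);
        return request.toString();
    }
}
```

Because each filter only knows about the chain, filters can be added, removed, or reordered without touching the target resource or the other filters.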
Information sharing is inefficient
Sharing information between filters can be inefficient, since by definition each filter is loosely coupled. If large amounts of information must be shared between filters, then this approach might prove to be costly.
A controller provides a central place to handle control logic that is common across multiple requests. A controller is the initial access point of the request handling mechanism and delegates to an Application Controller to perform the underlying business processing and view generation functionality.
Centralizing control makes it easier to monitor control flow and provides a choke point for detecting illicit attempts to access the application. In addition, auditing a single entrance into the application requires fewer resources than distributing security checks across all pages.
Promotes cleaner application partitioning and encourages reuse, as common code moves into a controller or is managed/delegated to by a controller.
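As a rough sketch of this central access point (hypothetical types, not the Servlet API), a Front Controller can audit every request at a single choke point before dispatching to a command:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a Front Controller: a single entry point performs
// common processing (here, auditing) and then dispatches by command name.
public class FrontControllerSketch {

    public interface Command {
        String execute();
    }

    private final Map<String, Command> commands = new HashMap<>();
    private int auditedRequests = 0;

    public void register(String name, Command command) {
        commands.put(name, command);
    }

    public String handle(String name) {
        auditedRequests++;                       // the single choke point
        Command command = commands.get(name);
        return (command == null) ? "notFound" : command.execute();
    }

    public int auditedRequests() { return auditedRequests; }
}
```

Every request passes through handle(), so auditing, security checks, and other common services need to be implemented only once.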
Improves role separation
A controller promotes cleaner separation of team roles, since one role (software developer) can more easily maintain programming logic while another (web production) maintains markup for view generation.
Improves reusability and maintainability
Application components and subsystems are more generic and can be reused for various types of clients, since the application interfaces are not polluted with protocol-specific data types.
Using Context Objects helps remove dependencies on protocol-specific code that might tie a runtime environment to a container, such as a web server or an application server. Testing is easier when such dependencies are limited or removed, since automated testing tools, such as JUnit, can work directly with Context Objects.
Reduces constraints on evolution of interfaces
Interfaces that accept a Context Object, instead of the numerous objects that the Context Object encapsulates, are less tied to these specific details that might constrain later changes. This is important when developing frameworks, but is also valuable in general.
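A minimal sketch of such an interface, assuming a hypothetical RequestContext class: handlers coded against the context, rather than against HttpServletRequest, can be unit tested without a container.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a protocol-neutral Context Object (hypothetical class). Handlers
// accept this object instead of an HttpServletRequest, so they can be unit
// tested without a web container.
public class RequestContext {
    private final Map<String, String> attributes = new HashMap<>();

    public void put(String key, String value) { attributes.put(key, value); }

    public String get(String key) { return attributes.get(key); }

    // A handler coded against the context rather than the servlet API.
    public static String greet(RequestContext context) {
        return "Hello, " + context.get("user");
    }
}
```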
There is a modest performance hit, because state is transferred from one object to another. This reduction in performance is usually far outweighed by the benefits of improved reusability and maintainability of the application subcomponents.
Separating common action and view management code into its own set of classes makes the application more modular. This modularity might also ease testing, since aspects of the Application Controller functionality are not tied to a web container.
You can reuse the common, modular components.
Functionality can be added to the request handling mechanism in a predictable way, and independent of protocol-specific or network access code. Declarative flow control reduces coupling between code and navigation/flow control rules, allowing these rules to be modified without recompiling or modifying code.
Improves application partitioning, reuse, and maintainability
Separating HTML from processing logic, such as control logic, business logic, data access logic, and formatting logic, results in improved application partitioning.
In JSP, for example, try to minimize the amount of Java programming logic that is embedded within the page, and try to minimize the amount of HTML markup that is embedded within programming code. Failing to do either results in a cumbersome and unwieldy code base, especially in larger projects.
Programming logic that is extracted from JSP and encapsulated within helpers is reusable, reducing the duplication of embedded view code and easing maintenance.
Improves role separation
Using helpers to separate processing logic from views also reduces the potential dependencies that individuals fulfilling different roles might have on the same resources. If processing logic is embedded within a view, then a software developer is tasked with maintaining code that is embedded within HTML markup, while a web production team member must modify page layout and design components that are mingled with Java code. Neither individual is likely to be familiar with the implementation specifics of the other's work, raising the likelihood that accidental modifications will introduce bugs into the system.
As processing logic is extracted into separate helper components, testing individual pieces of code becomes much easier. Testing a piece of code that is embedded within a JSP is much more difficult than testing code that is encapsulated within a separate class.
Helper usage mirrors scriptlets
One important reason for extracting processing logic from a page is to reduce the implementation details that are embedded directly within the page. It is important to keep in mind, though, that simply using JavaBeans or custom tags within your JSP is not a panacea. The use of certain generic helpers only replaces the embedded Java code with references to helpers that, in effect, reproduce the same problem: the page exposes the implementation details rather than the intent of the code.
An example is the use of a conditional helper, such as a custom tag that models the conditional logic of an 'if' statement. Heavy usage of this sort of helper tag may simply mirror the scriptlet code that it is intended to replace, so the fragment continues to look like programming logic embedded within the page. Using helpers as scriptlets is a bad practice, although it is often done in an attempt to apply View Helper.
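By contrast, a helper that names the intent rather than mirroring the conditional might look like this sketch (the class name and threshold are hypothetical):

```java
// Sketch of an intent-revealing helper (hypothetical class and threshold).
// The view asks a question about intent instead of re-implementing the 'if'
// logic inline.
public class PremiumCustomerHelper {
    private final double totalPurchases;

    public PremiumCustomerHelper(double totalPurchases) {
        this.totalPurchases = totalPurchases;
    }

    // The view calls this intent-named method; the condition itself stays out
    // of the page.
    public boolean showPremiumOffers() {
        return totalPurchases > 1000.0;
    }
}
```

The page now reads as "show premium offers for premium customers" instead of exposing the comparison that decides the question.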
Improves modularity and reuse
The pattern promotes modular design. It is possible to reuse atomic portions of a template, such as a table of stock quotes, in numerous views, and to decorate these reused portions with different information. This pattern permits the table to be moved into its own module and simply included where necessary. This type of dynamic layout and composition reduces duplication, fosters reuse, and improves maintainability.
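A minimal sketch of this composition, with hypothetical view types: each fragment renders independently, and the composite simply assembles them in order.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a Composite View assembled from reusable fragments (hypothetical
// types). A fragment such as a stock-quote table can be included in many
// layouts without duplication.
public class CompositeViewSketch {

    public interface View {
        String render();
    }

    public static class CompositeView implements View {
        private final List<View> fragments = new ArrayList<>();

        public CompositeView add(View fragment) { fragments.add(fragment); return this; }

        public String render() {
            StringBuilder out = new StringBuilder();
            for (View fragment : fragments) {
                out.append(fragment.render());
            }
            return out.toString();
        }
    }
}
```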
Adds role-based or policy-based control
A Composite View might conditionally include view template fragments based on runtime decisions, such as user role or security policy.
Managing changes to portions of a template is much more efficient when the template is not hardcoded directly into the view markup. When the template content is kept separate from the view, you can modify modular portions of template content independently of the template layout. Additionally, these changes are available to the client immediately, depending on the implementation strategy. You can more easily manage modifications to page layout as well, since changes are centralized.
Aggregating atomic pieces of the display to create a single view introduces the potential for display errors, since subviews are page fragments rather than complete views. You must account for tag usage quite strictly in order to create valid composite views, and this constraint can become a maintainability issue.
Generating a display that includes numerous subviews might slow performance. Runtime inclusion of subviews will result in a delay each time the page is served to the client. In environments that have specific response time requirements, such performance slowdowns, though typically extremely minimal, might not be acceptable. An alternative is to move the subview inclusion to translation time, though this limits the subview to changing only when the page is retranslated.
Service to Worker
Centralizes control and improves modularity, reusability, and maintainability
Centralizing control and request-handling logic improves the system's modularity and reusability. Common request processing code can be reused, reducing the sort of duplication that occurs if processing logic is embedded within views. Less duplication means improved maintainability, since changes are made in a single location.
Improves role separation
Centralizing control and request-handling logic separates it from view creation code and allows for a cleaner separation of team roles. Software developers can focus on maintaining programming logic while page authors can focus on the view.
Leverages frameworks and libraries
Frameworks and libraries realize and support specific patterns. The Dispatcher View approach is supported in standard and custom libraries that provide view adapters and transformers and, for limited use, data access tags. An example of one standard library is JSTL.
Introduces potential for poor separation of the view from the model and control logic
Since business processing is managed by the view, the Dispatcher View approach is inappropriate for handling requests that rely upon heavy business processing or data access. Embedding processing logic of any form within a view should be minimized. The overriding goal is to separate control and business logic from the view and to localize disparate logic.
Separates processing logic from view and improves reusability
View Helpers adapt and convert the presentation model for the view. Processing logic that might otherwise be embedded within the view is extracted into reusable helpers, exposing less of the code's implementation details and more of its intent.
Reduces coupling, improves maintainability
The Business Delegate reduces coupling between the presentation tier and the business tier by hiding all business-tier implementation details. Managing changes is easier because they are centralized in the Business Delegate.
Translates business service exceptions
The Business Delegate translates network or infrastructure-related exceptions into business exceptions, shielding clients from the knowledge of the underlying implementation specifics.
When a Business Delegate encounters a business service failure, the delegate can implement automatic recovery features without exposing the problem to the client. If the recovery succeeds, the client doesn't need to know about the failure. If the recovery attempt fails, then the Business Delegate needs to inform the client of the failure. Additionally, the Business Delegate methods can be synchronized, if necessary.
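A sketch of this translation and recovery behavior, with all types hypothetical: the delegate retries once, and only if recovery fails does it surface a business-level exception.

```java
// Sketch of a Business Delegate that retries once and translates a low-level
// failure into a business exception (all types here are hypothetical).
public class OrderDelegate {

    public interface OrderService {
        String placeOrder(String item) throws Exception;
    }

    public static class OrderException extends RuntimeException {
        public OrderException(String message) { super(message); }
    }

    private final OrderService service;

    public OrderDelegate(OrderService service) { this.service = service; }

    public String placeOrder(String item) {
        for (int attempt = 0; attempt < 2; attempt++) {   // simple automatic recovery
            try {
                return service.placeOrder(item);
            } catch (Exception networkOrRemoteFailure) {
                // swallow and retry once; the client never sees this failure
            }
        }
        // Recovery failed: report a business-level exception, not the
        // infrastructure-specific one.
        throw new OrderException("Order service unavailable for item " + item);
    }
}
```

The client deals only in OrderException and never sees remote or naming exceptions from the underlying infrastructure.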
Exposes a simpler, uniform interface to the business tier
The Business Delegate is implemented as a simple Java object, making it easier for application developers to use business-tier components without dealing with the complexities of the business-service implementations.
The Business Delegate can cache information on behalf of the presentation-tier components to improve performance for common service requests.
Introduces an additional layer
The Business Delegate adds a layer that might be seen as increasing complexity and decreasing flexibility. However, the benefits of the pattern outweigh such drawbacks.
Location transparency is a benefit of this pattern, but it can lead to problems if you don't keep in mind where the Business Delegate resides. A Business Delegate is a client-side proxy to a remote service. Even though a Business Delegate is implemented as a local POJO, when you call a method on a Business Delegate, the Business Delegate typically has to make a call across the network to the underlying business service to fulfill this request. Therefore, try to keep calls to the Business Delegate to a minimum to prevent excess network traffic.
The Service Locator encapsulates the complexity of the service lookup and creation process and keeps it hidden from the client.
Provides uniform service access to clients
The Service Locator provides a useful and precise interface that all clients can use. The interface ensures that all types of clients in the application uniformly access business objects, in terms of lookup and creation. This uniformity reduces development and maintenance overhead.
Facilitates adding EJB business components
Because clients of enterprise beans are not aware of the EJB Home objects, you can add new EJB Home objects for enterprise beans developed and deployed at a later time without impacting the clients. JMS clients are not directly aware of the JMS connection factories, so you can add new connection factories without impacting the clients.
Improves network performance
The clients are not involved in lookup and object creation. Because the Service Locator performs this work, it can aggregate the network calls required to look up and create business objects.
Improves client performance by caching
The Service Locator can cache the initial context objects and references to the factory objects (EJBHome, JMS connection factories). Also, when accessing web services, the Service Locator can cache WSDL definitions and endpoints.
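The lookup-and-cache behavior can be sketched as follows. This is a simplified, hypothetical locator; a real one would cache InitialContext and EJBHome references resolved through JNDI.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Simplified, hypothetical Service Locator: it hides how a service reference
// is resolved and caches the result of the expensive lookup.
public class ServiceLocatorSketch {
    private final Map<String, Object> cache = new HashMap<>();
    private int expensiveLookups = 0;   // counts real resolutions, for illustration

    public Object getService(String name, Supplier<Object> lookup) {
        return cache.computeIfAbsent(name, key -> {
            expensiveLookups++;         // a real locator would do a JNDI lookup here
            return lookup.get();
        });
    }

    public int expensiveLookups() { return expensiveLookups; }
}
```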
Introduces a layer that provides services to remote clients
Session Façades introduce a layer between clients and the business tier to provide coarse-grained remote services. For some applications, this might be unnecessary overhead, especially if the business tier is implemented without using EJB components. However, Session Façades have almost become a necessity in J2EE applications because they provide remote services and leverage the benefits of an EJB container, such as transactions, security, and lifecycle management.
Exposes a uniform coarse-grained interface
A Session Façade encapsulates the complexity of the underlying business component interactions and presents the client with a simpler coarse-grained service-layer interface to the system that is easy to understand and use. In addition, by providing a Business Delegate for each Session Façade, you can make it easier for client-side developers to leverage the power of Session Façades.
Reduces coupling between the tiers
Using a Session Façade decouples the business components from the clients, and reduces tight coupling and dependency between the presentation and business tiers. You can additionally implement Application Services to encapsulate the complex business logic that acts on several Business Objects. Instead of implementing the business logic, the Session Façades can delegate the business logic to Application Services to implement.
Promotes layering, increases flexibility and maintainability
When using Session Façades with Application Services, you increase the flexibility of the system by layering and centralizing interactions. This provides a greater ability to cope with changes due to reduced coupling. Although changes to the business logic might require changes in the Application Services or even the Session Façades, the layering makes such changes more manageable.
Using Application Services, you reduce the complexity of Session Façades. Using a Business Delegate to access Session Façades reduces the complexity of client code. Together, these measures make the system more maintainable and flexible.
Improves performance, reduces fine-grained remote methods
The Session Façade can also improve performance because it reduces the number of remote network invocations by aggregating various fine-grained interactions into a coarse-grained method. Furthermore, the Session Façades are typically located in the same process space as the participating business components, enabling faster communication between the two.
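As an illustration of this aggregation, here is a sketch of a facade whose single coarse-grained method replaces two fine-grained remote interactions (all service names are hypothetical):

```java
// Sketch of a Session Façade whose single coarse-grained method aggregates
// two fine-grained business-component interactions (hypothetical types).
public class OrderFacade {

    public interface InventoryService {
        boolean reserve(String item);
    }

    public interface BillingService {
        boolean charge(String account, double amount);
    }

    private final InventoryService inventory;
    private final BillingService billing;

    public OrderFacade(InventoryService inventory, BillingService billing) {
        this.inventory = inventory;
        this.billing = billing;
    }

    // One call from the client replaces two fine-grained remote interactions.
    public String placeOrder(String item, String account, double amount) {
        if (!inventory.reserve(item)) return "OUT_OF_STOCK";
        if (!billing.charge(account, amount)) return "PAYMENT_FAILED";
        return "CONFIRMED";
    }
}
```

The client makes one network call; the inventory and billing interactions happen in the facade's own process space.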
Centralizes security management
You can manage security policies for the application at the Session Façade level, since this is the tier presented to the clients. Because of the Session Façade's coarse-grained access, it is easier and more manageable to define security policies at this level rather than implementing security policies for each participating fine-grained business component.
Centralizes transaction control
The Session Façade represents the coarse-grained remote access point to business-tier services, so centralizing and applying transaction management at the Session Façade layer is easier. The Session Façade offers a central place for managing and defining transaction control in a coarse-grained fashion. This is simpler than managing transactions in finer-grained business components or at the client side.
Exposes fewer remote interfaces to clients
The Session Façade presents a coarse-grained access mechanism to the business components, which greatly reduces the number of business components exposed to the client. This reduces the scope for application performance degradation because the number of interactions between the clients and the Session Façade is lower than the number of direct interactions between the client and the individual business components.
Centralizes reusable business and workflow logic
Application Services create a layer of services encapsulating the Business Objects layer. This creates a centralized layer that encapsulates common business logic acting upon multiple Business Objects.
Improves reusability of business logic
Application Services create a set of reusable components that can be reused across various use case implementations. Application Services encapsulate inter-Business Object operations.
Avoids duplication of code
By creating a centralized reusable layer of business logic, the Application Services avoid duplication of code in the clients, such as facades and helpers, and in other Application Services.
Simplifies facade implementations
Business logic is moved away from the service facades, whether they are implemented as Session Façades or POJO facades. The facades become simpler because they are only responsible for aggregating Application Service interactions and delegating to one or more Application Services to fulfill the requested service.
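As a sketch of this delegation, logic that spans multiple Business Objects lives in an Application Service rather than in the facade or the clients (all types here are hypothetical):

```java
// Sketch of an Application Service centralizing logic that spans multiple
// Business Objects; a facade would simply delegate to it (hypothetical types).
public class FundsTransferService {

    public static class Account {
        public double balance;
        public Account(double balance) { this.balance = balance; }
    }

    // Inter-Business-Object logic lives here, not in the facade or the client.
    public boolean transfer(Account from, Account to, double amount) {
        if (from.balance < amount) return false;
        from.balance -= amount;
        to.balance += amount;
        return true;
    }
}
```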
Introduces additional layer in the business tier
The Application Service creates an additional layer in the business tier, which you might consider an unnecessary overhead for some applications. However, the additional layer provides for a powerful abstraction in the application to encapsulate reusable common business logic.
Promotes object-oriented approach to the business model implementation
Business Objects create a logical layer of responsibility that reflects object model implementation of the business model. For OO multi-tier applications, this is a natural approach to implementing the business tier using objects.
Centralizes business behavior and state, and promotes reuse
Business objects provide a centralized and modular approach to multi-tier architecture by abstracting and implementing the business logic, rules and behavior in a separate set of components. Such centralization provides for and promotes reuse of the abstractions in the business tier across use cases and different kinds of clients.
Avoids duplication of and improves maintainability of code
Due to centralization of business state and behavior, clients avoid embedding business logic and thereby avoid duplication of code. Using Business Objects improves the maintainability of the system as a whole because they promote reusability and centralization of code.
Separates persistence logic from business logic
Persistence mechanism can be hidden and separated from the Business Objects. You can use various persistence strategies such as JDO, custom JDBC, object-relational mapping tools, or entity beans to facilitate persistence of Business Objects.
Promotes service-oriented architecture
Business objects act as a centralized object model for all clients in an application. You can build various services on top of Business Objects, and these services can also use other services such as persistence, business rules, integration, and so forth. This separates concerns in a multi-tiered application and facilitates a service-oriented architecture.
POJO implementations can induce, and are susceptible to, stale data
When you implement Business Objects as POJOs in a distributed multi-tier application, a Business Object might end up instantiated in multiple VMs or containers. The application is responsible for ensuring that these multiple instances maintain consistency and integrity of the business data. This might require synchronization of state among the instances, and between the instances and the data store, to guarantee the integrity of the business data and avoid stale data. On the other hand, when you implement the Business Objects as entity beans, the container handles the creation, synchronization, and other lifecycle management of all instances so you don't need to address this issue of data integrity.
Adds extra layer of indirection
In some applications, such strict separation of concerns might be considered a formality rather than a necessity. This is especially true for applications with a trivial business model and business logic, or if the data model is a sufficient representation of the business model and letting presentation components access the data in the resource tier directly using Data Access Objects is simpler. However, many designers might start out with the assumption that the data model is sufficient and then later realize that such an assumption was premature due to insufficient analysis. Fixing this problem later in the development stage can be expensive.
Can result in bloated objects
Certain use cases may only require the intrinsic behavior encapsulated within a Business Object. A Business Object tends to get bloated as more and more use case-specific behavior is implemented in it. To avoid bloating the Business Objects, implement any extrinsic business behavior specific to a particular use case or client and business behavior that acts on multiple Business Objects in the form of an Application Service, rather than including it in the Business Object.
When parent and dependent Business Objects are implemented using Composite Entity with POJO dependent objects, you can reduce the number of fine-grained entity beans. When using EJB 2.x, you might want to implement the dependent objects as local entity beans and leverage other features, such as CMR and CMP. This improves the maintainability of the application.
Improves network performance
Aggregation of the parent and dependent Business Objects into fewer coarse-grained entity beans improves overall performance for EJB 1.1. This reduces network overhead because it eliminates inter-entity bean communication. For EJB 2.x, implementing Business Objects as Composite Entity using local entity beans has the same benefit because all entity bean communications are local to the client. However, note that co-location is still less efficient than working with POJO Business Objects due to container services for lifecycle, security, and transaction management for entity beans.
Reduces database schema dependency
Composite Entity provides an object view of the data in the database. The database schema is hidden from the clients, since the mapping of the entity bean to the schema is internal to the Composite Entity. Changes to the database schema might require changes to the Composite Entity beans. However, the clients are not affected since the Composite Entity beans do not expose the schema to the external world.
Increases object granularity
With a Composite Entity, the client typically looks up the parent entity bean instead of locating numerous fine-grained dependent entity beans. The parent entity bean acts as a Facade [GoF] to the dependent objects and hides the complexity of dependent objects by exposing a simpler interface. Composite Entity avoids fine-grained method invocations on the dependent objects, decreasing the network overhead.
Facilitates composite transfer object creation
The Composite Entity can create a composite Transfer Object that contains all the data from the entity bean and its dependent objects, and returns the Transfer Object to the client in a single method call. This reduces the number of remote calls between the clients and the Composite Entity.
Reduces network traffic
A Transfer Object carries a set of data values from a remote object to the client in one remote method call, thereby reducing the number of remote calls. The reduced chattiness of the application results in better network performance.
Simplifies remote object and remote interface
The remote object provides coarse-grained getData() and setData() methods to get and set a transfer object carrying a set of values. This eliminates fine-grained get and set methods on the remote object.
Transfers more data in fewer remote calls
Instead of multiple client calls over the network to the remote object to get attribute values, you can provide a single method call that returns aggregated data. When considering this pattern, you must consider the trade-off of fewer network calls versus transmitting more data per call.
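A minimal sketch of a Transfer Object and the coarse-grained accessors that would live on the remote object (hypothetical classes):

```java
import java.io.Serializable;

// Sketch of a Transfer Object and the coarse-grained accessors on the remote
// object (hypothetical classes). One getData() call carries all values that
// would otherwise require several fine-grained remote getters.
public class CustomerTO implements Serializable {
    public final String name;
    public final String email;

    public CustomerTO(String name, String email) {
        this.name = name;
        this.email = email;
    }

    // Stand-in for the remote business object.
    public static class CustomerBean {
        private String name = "Ada";
        private String email = "ada@example.com";

        // Coarse-grained accessor: all values in one remote call.
        public CustomerTO getData() { return new CustomerTO(name, email); }

        public void setData(CustomerTO data) {
            this.name = data.name;
            this.email = data.email;
        }
    }
}
```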
Reduces code duplication
You can use the Entity Inherits Transfer Object strategy to reduce or eliminate the duplication of code between the entity and its transfer object.
Introduces stale transfer objects
Using Transfer Objects might introduce stale data in different parts of the application. However, this is a common side effect whenever you disconnect data from its remote source, because the remote objects typically do not keep track of all the clients that obtained the data, to propagate changes.
Increases complexity due to synchronization and version control
When using the Updatable Transfer Objects strategy, you must design for concurrent access. This means that the design might get more complex due to the synchronization and version control mechanisms.
Transfer Object Assembler
Separates business logic, simplifies client logic
When the client includes logic to manage the interactions with distributed components, clearly separating business logic from the client tier becomes difficult. The Transfer Object Assembler contains the business logic to maintain the object relationships and to construct the composite transfer object representing the model. The client doesn't need to know how to construct the model or know about the different components that provide data to assemble the model.
Reduces coupling between clients and the application model
The Transfer Object Assembler hides the complexity of the construction of model data from the clients and reduces coupling between clients and the model. With this loose coupling, if the model changes, only the Transfer Object Assembler requires a corresponding change, and the clients are insulated from the change.
Improves network performance
The Transfer Object Assembler reduces the number of remote calls required to obtain an application model from the business tier, since typically it constructs the application model in a single method invocation. However, the composite Transfer Object might contain a large amount of data. This means that, though using the Transfer Object Assembler reduces the number of network calls, the amount of data transported in a single call increases. Consider this trade-off when you use this pattern.
Improves client performance
The server-side Transfer Object Assembler constructs the model as a composite Transfer Object without using any client resources. The client does not spend any resources in assembling the model.
Can introduce stale data
The Transfer Object Assembler constructs an application model as a composite Transfer Object on demand, as a snapshot of the current state of the business model. Once the client obtains the composite Transfer Object, it is local to the client and is not network aware. Subsequent changes made to the business components are not propagated to the application model. Therefore, the application model can become stale after it is obtained.
Value List Handler
Provides efficient alternative to EJB finders
Value List Handler provides an alternative way to perform searches, and a way to avoid using EJB finders, which are inefficient for large searches.
Caches search results
The result set needs to be cached when a client must display a subset of a large result set. The result set might be a collection of transfer objects that can be iterated over when using the DAO Transfer Object Collection strategy, or the results might be a special List implementation that encapsulates a JDBC RowSet when you use the DAO RowSet Wrapper List strategy.
Provides flexible search capabilities
You can implement a Value List Handler to be flexible by providing ad-hoc search facilities, constructing runtime search arguments using template methods, and so on. In other words, a Value List Handler developer can implement intelligent searching and caching algorithms without being limited by the EJB finder methods.
Improves network performance
Network performance improves because only a requested subset of the results, rather than the entire result set, is sent to the client on demand. If the client/user displays the first few results and then abandons the query, the network bandwidth is not wasted, since the data is cached on the server side and never sent to the client.
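The server-side caching and on-demand paging can be sketched as follows (a hypothetical, simplified class):

```java
import java.util.List;

// Sketch of a Value List Handler that caches the full result list on the
// server and returns only the requested page to the client (hypothetical,
// simplified class).
public class ValueListHandlerSketch {
    private final List<String> cachedResults;   // e.g. transfer objects from a DAO
    private int index = 0;

    public ValueListHandlerSketch(List<String> results) {
        this.cachedResults = results;
    }

    // Only this subset crosses the network; the rest stays cached server side.
    public List<String> getNext(int pageSize) {
        int end = Math.min(index + pageSize, cachedResults.size());
        List<String> page = cachedResults.subList(index, end);
        index = end;
        return page;
    }
}
```

If the client abandons the query after the first page, the remaining results never leave the server.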
Allows deferring entity bean transactions
Caching results on the server side and minimizing finder overhead might improve transaction management. For example, a query to display a list of books uses a Value List Handler to obtain the list without using the Book entity bean's finder methods. Later, when the user wants to modify a particular book, the client invokes a Session Façade that locates the required Book entity bean instance with the appropriate transaction semantics for that use case.
Promotes layering and separation of concerns
The Value List Handler encapsulates list management behavior in the business tier and appropriately uses Data Access Object in the integration tier. This promotes layering in the application, and keeps business logic in the business-tier components and data access logic in Data Access Objects.
Creating a large list of Transfer Objects can be expensive
When the Data Access Object executes a query and creates a collection of Transfer Objects, it can consume significant resources if the query returns a large number of matching records. Instead of creating all the Transfer Object instances, limit the number of rows the DAO retrieves by specifying the maximum number of results the DAO fetches from the database. You might also want to use the DAO Cached RowSet and RowSet Wrapper List strategies.
Data Access Object
Clients can leverage the encapsulation of data sources within the Data Access Objects: the location and implementation of the persistent storage mechanisms remain transparent to the clients.
Provides object-oriented view and encapsulates database schemas
The clients use transfer objects or data cursor objects (RowSet Wrapper List strategy) to exchange data with the Data Access Objects. Instead of depending on low-level details of database schema implementations, such as ResultSets and RowSets, where the clients must be aware of table structures, column names, and so on, the clients handle data in an object-oriented manner using the transfer objects and data cursors.
Enables easier migration
A layer of DAOs makes it easier for an application to migrate to a different database implementation. The clients have no knowledge of the underlying data store implementation. Thus, the migration involves changes only to the DAO layer.
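A sketch of why migration stays confined to the DAO layer, under assumed names (OrderDao, OracleOrderDao, PostgresOrderDao): clients program against the interface, so switching databases means supplying a different implementation, not changing client code.

```java
// DAO interface the clients depend on; implementations are illustrative.
interface OrderDao {
    String vendor(); // stand-in for real finder/CRUD methods
}

// Migrating databases means swapping this class, not touching clients.
class OracleOrderDao implements OrderDao {
    public String vendor() { return "oracle"; }
}

class PostgresOrderDao implements OrderDao {
    public String vendor() { return "postgres"; }
}

// Client code: unaware of which data store backs the DAO.
class OrderService {
    private final OrderDao dao;
    OrderService(OrderDao dao) { this.dao = dao; }
    String describe() { return "orders stored in " + dao.vendor(); }
}
```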
Reduces code complexity in clients
Since the DAOs encapsulate all the code necessary to interact with the persistent storage, the clients can use the simpler API exposed by the data access layer. This reduces the complexity of the data access client code and improves maintainability and development productivity.
Organizes all data access code into a separate layer
Data access objects organize the implementation of the data access code in a separate layer, isolating the rest of the application from the persistent store and external data sources. Because all data access operations are delegated to the DAOs, the rest of the application is insulated from the data access implementation. This centralization makes the application easier to maintain and manage.
Adds extra layer
The DAOs create an additional layer of objects, between the data client and the data source, that must be designed and implemented to leverage the benefits of this pattern. While this layer might seem like extra development and run-time overhead, it is typically necessary to decouple the data access implementation from the rest of the application.
Needs class hierarchy design
When you use a factory strategy, the hierarchy of concrete factories and the hierarchy of concrete products (DAOs) produced by the factories need to be designed and implemented. Weigh this additional effort against your need for the extra flexibility, because it increases the complexity of the design. Use the DAO Factory Method Strategy if that meets your needs, and use the DAO Abstract Factory Strategy only if absolutely required.
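A compact sketch of the two hierarchies the DAO Abstract Factory Strategy requires, with assumed names (DaoFactory, CloudscapeDaoFactory, XmlDaoFactory, UserDao): each concrete factory produces the matching family of concrete DAOs, and a static factory method selects which concrete factory to use.

```java
// Product hierarchy: one DAO interface, one concrete DAO per data source.
interface UserDao {
    String source(); // stand-in for real finder/CRUD methods
}

// Factory hierarchy: one concrete factory per data source.
abstract class DaoFactory {
    static final int CLOUDSCAPE = 1;
    static final int XML = 2;

    abstract UserDao getUserDao();

    // Selects the concrete factory; clients never name a concrete DAO.
    static DaoFactory getDaoFactory(int whichFactory) {
        switch (whichFactory) {
            case CLOUDSCAPE: return new CloudscapeDaoFactory();
            case XML:        return new XmlDaoFactory();
            default: throw new IllegalArgumentException("unknown factory");
        }
    }
}

class CloudscapeDaoFactory extends DaoFactory {
    UserDao getUserDao() { return () -> "cloudscape"; }
}

class XmlDaoFactory extends DaoFactory {
    UserDao getUserDao() { return () -> "xml"; }
}
```

Both hierarchies grow together: adding a data source means a new concrete factory plus a new concrete DAO for every product, which is the design complexity the text warns about.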
Introduces complexity to enable object-oriented design
While the RowSet Wrapper List strategy encapsulates the data access layer dependencies and JDBC APIs, and exposes an object-oriented view of the results data, it introduces considerable complexity in your implementation. You need to decide whether its benefits outweigh the drawbacks of using the JDBC RowSet API in the Cached RowSet strategy, or the performance drawback of using the Transfer Object Collection strategy.
Integrates JMS into enterprise applications
The Service Activator enables you to leverage the power of JMS in POJO enterprise applications using the POJO Service Activator strategy, and in EJB enterprise applications using the MDB Service Activator strategy. Regardless of what platform you are running your application on, as long as you have a JMS runtime implementation, you can implement and use Service Activator to provide asynchronous processing capabilities in your application.
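A sketch of the POJO Service Activator strategy: the activator receives the asynchronous request and invokes the synchronous business service on the caller's behalf. In a real application the activator would implement javax.jms.MessageListener and be registered on a JMS consumer; here minimal stand-in types keep the sketch self-contained, and ReportService and its method are illustrative.

```java
// Stand-ins for javax.jms types so the sketch is self-contained.
interface Message { String getText(); }
interface MessageListener { void onMessage(Message m); }

// The business service whose invocation becomes asynchronous.
class ReportService {
    String runReport(String name) { return "report:" + name; }
}

// Service Activator: listens for the async request, parses it, and
// delegates to the (synchronous) business service.
class ReportServiceActivator implements MessageListener {
    private final ReportService service = new ReportService();
    String lastResult; // captured here for illustration only

    public void onMessage(Message m) {
        // Extract the request from the message, then invoke the service.
        lastResult = service.runReport(m.getText());
    }
}
```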
Provides asynchronous processing for any business-tier component
Using the Service Activator pattern lets you provide asynchronous invocation on all types of enterprise beans, including stateless session beans, stateful session beans, and entity beans. The Service Activator acts as an intermediary between the client and the business service, to enable asynchronous invocation of any component that provides the business service implementation.
Enables standalone JMS listener
The POJO Service Activator can be run as a standalone listener without using any container support. However, in a mission-critical application, Service Activator needs to be monitored to ensure availability. The additional management and maintenance of this process can add to application support overhead. An MDB Service Activator might be a better alternative because it will be managed and monitored by the application server.
Creating a custom persistence framework is a complex task
Implementing Domain Store and all the features required for transparent persistence is not a simple task, due to the nature of the problem and the complex interactions between the framework's many participants. Consider implementing your own transparent persistence framework only after exhausting all other options.
Multi-layer object tree loading and storing requires optimization techniques
Business Object hierarchies and interrelations can be quite complex. When persisting a Business Object and its dependents, you might want to persist only those portions of the hierarchy that have been modified. Similarly, when loading a Business Object hierarchy, you might want to provide different levels of lazy loading, fetching the most-used parts of the hierarchy first and lazily loading other parts when they are accessed.
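The lazy-loading side of this can be sketched with a simple load-on-first-access holder; the Order and line-item names are assumptions, and a real Domain Store would weave or generate this behavior transparently rather than coding it by hand.

```java
import java.util.List;
import java.util.function.Supplier;

// Lazy-loading sketch: the child part of a Business Object hierarchy
// is fetched only when first accessed, then cached.
class Order {
    private final Supplier<List<String>> lineItemLoader;
    private List<String> lineItems; // null until first access
    int loadCount;                  // instrumentation for the sketch only

    Order(Supplier<List<String>> loader) { this.lineItemLoader = loader; }

    List<String> getLineItems() {
        if (lineItems == null) {   // load on first access only
            loadCount++;
            lineItems = lineItemLoader.get();
        }
        return lineItems;
    }
}
```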
Improves understanding of persistence frameworks
If you are using a third-party persistence framework, Domain Store will greatly improve your understanding of that framework. You can compare and contrast how that framework implements transparent persistence with what has been described in Domain Store.
A full-blown persistence framework might be overkill for a small object model
Where you have a simple object model and basic persistence needs, a persistence framework using Domain Store may be overkill. In such cases, a basic framework using Data Access Object might be adequate.
Improves testability of your persistent object model
Domain Store lets you separate the persistence logic from the persistent business objects. This greatly improves the testability of your application, because you can test your object model without enabling or performing persistence. Since persistence is transparent, you can enable it once you finish testing your Business Object model and business logic.
Separates business object model from persistence logic
Since Domain Store enables transparent persistence, your Business Objects do not need to contain any persistence-related code. This frees the developer from dealing with the intricacies of implementing persistence logic for persistent Business Objects.
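The point of both consequences above can be shown in one sketch: a persistent Business Object under Domain Store is a plain object, so it can be unit-tested without any persistence machinery. The Account class and its rule are illustrative assumptions.

```java
// A persistent Business Object under Domain Store: no JDBC, no mapping
// code, only business state and behavior. Persistence is applied
// transparently by the framework, outside this class.
class Account {
    private double balance;

    Account(double opening) { this.balance = opening; }

    void deposit(double amount) {
        // Pure business rule, testable with no database present.
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    double getBalance() { return balance; }
}
```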
Web Service Broker
Introduces a layer between client and service
Existing remote Session Façades need to be refactored to support local access
Network performance may be impacted due to web protocols