Service integration buses
A service integration bus is a group of one or more application servers or server clusters in a WebSphere Application Server cell that cooperate to provide asynchronous messaging services. The application servers or server clusters in a bus are known as bus members. In the simplest case, a service integration bus consists of a single bus member, which is one application server.
Usually, a cell requires only one bus, but a cell can contain any number of buses. The server component that enables a bus to send and receive messages is a messaging engine.
A service integration bus provides the following capabilities:
Any application can exchange messages with any other application by using a destination to which one application sends, and from which the other application receives.
A message-producing application, that is, a producer, can produce messages for a destination regardless of which messaging engine the producer uses to connect to the bus.
A message-consuming application, that is, a consumer, can consume messages from a destination (whenever that destination is available) regardless of which messaging engine the consumer uses to connect to the bus.
Different service integration buses can, if required, be connected. This allows applications that use one bus (the local bus) to send messages to destinations in another bus (a foreign bus). Note, though, that applications cannot receive messages from destinations in a foreign bus.
An application can connect to more than one bus. For example, although an application cannot receive messages from destinations in a foreign bus, if the application connects to that bus, the bus becomes a local bus and then the application can receive messages.
For example, in the following diagram, the application can send messages to destination A and destination B, but it cannot receive messages from destination B:
In the following diagram, the application can send messages to, and receive messages from, destination A and destination B:
A service integration bus is supported by a SIB Service, which is available on each application server in the WebSphere Application Server environment. By default, the SIB Service is disabled, so when a server starts it cannot perform any messaging. The SIB Service is enabled automatically when you add a server to a service integration bus. You can choose to disable the service again by configuring the server.
A service integration bus supports asynchronous messaging, that is, a program places a message on a message queue, then proceeds with its own processing without waiting for a reply to the message. Asynchronous messaging is possible regardless of whether the consuming application is running, or whether the destination is available. Also, point-to-point and publish/subscribe messaging are supported.
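The decoupling that asynchronous messaging provides can be shown with a minimal, provider-neutral sketch. This is plain Python, with an in-memory queue standing in for a bus destination; no WebSphere APIs are involved:

```python
import queue

# A plain in-memory queue stands in for a bus destination.
destination = queue.Queue()

def produce(message):
    # The producer places its message and returns immediately; it does
    # not wait for a reply or for any consumer to be running.
    destination.put(message)
    return "producer continued without waiting"

def consume():
    # A consumer, started at any later time, still receives the message.
    return destination.get(timeout=5)

print(produce("order-123"))   # the producer proceeds at once
print(consume())              # prints "order-123" once a consumer runs
```

The key point mirrored here is that `produce` returns before any consumer exists; on a real bus, the messaging engine stores the message until a consumer connects.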
After an application connects to the bus, the bus behaves as a single logical entity and the connected application does not have to be aware of the bus topology. In many cases, connecting to the bus and defining bus resources is handled by an application programming interface (API) abstraction, for example the administered JMS connection factory and JMS destination objects.
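With the default messaging provider, those administered JMS objects are typically created with the wsadmin scripting tool. A hedged Jython sketch follows; the cell, bus, resource, and JNDI names here are placeholders, not values from this document, and the exact parameters should be checked against your product version:

```python
# wsadmin Jython sketch -- runs inside wsadmin, not standalone.
# Create a JMS connection factory and a JMS queue that point at a bus
# and a bus destination, so applications deal only with JNDI names.
scope = AdminConfig.getid('/Cell:myCell/')
AdminTask.createSIBJMSConnectionFactory(scope,
    '[-name myCF -jndiName jms/myCF -busName myBus]')
AdminTask.createSIBJMSQueue(scope,
    '[-name myQueue -jndiName jms/myQueue -busName myBus -queueName myBusQueue]')
AdminConfig.save()
```

An application then looks up `jms/myCF` and `jms/myQueue` through JNDI and never needs to know which messaging engine it connects to.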
The service integration bus is sometimes referred to as the messaging bus if it provides the messaging system for JMS applications that use the default messaging provider.
Many scenarios require a simple bus topology, for example, a single server. If you add multiple servers to a single bus, you increase the number of connection points for applications to use. If you add server clusters as members of a bus, you can increase scalability and achieve high availability. Servers, however, do not have to be bus members to connect to a bus. In more complex bus topologies, multiple buses are configured, and can be interconnected to form complex networks. An enterprise might deploy multiple interconnected buses for organizational reasons. For example, an enterprise with several autonomous departments might want a separately administered bus for each department.
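Creating the simplest topology, a single-server bus, is typically a two-step wsadmin task. A hedged Jython sketch (the bus, node, and server names are placeholders):

```python
# wsadmin Jython sketch -- runs inside wsadmin, not standalone.
# Create the bus, then add one server as a bus member; a messaging
# engine is created automatically for the new bus member.
AdminTask.createSIBus('[-bus myBus -busSecurity false]')
AdminTask.addSIBusMember('[-bus myBus -node myNode -server server1]')
AdminConfig.save()
```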
Multiple-server bus with clustering
You can have a bus consisting of multiple servers, some or all of which are members of a cluster. A cluster allows common applications to run on servers on different machines. Installing an application on a cluster whose servers are spread across multiple machines provides high availability: if one machine fails, the servers on the remaining machines continue to run the application.
When you configure a server bus member, that server runs a messaging engine. For many purposes, this is sufficient, but such a messaging engine can run only in the server it was created for. The server is therefore a single point of failure; if the server cannot run, the messaging engine is unavailable. By configuring a cluster bus member instead, the messaging engine can run in one server in the cluster, and if that server fails, the messaging engine can run in an alternative server.
Another advantage of configuring a cluster bus member is the ability to share the workload associated with a destination across multiple servers. You can deploy additional messaging engines to the cluster. A destination deployed to a cluster bus member is partitioned across the set of messaging engines that the cluster servers run. The messaging engines in the cluster each handle a share of the messages arriving at the destination.
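The effect of partitioning can be sketched in plain Python. This is a toy model, not WebSphere behavior: each message lands on exactly one partition, chosen here by hashing the message identifier, and the owning engine handles that share of the workload:

```python
from collections import defaultdict

class PartitionedDestination:
    """Toy model of a destination partitioned across N messaging engines."""
    def __init__(self, engine_names):
        self.partitions = defaultdict(list)
        self.engines = list(engine_names)

    def send(self, message_id, body):
        # Each message is stored on exactly one partition; the engine
        # that owns it handles that share of the arriving messages.
        engine = self.engines[hash(message_id) % len(self.engines)]
        self.partitions[engine].append(body)

dest = PartitionedDestination(["ME0", "ME1", "ME2"])
for i in range(9):
    dest.send("msg-%d" % i, "payload %d" % i)

total = sum(len(msgs) for msgs in dest.partitions.values())
print(total)  # prints 9: every message stored once, spread over the engines
```

Note that, as in the real product, a consumer attached to one partition sees only the messages on that partition, which is why partitioning trades strict ordering for workload sharing.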
To summarize, with a cluster bus member you can achieve high availability (through failover). You can also configure a cluster to achieve workload sharing or workload sharing with high availability, depending on the policies that you configure for the messaging engines.
Bus member types and their effect on high availability and workload sharing
You can add a server to a service integration bus, to create a server bus member. You can also add a cluster to a service integration bus, to create a cluster bus member. A cluster bus member can provide scalability and workload sharing, or high availability, but a server bus member cannot.
Adding a server to a bus
When you add a server to a service integration bus, a messaging engine is created automatically. This single messaging engine cannot participate in workload sharing with other messaging engines; it can only do that in a cluster. The messaging engine also cannot be highly available, because there are no other servers in which it can run.
Adding a cluster to a bus
A cluster deployment can provide scalability and workload sharing, or high availability, or a combination of these aspects. This depends on the number of messaging engines in the cluster and the behavior of those messaging engines, such as whether the messaging engines can fail over to another server, or fail back when a server becomes available again.
You can use messaging engine policy assistance to create and configure messaging engines in a cluster. The following predefined messaging engine policy types are available, which support frequently used cluster configurations:
High availability. One messaging engine is created in the cluster. It can fail over to any other server in the cluster, so it is highly available.
Scalability. One messaging engine is created for each application server in the cluster. The messaging engines cannot fail over.
Scalability with high availability. One messaging engine is created for each application server in the cluster. Each messaging engine can fail over to one specific server in the cluster, creating a circular pattern of availability.
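The circular pattern in the last policy type can be sketched as: engine i prefers server i and, if that server is unavailable, moves on around the ring. This toy model simplifies the real policy (which restricts each engine to one specific failover target), but it shows the shape of the pattern:

```python
def place_engines(servers, failed):
    """Toy circular failover: engine i prefers servers[i]; if that
    server has failed, it tries the next server around the ring."""
    placement = {}
    n = len(servers)
    for i in range(n):
        for step in range(n):
            candidate = servers[(i + step) % n]
            if candidate not in failed:
                placement["ME%d" % i] = candidate
                break
    return placement

servers = ["server1", "server2", "server3"]
print(place_engines(servers, failed=set()))
# with all servers up, each engine runs on its own server
print(place_engines(servers, failed={"server2"}))
# ME1 fails over to server3; ME0 and ME2 remain on their own servers
```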
You can also use messaging engine policy assistance to create a custom messaging engine policy. You can create any number of messaging engines for the cluster, and configure the messaging engines as you require. The associated core group policies and settings for the messaging engines are created automatically.
If you do not use messaging engine policy assistance, when you add a server cluster to a service integration bus, a single messaging engine is created automatically. This messaging engine uses the default SIBus core group policy that already exists in WebSphere Application Server. The policy allows the messaging engine to fail over to any server in the cluster. You can then add further messaging engines if required. The cluster deployment depends on the number of messaging engines in the cluster and the policy bound to the high availability group (HAGroup) of each messaging engine.
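Adding a cluster bus member with one of the predefined policy types is again a wsadmin task. A hedged Jython sketch follows; the bus and cluster names are placeholders, and the policy assistance parameters (`-enableAssistance`, `-policyName`) are recalled from the `addSIBusMember` command and should be verified against your product version:

```python
# wsadmin Jython sketch -- runs inside wsadmin, not standalone.
# Add a cluster as a bus member, using messaging engine policy
# assistance with the predefined "high availability" policy type.
AdminTask.addSIBusMember('[-bus myBus -cluster myCluster '
                         '-enableAssistance true -policyName HA]')
AdminConfig.save()
```

Omitting the policy assistance parameters gives the default behavior described above: a single messaging engine governed by the default SIBus core group policy.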
If there is only one messaging engine in the cluster and you deploy a destination to that cluster, the destination is localized by that messaging engine. All messaging workload for that destination is handled by that messaging engine; the messaging workload cannot be shared. The availability characteristics of the destination are the same as the availability characteristics of the messaging engine.
You can benefit from increased scalability by introducing additional messaging engines to the cluster. When you deploy a destination to the cluster, it is localized by all the messaging engines in the cluster and the destination becomes partitioned across the messaging engines. The messaging engines can share all traffic passing through the destination, reducing the impact of one messaging engine failing. The availability characteristics of each destination partition are the same as the availability characteristics of the messaging engine the partition is localized by.
If you do not use messaging engine policy assistance, you control the availability behavior of each messaging engine by modifying the core group policy that the HAManager applies to the HAGroup of the messaging engine.
The simplest way to create and configure messaging engines in a cluster is to add a cluster to a bus and use messaging engine policy assistance with one of the predefined messaging engine policy types. If you are familiar with creating messaging engines and configuring messaging engine behavior, you can use messaging engine policy assistance and the custom messaging engine policy type. To add a cluster to a bus without using messaging engine policy assistance, you should be familiar with all the creation and configuration steps involved, for example, creating a messaging engine, configuring core group policies and using match criteria.