Being able to ensure business continuity is one of the main goals of any IT department. Your Mule-driven projects will not escape this rule. Depending on the criticality of the messages that'll flow through your Mule instances, you'll probably have to design your topology so it offers a high availability of service. High availability is generally attained with redundancy and indirection. Redundancy implies several Mule instances running at the same time. Indirection implies no direct calls between client applications and these Mule instances.

An interesting side effect of redundancy and indirection is that you can take Mule instances down at any time without negative impact on the overall availability of your ESB infrastructure. This allows you to perform maintenance operations, such as deploying a new configuration file, without any downtime. In this scenario, each of
the Mule instances behind the indirection layer is taken down and brought back up
successively.
BEST PRACTICE  Leverage redundancy and indirection to perform hot deployments.
Using a network load balancer in front of a pool of similar Mule instances is probably the easiest way to achieve high availability (see figure 8.12). Obviously, this is only an option if the protocol used to reach the Mule instances can be load-balanced (for example, HTTP). With a network load balancer in place, one Mule instance can be taken down, for example, for an upgrade, whereas the client applications will still be able to send messages to an active instance. As the name suggests, using a load balancer would also allow you to handle increases in load gracefully; it'll always be possible to add a new Mule instance to the pool and have it handle part of the load.
Another type of indirection layer you can use is a JMS queue concurrently consumed by different Mule instances. No client application will ever talk directly to any Mule instance; all the communications will happen through the queue. Only one Mule instance will pick up a message that's been published in the queue. If one instance goes down, the other one will take care of picking up messages, provided your JMS middleware supports the competing consumers pattern (see www.eaipatterns.com/CompetingConsumers.html). Moreover, if messages aren't processed fast enough, you can easily throw in an extra Mule instance to pick up part of the load. This implies that you're running a highly available JMS provider that will always be up and available for client applications. The canonical ESB topology, represented in figure 8.13, can therefore easily be evolved into a highly available one.
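As a sketch of how competing consumers plays out in practice, the same JMS-consuming flow can be deployed, unchanged, on every instance in the pool. The following Mule 3-style configuration is hypothetical: the broker URL, queue name, and flow name are made up for illustration.

```xml
<!-- Deploy this identical configuration on each Mule instance in the pool.
     The broker hands every message on the queue to exactly one of the
     competing consumers, so a crashed instance is simply skipped. -->
<jms:activemq-connector name="jmsConnector"
                        brokerURL="tcp://broker-host:61616"/>

<flow name="orderConsumer">
    <jms:inbound-endpoint queue="orders" connector-ref="jmsConnector"/>
    <logger level="INFO" message="Processing order #[message.id]"/>
</flow>
```

Because the instances share nothing but the queue, adding capacity amounts to starting one more instance with this same configuration.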
If your Mule instance doesn't contain any kind of session state, then it doesn't matter where the load balancer will dispatch a particular request, as all your Mule instances are equal as far as incoming requests are concerned. But, on the other hand,
[Figure 8.13 The canonical ESB topology: client applications communicate with flows in Mule standalone apps through JMS destinations hosted by a JMS provider]
if your Mule instance carries any sort of state (for example, idempotency, aggregators, resequencers, or components with their own state) that's necessary to process messages correctly, load balancing won't be enough in your topology, and you'll need a way to share session state between your Mule instances.5 This is usually achieved either with a shared database or with clustering software, depending on what needs to be shared and on performance constraints.
[Figure 8.14 A cluster of Mule standalone servers creates a distributed shared memory spanning all Mule app instances]

Note that as of this writing, there's no officially supported clustering mechanism for the Mule community edition; you can work around some of the clustering limitations of the community edition using object stores, as you'll learn in the next section. The Mule Enterprise Edition, however, has full-fledged support for clustering. Using the Mule Enterprise Edition, all Mule features become cluster aware in a completely transparent fashion. A cluster of Mule Enterprise Edition servers will create a distributed shared memory, as you can see in figure 8.14, that'll contain all the necessary shared state and coordination systems to cluster a Mule application without a specific cluster design in the Mule application. To learn more about Mule Enterprise Edition, the key differences between it and the community edition, and how it can help you with easier clusterization, you can visit the Mule Enterprise Edition site (www.mulesoft.com/mule-esb-enterprise).

5 One could argue that with source IP stickiness, a load balancer will make a client stick to a particular Mule instance. This is true, but it wouldn't guarantee a graceful failover in case of a crash.
At this point, you should have a good understanding of what's involved when designing a topology for highly available Mule instances. This will allow you to ensure continuity of service in case of unexpected events or planned maintenance.

But it's possible that, for your business, this is still not enough. If you deal with sensitive data, you have to design your topology for fault tolerance as well.
8.5.1

[Figure: a message arrives at an idempotent filter in a Mule application flow, whose object store is backed by the filesystem]

By using the Redis connector, the usage of Redis as a Mule object store is straightforward.
Given that Redis will represent an external, highly available object store, you can include it in the previous design of our application (in figure 8.16) to store the internal state of your Mule moving parts, as you can see in figure 8.18.

The Prancing Donkey commitment to high availability is unavoidable. They've decided to use a Redis server configured for high availability to store the internal state of some of Mule's moving parts. Start the high-availability implementation by configuring the connectivity with Redis:
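The configuration listing is missing from this excerpt. A minimal sketch of what it might look like follows, assuming the Mule 3 Redis module's `<redis:config>` element; the connector name `localRedis` matches the store ID referenced in listing 8.8.

```xml
<!-- Hypothetical sketch: element and attribute names assumed from the
     Mule 3 Redis module. Points at a local Redis on the default port,
     with no password set. -->
<redis:config name="localRedis" host="localhost" port="6379"/>
```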
[Figure 8.18 An HA JMS provider can host queues for all communications within a Mule instance]

This will configure a local, non-password-protected Redis instance running on the standard port, so it should connect straight to a brand-new Redis installation. The server in production will be placed on a different host and will be strengthened with a password. You'll eventually use the host, port, and password attributes to connect to the server.
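Under the same assumption about the Mule 3 Redis module's attribute names, the hardened production configuration hinted at above might look like this; the host name and property placeholder are made up.

```xml
<!-- Hypothetical production variant: a dedicated Redis host plus a
     password, resolved here from a Spring properties placeholder -->
<redis:config name="localRedis" host="redis.internal.example"
              port="6379" password="${redis.password}"/>
```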
Now you're ready to use the Redis connector to store the internal state of your processors. Configure the previously mentioned idempotent filter to use Redis as an object store, as in the next listing.
Listing 8.8 Configuring an idempotent filter to use Redis as an object store
<idempotent-message-filter
    idExpression="#[xpath('/order/id').text]">
<managed-store storeName="localRedis" />
</idempotent-message-filter>
Here you declare an idempotent filter almost identical to the one configured in section 5.2.5. The only exception is the managed-store element, where you instruct the filter to use the object store with an ID equal to localRedis, which you declared before.
Redis isn't the only option that implements an object store; another available extension is, for instance, the MongoDB connector. The Mule Enterprise Edition supplies a myriad of other options, such as JDBC or Spring cache-based object stores. But not every possible solution is covered as a Mule connector or as an Enterprise Edition feature. You'll learn how to implement your own object store in section 12.3.4.
You've seen that fault tolerance can be achieved in different ways with Mule, depending on the criticality of the data you handle and the availability of transactions for the transports you use.