The new OpenMQ 4.4 release has been eagerly anticipated by myself and some of my customers for its ability to bridge brokers without requiring a custom application to consume messages and forward them to a different broker.
In the current setup at one of my customers we have an MQ HA cluster in one network segment and a single MQ node in a different network segment.
Two Composite Applications were made, one on each side, to consume messages and forward them to the node(s) in the other network segment.
The CA consumes messages from a number of destinations (typically queues) and wraps them as a payload in a common container message. This is functionality we decided we'd do without, settling instead on straight destination-to-destination forwarding.
Of course this functionality can be kept by simply introducing a bridge from the CA's target destination to the corresponding target destination on the other side, allowing us to remain unaware of the destination-to-destination mapping needs in the other network segment.
Deciding to do without the source-to-target mapping awareness allowed us to simplify our deployment and remove the two Bridge CAs entirely.
There are several advantages to this setup:
- Lower overhead of message passing: we no longer require a message to be consumed from a queue, unmarshalled, inspected, wrapped and passed on.
- Lower footprint on the application servers: doing without the two CAs and their associated interfaces and requirements should lower the memory and performance footprint.
- Easier maintenance: it is far easier to introduce a new link in a Bridge configuration than to maintain increasingly complex composite applications.
As a consequence of this, the integration applications are less aware of the topology of the network they're deployed in: composite applications consume and deliver messages to destinations, and the broker arranges the interconnectivity between nodes, allowing the CAs to remain unaware of the location of the different subsystems.
Illustration of the BPEL of one of the Bridge composite applications:
Having settled on using the new MQ 4.4 bridge, it's time to introduce the Bridge service.
The documentation is on http://docs.sun.com; more specifically, see Sun GlassFish Message Queue 4.4u1.
Bridge: A service in a broker that consumes messages from one destination and delivers them to a different destination. Destinations in separate brokers can be bridged as well as internally in a broker. Bridges are managed by the imqbridgemgr utility.
The Bridge service is JMS 1.1 compliant, supports JNDI administered objects, and uses connection factories of type javax.jms.ConnectionFactory or javax.jms.XAConnectionFactory.
Each broker supports multiple (uniquely named) bridges with separate configuration and life cycles.
Link: In a bridge this is a mapping between two destinations. Links are unidirectional.
Although I won't discuss it here, links also support Message Transformers, allowing you to transform a message prior to delivery by extending com.sun.messaging.bridge.service.MessageTransformer (see the complete API JavaDoc).
Object Store: This is a store for Administered Objects, objects that encapsulate provider specific configuration and naming information. This is where a node will store the Connection Factory for itself and the node it will send messages to.
Two different kinds exist: file-based (which I'll use) and LDAP.
Cluster: Groups of brokers working together to provide delivery services to clients. Clusters allow a message service to scale its operations to meet an increasing volume of message traffic by distributing client connections among multiple brokers.
A non-HA cluster (also known as a conventional cluster) provides service availability.
HA Cluster: Also known as an enhanced cluster. If a broker fails, clients connect to a different broker in the cluster, which takes over the work the failed broker was involved in and delivers messages to the clients uninterrupted. Enhanced clusters provide both message and service availability.
The persistent message store for an enhanced cluster is maintained in an HA JDBC database. MySQL Cluster Edition (5.1.39-ndb-7.0.9), High Availability Session Store (HADB) (4.4.2, 4.5, 4.6) and Oracle Real Application Clusters (RAC) (10g and 11g) are supported.
Ok, next for the actual setup.
Please excuse my poor Inkscape drawing abilities; I sort of stopped drawing after moving away from DeluxePaint.
For my setup the cluster nodes in c1 were run on one machine, but each broker's control port (cp, the portmapper) and message port (mp) are bound to distinct ports. (Easier on firewall administrators' minds :) and it keeps the processes from interfering with each other's ports.)
Each broker in c1 hosts one uniquely named bridge, but each bridge instance reads messages from the same destination name to ensure redundancy if one broker and its bridge fail.
The drawing above does not show the bridge from c2 to c1; this is detailed below.
In my setup c2 is comprised of one single node, c2b1, but it can easily be expanded to multiple nodes by repeating the setup done for c1.
Download and install OpenMQ from https://mq.dev.java.net
Please note that the installer requires a 32-bit JDK. I did this on a 64-bit machine; to get the installer running, install a 32-bit JDK and set $JAVA_HOME to it. The actual JDK used to run the broker is selected during the install process and may be any available JDK.
For my install I did:
export JAVA_HOME=/usr/java/jdk1.5.0_15/ (set to wherever you have your 32-bit JDK)
If you see something like:
java.lang.UnsatisfiedLinkError: /root/mq44/openmq4_4-installer/install/lib/external/charva/Unix/Linux/i386/libTerminal.so: /root/mq44/openmq4_4-installer/install/lib/external/charva/Unix/Linux/i386/libTerminal.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1778)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1703)
at org.openinstaller.util.ui.ChaxStandaloneSplash.<clinit>(ChaxStandaloneSplash.java:91)
at org.openinstaller.core.Orchestrator.main(Orchestrator.java:428)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at org.openinstaller.core.EngineBootstrap.main(EngineBootstrap.java:208)
SEVERE INTERNAL ERROR: /root/mq44/openmq4_4-installer/install/lib/external/charva/Unix/Linux/i386/libTerminal.so: /root/mq44/openmq4_4-installer/install/lib/external/charva/Unix/Linux/i386/libTerminal.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
You're using a 64-bit JDK. Verify with java -version; you'll see something like:
java version "1.6.0_11"
Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b16, mixed mode)
As I installed on a headless machine, I ran the installer with ./installer -t, which provides a text-driven wizard.
In the installer I set the install home to /opt/sun/mq44 and selected the 1.6.0_11 JDK I had available.
After installing OpenMQ on the two machines I'd be using, I started and stopped the brokers to get them to create the initial directory layout for the different nodes.
For machine A, which will be running c1
./imqbrokerd -name c1b1 -port 7777
and
./imqbrokerd -name c1b2 -port 8888
and on the machine running c2
./imqbrokerd -name c2b1 -port 7777
The first thing I did was to set up the object stores for the different brokers. This is done using imqobjmgr. I also created a directory to hold the object stores (I'm using a file-based object store, not an LDAP-based one).
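Creating the store directories is just a matter of mkdir. A minimal sketch (MQ_VAR defaults to a scratch location here for illustration; point it at your actual instance directory, e.g. /opt/sun/mq44/var/mq/instances, to match the provider URLs used in the imqobjmgr commands below):

```shell
# Create one object-store directory per broker instance (paths are placeholders).
MQ_VAR="${MQ_VAR:-/tmp/mq44-objstores}"
mkdir -p "$MQ_VAR/c1b1/c1b1_os" "$MQ_VAR/c2b1/c2b1_os"
ls "$MQ_VAR"
```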
note that parts of the DNS names for the machines have been obfuscated using # signs.
./imqobjmgr add -t xcf -j "java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory" -j "java.naming.provider.url=file:///opt/sun/mq44/var/mq/instances/c1b1/c1b1_os" -o "imqAddressList=int-dev02.###.########.##:7777" -o "imqAddressListBehavior=RANDOM" -o "imqBrokerHostName=int-dev01.###.########.##" -o "imqBrokerHostPort=7777" -o "imqReconnectEnabled=true" -o "imqAddressListIterations=-1" -l c2b1cf
./imqobjmgr add -t xcf -j "java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory" -j "java.naming.provider.url=file:///opt/sun/mq44/var/mq/instances/c1b1/c1b1_os" -o "imqAddressList=int-dev02.###.########.##:7777" -o "imqAddressListBehavior=RANDOM" -o "imqBrokerHostName=int-dev01.###.########.##" -o "imqBrokerHostPort=8888" -o "imqReconnectEnabled=true" -o "imqAddressListIterations=-1" -l c2b2cf
./imqobjmgr add -t xcf -j "java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory" -j "java.naming.provider.url=file:///opt/sun/mq44/var/mq/instances/c2b1/c2b1_os" -o "imqAddressList=int-dev01.###.########.##:7777,int-dev01.###.########.##:8888" -o "imqAddressListBehavior=RANDOM" -o "imqBrokerHostName=int-dev01.###.########.##" -o "imqBrokerHostPort=7777" -o "imqReconnectEnabled=true" -o "imqAddressListIterations=-1" -l c1b1b2cf
Note that on c2b1 the value of the imqAddressList property is int-dev01.###.########.##:7777,int-dev01.###.########.##:8888, a comma-separated list of the nodes in the HA cluster.
Next up are the bridge definitions. A bridge and its links are owned by one broker. For c1 this means we have to create one bridge in each of the instances. A bridge has to be uniquely named across a cluster, but there is nothing that stops you from consuming messages from the same destination(s) in both bridge instances. (As a side note, tested on 4.4RC1: you can also create a simple load balancer in the bridge service by having a link that sends to multiple targets on different machines.)
Bridge definitions can be described in the broker's config.properties file or in an XML file; I opted for the latter.
Here's the bridge definition for c1b1. I have annotated it with comments on the different parts:
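A definition along these lines (a sketch following the jmsbridge XML format from the MQ 4.4 documentation; the ref names, lookup names and the queue name are assumptions, so check them against your own object store):

```xml
<!-- Sketch only: bridge c1_to_c2, owned by c1b1. -->
<jmsbridge name="c1_to_c2">
    <!-- a unidirectional link per destination mapping -->
    <link name="orders">
        <source connection-factory-ref="localCF" destination-ref="ordersQ"/>
        <target connection-factory-ref="remoteCF" destination-ref="ordersQ"/>
    </link>
    <!-- connection factories resolved through the file object stores created earlier;
         the lookup names below are illustrative -->
    <connection-factory ref-name="localCF" lookup-name="c1b1b2cf"/>
    <connection-factory ref-name="remoteCF" lookup-name="c2b1cf"/>
    <!-- the destination consumed on the source side and delivered to on the target side -->
    <destination ref-name="ordersQ" name="orders" type="queue"/>
</jmsbridge>
```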
Here is the bridge definition for c1b2. Notice that, save for the bridge name, it is identical to c1b1's. This is because a bridge instance is owned by one broker; for failover, if c1b1 goes down, c1b2's bridge reads the same destinations.
To complete the picture here is the bridge definition for c2b1:
Now that the bridges are created, it's time to make the necessary changes in the brokers' props/config.properties files.
Ideally, for c1 and c2, the properties common to the cluster should be split into a separate cluster configuration file accessible to both brokers, either through a shared file system or some other mechanism. I have not done that here; I'll leave it as an exercise if you want to do it.
Next up are the properties for the brokers themselves:
config.properties for c1b2:
#this is where I put the XML file containing the Bridge declaration
#which bridge this broker instance owns; do not move to a common cluster properties file
#database persistence (see later section on configuration)
# binds the jms transfer to one port. Advantageous for firewalls :)
#changes the portmapper to 8888 from the default 7676
# I want this broker to set up an RMI enabled JMX connector
#The JMX service URL is shown if you do ./imqcmd list jmx
#use the service url to connect from a client, like JConsole or code.
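The comments above correspond to settings roughly like the following. This is a sketch only: the property names are taken from the MQ 4.4 documentation, but the paths, ports, cluster/broker ids, credentials and the JDBC URL are placeholders to verify against your own setup:

```properties
# where the XML file containing the Bridge declaration lives (placeholder path)
imq.bridge.c1_to_c2.type=jms
imq.bridge.c1_to_c2.xmlurl=file:///opt/sun/mq44/var/mq/bridges/c1_to_c2.xml

# which bridge this broker instance owns; keep per-broker
imq.bridge.enabled=true
imq.bridge.activelist=c1_to_c2
imq.bridge.admin.user=admin
imq.bridge.admin.password=admin

# database persistence (values are placeholders)
imq.persist.store=jdbc
imq.persist.jdbc.dbVendor=oracle
imq.persist.jdbc.oracle.user=mqstore
imq.persist.jdbc.oracle.password=secret
imq.persist.jdbc.oracle.property.url=jdbc:oracle:thin:@dbhost:1521:ORCL

# HA cluster membership (leave out on c2b1)
imq.cluster.ha=true
imq.cluster.clusterid=c1
imq.brokerid=c1b2

# bind the jms service to one fixed port (easier on firewalls)
imq.jms.tcp.port=8889

# change the portmapper to 8888 from the default 7676
imq.portmapper.port=8888

# RMI-enabled JMX connector; the service URL is shown by ./imqcmd list jmx
imq.jmx.rmiregistry.start=true
imq.jmx.rmiregistry.port=8890
```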
The properties for c1b1 are identical, except for the port numbers used and the imq.brokerid property; they also refer to the bridge instance c1_to_c2 that c1b1 will host.
The same goes for c2b1; the path to follow should be clear. For c2b1 the imq.cluster.* properties are also left out, as it is not running in a cluster.
The properties file above also shows how to change the broker's persistence from the default file-based model to JDBC, as well as how to enable JMX connectivity for the broker.
To create the necessary database tables after you've added the persistence configuration, you can either use the imqdbmgr utility or simply start the brokers, which auto-creates the tables.
For the HA cluster the tables then need to be upgraded to the HA store:
./imqdbmgr create tbl -b c1b1 (no need to repeat for c1b2)
./imqdbmgr upgrade hastore -b c1b1
The JAR file for the Oracle drivers (the exact JAR varies with the JDK and database version you use) should be placed in mq/lib/ext.
To start the brokers use the -name switch to start the correct instance.
./imqbrokerd -name c1b1
./imqbrokerd -name c1b2
And on c2b1
./imqbrokerd -name c2b1
To verify functionality and easily inject large numbers of differently sized messages into the broker setup, you can use the uclient utility, which ships in the demo directory of the broker installation.