
Midrange MQ in an Open-Source World

by Victoria Mack August 19, 2016 0 Comments


MQ on IBM i continues to adapt to the needs of modern IT environments.


IBM MQ has been a familiar part of the corporate IT landscape for over 20 years. It’s been through a few name changes, but the fundamental idea of using asynchronous messaging to decouple communication between applications is as important now as it has ever been. Of course, over such a long period of time, there have been huge changes—in particular, the way developers work, shaped by the Internet and open source, and the rise of cloud computing. Therefore, we at IBM are doing many things in MQ to make sure that existing systems remain relevant and able to interact with the latest tools and platforms.


A Short History of MQ

Back in the mid-1990s, many corporate networks consisted of a mixture of networking protocols, including SNA, NetBIOS, and, of course, TCP/IP. It was relatively common for applications to use the network libraries directly, mixing up business logic and communications. Many large corporations created their own messaging middleware, but few really wanted to be in the middleware business. So, when MQSeries arrived offering a straightforward messaging API in a variety of programming languages that worked exactly the same regardless of the underlying protocol, it was a big hit.


Back then, the servers in the networks were much more varied than they are today. MQ provided a very easy way to connect between mainframes, midrange systems, and servers running a huge number of variants of UNIX. It handled differences in character and data encoding automatically so messages could be represented in the native formats of the sending and receiving systems, even if they differed in terms of character encoding or endianness, and MQ could do the conversion.


Networks also tended to be much less reliable than they are today, so the way that MQ was able to reconnect automatically and continue transmitting data without losing or duplicating messages was a big deal. It was tested in some really hostile environments, including a very dusty building in China where the networks were more down than up. Reliability and dependability were the result.


Another big deal was transactions. The ability to coordinate messaging operations with other resource managers such as databases meant that application programmers could use MQ as a reliable way to move financial data across networks without having to code complex retry and compensation logic.


Way back in the distant past, I was actually the software engineer who wrote the commitment control exit for MQSeries on OS/400 so that MQ could participate in OS/400 transactions. I even remember repeatedly causing emergency power-offs to test my code running during IPL. That’s a bit of a niche skill these days.


Of course, it’s all different nowadays. The mainframes are still here, but they’ve had to modernize constantly. Almost all of the proprietary UNIX variants fell by the wayside, and Linux came to the fore. TCP/IP is everywhere and its flexible routing means that in principle everything can talk directly to everything else. And, as networks became more dependable, synchronous communication using SOAP and then REST offered an expressive way to build distributed applications.


There is still an important place for MQ, but you have to look a bit deeper to see why messaging makes sense in today’s environment.


A Crash Course in Messaging

IBM MQ is an example of message-oriented middleware. A queue manager is a server that offers messaging services to programs. Programs using messaging do not communicate directly with each other. Instead, they send and receive messages using the services of the queue manager. The queue manager stores the messages sent until they are received or routes them to another queue manager in the network. So, it’s a loosely coupled, distributed, asynchronous communication system.


There are two basic models for MQ messaging: point-to-point and publish/subscribe. In point-to-point messaging, the fundamental concept is called a queue. Sending programs put messages on queues, and receiving programs get messages from queues. Each message put onto a queue is consumed by a single receiving program.
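To make the point-to-point model concrete, here is a minimal in-memory sketch in Node.js. It is purely illustrative (the queue and function names are invented, and this is not the MQ API), but it shows the defining property: each message put on a queue is consumed exactly once.

```javascript
// Minimal in-memory simulation of point-to-point messaging.
// Not the MQ API -- just the concept: each message is consumed once.
const queues = {};

function put(queueName, message) {
  (queues[queueName] = queues[queueName] || []).push(message);
}

function get(queueName) {
  const q = queues[queueName] || [];
  return q.length > 0 ? q.shift() : null; // each message goes to one receiver
}

put('ORDERS', 'order-1001');
put('ORDERS', 'order-1002');

const first = get('ORDERS');   // 'order-1001'
const second = get('ORDERS');  // 'order-1002'
const third = get('ORDERS');   // null: the queue is now empty
```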


In publish/subscribe messaging, the fundamental concept is called a topic. Sending programs publish their messages on topics. Receiving programs register their interest in messages by subscribing to topics and then get the messages that match their subscriptions. The key difference is that there may be zero or more subscribers for a particular topic. So a sending program has no idea how many receiving programs will get its messages.
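A similarly minimal sketch of publish/subscribe (again purely illustrative, not the MQ API) shows the contrast: every current subscriber gets its own copy of a published message, and a message published to a topic with no subscribers simply goes nowhere.

```javascript
// Minimal in-memory simulation of publish/subscribe.
// Not the MQ API -- every subscriber to a topic gets its own copy.
const subscriptions = {};

function subscribe(topic, handler) {
  (subscriptions[topic] = subscriptions[topic] || []).push(handler);
}

function publish(topic, message) {
  const subs = subscriptions[topic] || [];
  subs.forEach(function (handler) { handler(message); }); // deliver a copy to each subscriber
  return subs.length; // how many copies were delivered
}

const inboxA = [];
const inboxB = [];
subscribe('prices/ibm', function (msg) { inboxA.push(msg); });
subscribe('prices/ibm', function (msg) { inboxB.push(msg); });

const delivered = publish('prices/ibm', 161.5); // both subscribers get a copy
const dropped = publish('prices/apple', 109.2); // no subscribers: goes nowhere
```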


Motivations for Using Messaging

You can clearly design systems whose components communicate in a variety of ways. If you think of communication in a system as a set of requests between components, synchronous communication such as a REST API feels very natural and is easy to understand. You might start off by waiting for the responses to all of the requests, but this approach relies on all of the communicating components being available at the same time.


An alternative way to think of the communication is as a flow of messages or events through the system. Often, requests will not have responses and, if they do, you don’t wait for them before proceeding. That’s naturally an asynchronous view.


In practice, a mixture of the two models is often appropriate, perhaps using synchronous communication within components and asynchronous communication between components.


So, let’s look at why you might use asynchronous communication and messaging in particular. The key concept is that the sender and receiver of a message do not need to be running at the same time or at the same rate. We call this “temporal decoupling,” and it’s useful in several situations.


Offloading Time-Consuming Processing

Imagine that you are writing a web application that needs to perform some time-consuming processing. People are impatient, so you need to keep the web application responsive.


A natural way to handle this is to make the communication with the time-consuming processing asynchronous. The web application sends a message to request the processing and then continues without waiting. In the meantime, one or more workers receive the messages from the web application and perform the processing. Of course, the web application needs to be designed up front with this separation in mind, but it can remain responsive for the users no matter how long the processing takes.
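As a rough sketch of this pattern (illustrative only; handleRequest and runWorker are invented names, and a real system would use a queue manager rather than an in-memory array), the handler enqueues the work and returns immediately, while a worker processes the backlog at its own pace:

```javascript
// Sketch of offloading slow work from a request handler.
const workQueue = [];
const results = [];

// The "web" handler stays responsive: it only enqueues a request and returns.
function handleRequest(jobId) {
  workQueue.push(jobId);
  return 'accepted: ' + jobId; // respond immediately, before the work is done
}

// A worker drains the queue later, at its own pace.
function runWorker() {
  while (workQueue.length > 0) {
    const jobId = workQueue.shift();
    results.push('processed: ' + jobId); // stands in for the slow processing
  }
}

const reply = handleRequest('job-1'); // returns at once
handleRequest('job-2');
runWorker(); // the backlog is processed asynchronously from the user's point of view
```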


Smoothing Variations in System Load

If there are parts of your system that can be overloaded or perhaps that perform worse as the load increases past a certain point, using messaging can help here too. You can use techniques such as tuning the size of a pool of workers to the optimum size and using a queue of requests to deliver work to the workers.
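The idea can be sketched as follows (illustrative only; the numbers and names are invented): however bursty the arrivals, a fixed-size pool drains the queue at a bounded rate, so the downstream system never sees more than POOL_SIZE requests at once.

```javascript
// Sketch of load smoothing: a fixed pool of workers drains a request queue
// at a steady rate, no matter how bursty the arrivals are.
const requestQueue = [];
for (let i = 1; i <= 10; i++) requestQueue.push('req-' + i); // a burst of 10 requests

const POOL_SIZE = 3;
const processedPerTick = [];

// Each "tick", at most POOL_SIZE requests are taken from the queue,
// so the backend never sees more than POOL_SIZE concurrent requests.
function tick() {
  const batch = requestQueue.splice(0, POOL_SIZE);
  processedPerTick.push(batch.length);
}

while (requestQueue.length > 0) tick(); // burst of 10 becomes batches of 3, 3, 3, 1
```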


Loose Coupling

Publish/subscribe is a very powerful technique that lets you adjust the number of receiving programs without modifying the sending programs. A key reason for this technique being helpful is that it lets you adapt to changing requirements. For example, let’s say that you want to start logging messages for auditing purposes or for performing some analysis of the messages. You can just add a new subscriber to an existing system without affecting the rest of the service.
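Here is a small illustrative sketch of that scenario (not the MQ API; the function names are invented): an audit subscriber is added to a running pub/sub setup without touching the sending code.

```javascript
// Sketch of loose coupling via publish/subscribe: an audit logger is
// added later without any change to the sending code.
const handlers = [];
function subscribe(handler) { handlers.push(handler); }
function publish(message) { handlers.forEach(function (h) { h(message); }); }

// Original receiving application.
const processed = [];
subscribe(function (msg) { processed.push(msg); });

publish('payment-1'); // only the original receiver sees this

// New requirement: audit every message. The sender is untouched.
const auditLog = [];
subscribe(function (msg) { auditLog.push('AUDIT: ' + msg); });

publish('payment-2'); // both the receiver and the audit log see this
```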



Improving Availability

This is the big one. In a large, complex system, there will be many components with different availability characteristics and maintenance schedules. If the system breaks whenever any of the components is temporarily unavailable, the overall availability will be dire.


One of the techniques for managing this situation is to use messaging to communicate between the components. In this way, when a component is down, you can just build up a queue of messages for it to process when it’s available again.


This is particularly useful when the component in question is actually managed by another company, such as when you’re calling an external web service. You can neither control nor guarantee the availability of another company’s service, and you can’t run a redundant copy of it. A pragmatic way of handling this is to use a queue to build up requests for the external service and have a worker task consume the messages from the queue as it successfully calls the service.
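A sketch of that worker pattern (illustrative only; callExternalService stands in for the real remote call): a message is removed from the queue only after the call succeeds, so requests accumulate safely while the service is down.

```javascript
// Sketch of buffering calls to an unreliable external service behind a queue.
const pending = ['invoice-1', 'invoice-2', 'invoice-3'];
const sent = [];

let available = false; // simulate the service being down at first
function callExternalService(msg) {
  if (!available) return false; // call failed; leave the message queued
  sent.push(msg);
  return true;
}

// The worker only removes a message from the queue once the call succeeds,
// so nothing is lost while the service is down.
function drainQueue() {
  while (pending.length > 0) {
    if (!callExternalService(pending[0])) break; // stop and retry later
    pending.shift();
  }
}

drainQueue();     // service down: all three messages stay queued
available = true; // the service comes back
drainQueue();     // now the backlog is delivered
```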


MQ in a Modern World

Now that we have looked at the motivations for using messaging to improve responsiveness and reliability, it’s time to consider a couple of trends in modern IT environments and see how they affect messaging.


Hub Queue Manager

First, a very common pattern for deploying MQ these days is known as the hub queue manager. In the early days, it was common to run MQ applications on the same server as the queue manager. There were several reasons for this, including performance, but it was also not until WebSphere MQ 7.5 was released in 2012 that the extended transactional client became free of charge. Only then did it become possible to include a client connection to a remote queue manager in a coordinated transaction.


Nowadays, it’s usual to run queue managers on separate servers and then connect applications to them using client connections. This tends to encourage consolidation onto a smaller number of more powerful queue managers. And those queue managers are best run on well-managed, dependable servers. That’s clearly a good role for a large IBM i server.


Once you have separated the queue managers from the applications, you get much more flexibility with regard to topology and availability. For example, you can run multiple instances of an application targeting different queue managers to let you keep the system running when you need to perform maintenance on a server.


A new feature in MQ V9 is the ability to access the client channel definition table (CCDT) over the network from a file server. This file contains the connection information used by the clients to connect to queue managers. Now, you can keep a central CCDT on a file server and update it as the network topology changes. Clients automatically pick up the latest version over the network. This removes the need to distribute the CCDT to the clients, making it much simpler to manage the network.


New Application Environments

There’s a clear shift in the environments that developers are using to write new applications. While there’s still a lot of Java and C++ being written, Node.js and Python are becoming very popular. In order to ensure that MQ is relevant in these environments, we need APIs that fit naturally into them.


Another shift we see is in the way that developers work. They tend to do all of their learning on the web, problem-solving as they go with little time to invest in learning complicated interfaces. This is particularly true of developers using Node.js. There is a wealth of packages available in the npm package manager and developers are very used to browsing for packages, downloading them, and learning how to use them on the web.


That’s why MQ has a new API called the MQ Light API. It’s specifically designed for this kind of developer. It’s much easier to learn than the MQI or JMS, so you can pick up the essentials in a few minutes. It’s available for Node.js, Python, Ruby, and Java. Each of them is available in the specific language’s package manager: npm for Node.js, pip for Python, gem for Ruby, and Maven Central for Java.


Here’s an illustration of an MQ Light application for Node.js that publishes a message on a topic.


var mqlight = require('mqlight');
var client = mqlight.createClient({service: 'amqp://localhost'});
client.on('started', function() {
   client.send('greetings', 'Hello world!');
});

Simple, isn’t it?


Even though you can run Node.js, Python, and Ruby on IBM i, the MQ Light clients are unfortunately not available for IBM i. Instead, we have targeted the platforms most likely to be used by the application developers: Linux, Mac, and Windows. But queue managers on IBM i do support connections from MQ Light clients.


When the application teams start churning out new code in Node.js, there’s no reason at all why they cannot leverage your queue managers and exchange messages with existing applications written using the other MQ APIs and languages.


MQ Stays Current

Since its original release on MVS/ESA and OS/400 in the early 1990s, MQ has become a critical part of the IT infrastructure of many of the world’s largest enterprises. The IT landscape has seen massive changes over that time, but progressive enhancements have ensured that MQ is as relevant now as ever.


About the author: Andrew Schofield


