
Three Compelling Drivers for Implementing a High Availability Solution on an IBM i Cloud with MIMIX

by Victoria Mack, May 02, 2016

The emergence of the IBM i Cloud, along with dramatic changes in costs, has made downtime-reducing solutions accessible to companies of all sizes.

Until recently, high availability solutions for IBM Power Systems servers running IBM i were reserved mostly for larger enterprises with on-premises or dedicated hosting environments. With the emergence of the IBM i Cloud, high availability is now dramatically easier to use and far less expensive, putting it within reach of organizations of all sizes. Today, just about anyone running an IBM i environment can afford the “luxury” of real-time, offsite data protection, as well as rapid and complete data recovery in the cloud.

Fortunately, this shift is occurring just as downtime is causing more disruption and expense for businesses than ever before. With technology costs dropping and downtime costs skyrocketing, all organizations have a huge incentive to evaluate high availability technology.

This white paper is a collaborative effort between Connectria Hosting, a pioneer in the development of the IBM i Cloud, and Vision Solutions, the leader in high availability and disaster recovery solutions, including MIMIX, the standard for complete, scalable HA/DR protection for IBM i. It reviews the core causes and costs of both planned and unplanned downtime and then provides a detailed discussion of current options for IBM i high availability and disaster recovery in the cloud. Most importantly, as you read, you will learn why true HA and DR protection are now within reach of even the smallest businesses.

RPO vs. RTO

Before looking more closely at the cost factors of high availability (HA)—and why each has changed so significantly—it is helpful to first understand the concepts of recovery time objectives (RTOs) and recovery point objectives (RPOs).

The graph in Figure 1 shows a variety of common IBM i business continuity technologies in which one axis indicates the time it takes to recover data after a failure/disaster (RTO), and the other axis indicates the completeness of data that is ultimately recovered (RPO).

 


Figure 1: RPO and RTO and the spectrum of IBM i DR solutions/strategies

At the low end of the disaster recovery (DR) spectrum is tape backup (basic availability), and at the high end is high availability (HA), a process more technically known as logical data replication-plus-switchover (LDR+Switch), which rapidly moves users and processes to a fully mirrored secondary server so that it can assume all or most of the functions of the production server.
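To make the two metrics concrete, the short Python sketch below works through a hypothetical outage timeline. The timestamps are illustrative placeholders, not figures from the white paper; the point is simply that the recovery point is measured backward from the failure to the last recoverable copy of the data, while the recovery time is measured forward from the failure to the moment users are working again.

from datetime import datetime

# Hypothetical outage timeline (illustrative values only).
last_recoverable_copy = datetime(2016, 5, 2, 3, 0)    # e.g., the previous night's tape backup
failure_time          = datetime(2016, 5, 2, 14, 30)  # production server goes down
service_restored      = datetime(2016, 5, 3, 9, 0)    # users back online on a recovery system

# Recovery point actually achieved: the window of data at risk of loss.
recovery_point = failure_time - last_recoverable_copy

# Recovery time actually achieved: how long the business was without service.
recovery_time = service_restored - failure_time

print(f"Recovery point (data at risk): {recovery_point}")  # 11:30:00, roughly half a day of transactions
print(f"Recovery time (outage length): {recovery_time}")   # 18:30:00, most of a business day

With logical data replication plus switchover, the last recoverable copy typically trails production by only seconds and the switchover itself takes minutes, so both values shrink toward the high-availability end of the spectrum shown in Figure 1.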

Unfortunately, many companies perceive HA technology as so much more expensive than basic disaster recovery protection that they consider it “out of reach” in terms of both cost and complexity. But, in line with most other computing technologies, the range of options between the most basic DR protection and the high-end, fault-tolerant, enterprise-scale solutions has widened, and overall, the cost of all the options has come down, radically in some cases. When the option of an HA solution delivered via a hosted cloud is introduced, the price points become decidedly more attractive.

High Availability Cost Factors

High availability is certainly not “cheap” when you consider all of the components that are needed. What has changed is how the cost of each of these factors—each for its own reasons—has dropped. Here are the major components that contribute to the cost of an HA solution.

Hardware—A second IBM Power Systems server is needed, with enough capacity to accommodate the storage of replicated data and potential production demands. For instance, depending on how fully you want to run your applications from your backup environment during planned and unplanned downtime, this server may need to handle the same scale of transaction volumes and devices supported by the production machine. If less than full capability is acceptable during downtime, adjustments can be made. But in the end, a second server, ready to run, is a must.

Communication Bandwidth—If the second Power Systems server is located offsite, which is necessary for true disaster recovery protection, then sufficient communication capacity (bandwidth) is needed to accommodate the amount of data flowing to it from the production machine. This includes the I/O processing capacity of the backup server and the communication lines between sites. A back-of-the-envelope sizing sketch follows this list of components.

High Availability Software—This component executes, manages, and monitors the replication or mirroring of designated business-critical data to the backup server. It also provides the ability to efficiently move users and processes to the backup server during downtime events. In addition to the initial purchase cost for this software, annual maintenance contracts and installation and training costs must be considered.

High Availability Management—As with any other infrastructure software or system, some level of staff time is required each day to monitor and manage the data replication processes to ensure that the mirrored data is accurate and usable when needed. In part, the amount of time needed for this task depends on the scale of your environment. But the self-managing capabilities of the HA software can have an even bigger impact. Even large-scale HA environments can be easy to manage, with the right software.
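As a rough illustration of the bandwidth factor mentioned above, the Python calculation below estimates the sustained line speed needed to keep a replicated copy current. The daily journal-change volume, peak-hour share, and compression ratio are hypothetical placeholders, not sizing guidance from Connectria or Vision Solutions; substitute the figures from your own environment.

# Back-of-the-envelope replication bandwidth estimate (hypothetical inputs).
daily_change_gb   = 40.0   # data changed (journaled) per day on the production server
peak_hour_share   = 0.30   # fraction of the daily change generated in the busiest hour
compression_ratio = 0.50   # portion of the data still on the wire after compression

BITS_PER_GB = 8 * 1024 ** 3

# Average requirement if the change volume were spread evenly over 24 hours.
average_mbps = daily_change_gb * compression_ratio * BITS_PER_GB / (24 * 3600) / 1e6

# Requirement during the busiest hour, so replication does not fall behind at peak.
peak_mbps = daily_change_gb * peak_hour_share * compression_ratio * BITS_PER_GB / 3600 / 1e6

print(f"Average: {average_mbps:.1f} Mbps, busiest hour: {peak_mbps:.1f} Mbps")
# With these inputs: roughly 2 Mbps on average, but about 14 Mbps at peak.

Sizing the line for the peak hour rather than the daily average is what keeps the mirrored copy within seconds of production during the busiest processing windows.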

So what has changed? Why should you reconsider whether you can justify an investment in a true high availability solution? The white paper walks through three compelling reasons.

Want to find out more? Download the free white paper, “Three Compelling Drivers for Implementing a High Availability Solution on an IBM i Cloud with MIMIX,” from the MC White Paper Center.

About the Author: Connectria Hosting





Also in MC Press Articles

Bluemix: A Viable Option for Power Customers

by Victoria Mack, August 19, 2016

Just what is Bluemix, and what could it mean for you? An interview with an IBMer reveals the answers.

Written by Steve Pitcher

Last week, I sat down with Adam Gunther, Director of Cloud Developers Services at IBM, to talk about IBM Bluemix. I told Adam up front that I wasn’t a developer, but I wanted him to explain just exactly how my small-to-medium-sized business with an investment in on-premises infrastructure could really take advantage of Bluemix. I wasn’t disappointed.

Continue Reading →

Midrange MQ in an Open-Source World

by Victoria Mack, August 19, 2016

MQ on IBM i continues to adapt to the needs of modern IT environments.

Written by Andrew Schofield

IBM MQ has been a familiar part of the corporate IT landscape for over 20 years. It’s been through a few name changes, but the fundamental idea of using asynchronous messaging to decouple communication between applications is as important now as it has ever been. Of course, over such a long period of time, there have been huge changes—in particular, the way that developers work using the Internet and open-source, and the rise of cloud computing. Therefore, we at IBM are doing many things in MQ to make sure that existing systems remain relevant and able to interact with the latest tools and platforms.

Continue Reading →

Using Scope in Linear-Main Programs to Create More Stable Applications

by Victoria Mack, August 19, 2016

Linear-main RPG programs eliminate the RPG logic cycle and add new levels of variable scoping to protect your code from bugs down the road.

Written by Brian May

While I am no expert in the RPG logic cycle, I have had to deal with it in older applications over the years. Most RPG developers have dealt with a logic cycle program at least once. I can honestly say I have never written a new logic cycle program, but I have seen others in the community doing it. This article is not intended to start a religious war about cycle programming; there are some who will never give it up. Instead, this article will demonstrate how to create a program without the logic cycle and concentrate on what I think is a very useful benefit of using linear-main procedures in programs.

Continue Reading →