Mainframes and Cloud Computing

We all know what mainframes are: the best and most secure, scalable, and available computing platform. But what exactly is the cloud? Cloud computing started off as a design idea: on network diagrams, the cloud stood for unspecified routing and hardware that the viewer didn't need to worry about, while the important parts of the diagram sat around it. More recently, the cloud part of the diagram has been filled in and gained in importance. Nowadays, cloud computing is treated like a utility, such as the gas, electricity, or water coming into your house. It's simply on-demand data storage, applications, and processing power that you can turn on as needed (and pay for). So, how do the worlds of mainframe and cloud come together?

Mainframes: The First Cloud Computing Platform

Let’s start with the idea that mainframes were the first cloud computing platform. Mainframers who have been around for a while will remember a time when mainframe processing power was “sold” to other departments within a company as a way of making IT a profit center. In fact, there were computer bureaus that sold processing power and storage to other companies. These departments and companies would use their terminals only from time to time during the week: they would log in from wherever they were, run their jobs, store the updated data, and print off the results. At the end of the month, they would pay for the computing power they had used. That’s very much like the cloud model, where people can use apps or storage whenever they want and pay for what they’ve used. And end users don’t need to worry about backups; that’s left to the service provider.

Cloud computing became popular with distributed systems users offering many things “as a Service,” such as Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). There are public cloud services from AWS, Azure, Google Cloud and IBM Cloud, and there are private cloud networks.

One problem that many companies are facing is “Shadow IT.” This is where departments within an organization are making use of cloud apps, but the central IT team is unaware of what they’re doing. This is not something that would work in a mainframe environment.

Combating Cloud Misconceptions

The thing about mainframes is that they work well. So, why would anyone consider moving to the cloud? Let’s look at applications first. Many CICS and IMS applications were written in COBOL many years ago and have been running pretty well ever since. There’s often a feeling among some executives that rewriting those applications in a more modern programming language would mean they could be maintained by any recent university graduate rather than by someone much older who is looking to retire soon. They argue that this makes sense for the future of the company, and that the new application and its data could live in the cloud without the worry of mainframe maintenance costs.

The argument against that, as I’m sure you know, is that the application is probably complex and would be hard to rewrite successfully in the time available. Secondly, in the age of APIs, it’s possible to plug into the existing application from mobile devices or the cloud and add extra features to modernize it. So, cloud computing can become part of a modernization strategy, just not the whole thing.
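To make that concrete, here’s a minimal sketch, in Python, of a cloud or mobile front end calling an existing CICS or IMS transaction that has been exposed as a REST service (for example, through something like z/OS Connect). The host name, path, and response fields are hypothetical placeholders rather than a real interface.

```python
# Minimal sketch: a cloud or mobile front end calling an existing COBOL/CICS
# transaction exposed as a REST API (e.g., via z/OS Connect).
# The host name, path, and JSON field names are hypothetical placeholders.
import requests

API_BASE = "https://api.example.com/accounts"  # hypothetical endpoint fronting the CICS app


def get_account_balance(account_id: str, token: str) -> float:
    """Call the REST wrapper around the existing inquiry transaction."""
    response = requests.get(
        f"{API_BASE}/{account_id}/balance",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["balance"]  # field name is an assumption


if __name__ == "__main__":
    print(get_account_balance("1234567890", token="example-token"))
```

The existing COBOL logic stays where it is; only the thin REST layer and the callers change, which is what makes this an incremental modernization step rather than a rewrite.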

A second argument for migrating to the cloud is that many mainframers are getting toward (or even past) retirement age, and there aren’t enough younger people to take over maintaining the old programs still in use, or even to maintain the mainframe itself. It’s true that few universities are training people in mainframe skills or running mainframe courses. However, tools like Zowe, part of the Open Mainframe Project, make mainframes accessible and controllable by non-mainframe specialists, who can treat the mainframe like any other server.
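To give a flavor of what treating the mainframe like any other server looks like, here’s a minimal sketch that lists data sets over the z/OSMF REST interface that Zowe’s CLI and SDKs build on. The host, credentials, and high-level qualifier are assumptions, and the exact response layout may differ on your system.

```python
# Minimal sketch: listing z/OS data sets over the z/OSMF REST files API,
# the kind of interface Zowe's CLI and SDKs build on.
# Host, credentials, and the high-level qualifier are placeholders.
import requests

ZOSMF_HOST = "https://zosmf.example.com"   # hypothetical z/OSMF endpoint
CREDENTIALS = ("ibmuser", "secret")        # placeholder credentials


def list_data_sets(hlq: str) -> list[str]:
    """Return data set names matching the given high-level qualifier."""
    response = requests.get(
        f"{ZOSMF_HOST}/zosmf/restfiles/ds",
        params={"dslevel": f"{hlq}.*"},
        headers={"X-CSRF-ZOSMF-HEADER": ""},  # z/OSMF REST services expect this header
        auth=CREDENTIALS,
        timeout=30,
    )
    response.raise_for_status()
    # The response is assumed to carry an "items" list with "dsname" entries.
    return [item["dsname"] for item in response.json().get("items", [])]


if __name__ == "__main__":
    for name in list_data_sets("IBMUSER"):
        print(name)
```

Nothing here requires a 3270 emulator or green-screen knowledge, which is exactly the point: the mainframe becomes just another HTTPS endpoint to a newer generation of developers.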

The third argument is that migrating to the cloud keeps costs down; mainframes are too often viewed as expensive. What’s often not realized is that mainframes need far fewer people to run applications and services than non-mainframe platforms, which adds up to a huge saving. Only some of that saving is offset by hardware and software costs. It’s just that, with mainframes, all these costs appear on the same budget sheet.

The fourth argument is that mainframe users have to wait a long time for updates to applications, whereas cloud users’ apps are updated all the time. To a large extent this is true, but the introduction of Agile working and DevOps means that new releases are becoming available more frequently, and the use of DevOps is bound to grow. In addition, the cloud apps that use APIs into mainframe applications can themselves be updated to enhance the user interface or add functionality as required.

Cloud Providers and Tools

Looking at cloud providers, IBM has IBM Cloud, built on SoftLayer Technologies, which it acquired in 2013. It’s possible to run Z Development and Test (ZD&T), a mainframe emulator, as a virtual layer on this cloud platform. This became the basis of the IBM Z Trial Program, which allows people to select from 22 mainframe-oriented trials and use them for three days. Users will probably connect from Windows or Linux machines at their end, using a VLAN, and can experience what using a mainframe is really like.

IBM Cloud also offers IBM Cloud Paks, Red Hat OpenShift on IBM Cloud, and IBM Cloud for VMware Solutions. IBM Cloud Paks are AI-powered software for hybrid cloud that can help users implement intelligent workflows in their business and accelerate digital transformation. Because they are built on Red Hat OpenShift, users can develop applications once and deploy them anywhere, on any cloud. Cloud Paks are described as enterprise-ready, containerized software solutions for modernizing existing applications and developing new cloud-native apps that run on Red Hat OpenShift. IBM Cloud Transformation Advisor can be used to help organizations with their move to the cloud. Red Hat OpenShift on IBM Cloud helps enterprises start the cloud migration process by creating cloud-agnostic containerized software. Developers can containerize and deploy large workloads in Kubernetes quickly and reliably.

Red Hat’s open-source platforms and tools are designed to work in a hybrid cloud environment. The thinking is that a consistent platform, one that can be scaled up or down and that is stable and secure, can exist across a variety of cloud deployments.

It could be argued that it makes sense to move data, particularly data stored on tape, to the cloud. It then becomes easy to use the cloud for backups, archives, disaster recovery, and space management. Plus, this data can then be used for analytics, which probably wouldn’t be possible if it were still stored on tape. So, for many mainframe sites, it’s probably worth exploring the case for this kind of cloud usage.
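As a simple example of that kind of usage, here is a minimal sketch that copies an exported archive file (say, data unloaded from tape) into S3-compatible object storage using the boto3 client. The bucket name, object key, local path, and choice of storage class are assumptions for illustration only.

```python
# Minimal sketch: pushing an exported archive (e.g., data unloaded from tape)
# into S3-compatible cloud object storage for backup or archive purposes.
# The bucket, key, and local path are placeholders.
import boto3


def archive_to_cloud(local_path: str, bucket: str, key: str) -> None:
    """Upload a local archive file to object storage in an archival storage class."""
    s3 = boto3.client("s3")
    s3.upload_file(
        Filename=local_path,
        Bucket=bucket,
        Key=key,
        ExtraArgs={"StorageClass": "GLACIER"},  # cheaper tier for rarely read archives
    )


if __name__ == "__main__":
    archive_to_cloud(
        "exports/payroll-archive-2024.dump",   # hypothetical exported data set
        "my-archive-bucket",
        "mainframe/2024/payroll-archive.dump",
    )
```

Once the data sits in object storage rather than on tape, it can be restored on demand for disaster recovery or fed into cloud analytics services, which is the real payoff of this kind of migration.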

It’s worth noting that, according to a recent Forrester survey, 85% of companies surveyed list on-premises infrastructure as a critical part of their hybrid cloud strategy. It would seem that mainframes aren’t going away any time soon.

In conclusion, it seems that a hybrid of cloud apps and storage that’s plugged into a workhorse mainframe makes a lot of sense at the moment. Utilizing the best of each platform would be a sensible way to go forward.