Six Defining Characteristics of z/OS and the Mainframe
IBM Z mainframes and z/OS are important, still carrying out critical back-end processing after nearly six decades. But some things about mainframes seem strange. I mean, why the obsession with batch? And why do sites have so few z/OS systems?
Let’s look at six characteristics of the mainframe and z/OS that help explain what it is—and what it isn’t—and why these characteristics can be a good thing.
1. Big, Not Small
Yep, mainframes are big. And expensive. Always have been.
In the past, big mainframes were the only tool that could do the things companies needed. And most sites could only afford one mainframe, maybe two. In those days, you could only have one MVS (as z/OS was called) system per mainframe. So, you packed all your applications into those one or two MVS systems.
It’s not much different today. Sure, mainframes are smaller, and each can run multiple z/OS systems. But sites still have only a couple z/OS systems at most. I’ve seen z/OS systems with more than a hundred applications. Compare this with Windows and Unix servers, of which many sites have thousands.
This centralized approach has a lot of benefits. It’s far easier to manage the hardware and software. There are centralized configuration parameters, centralized locations for logs and centralized security and access recording. Data and data backups, change management, source code repositories and more are all centralized. One downside: costs and bills are centralized as well.
You may think that this centralization makes mainframes more likely to crash—any one of all those applications could crash or fail and cause mayhem. But the opposite is true. IBM and other vendors have had decades to think of ways to make z/OS the most resilient and secure platform available. And they have.
2. Critical, Not Optional
In 2018, the National Australia Bank’s (NAB) mainframe crashed, locking customers out of their bank accounts for a few hours. NAB paid something like $7.5 million in compensation.
Mainframes have always done critical back-end processing for businesses that simply cannot stop for long periods of time. So, technical development over decades has been making mainframes more resilient. Today, nothing is more reliable than a mainframe.
It doesn’t stop there. Mainframe support staff have been trained for decades to keep mainframes running. If you look at availability statistics in most mainframe sites, mainframes will have fewer outages than other platforms.
The importance of mainframes can be a two-edged sword. Many mainframe sites are hesitant to change the mainframe or put new things onto it. Changes often must go through more rigorous change management processes than on other platforms.
3. Batch, Not Online
z/OS was originally a batch machine. Pre-punched paper cards went in, results came out. New-fangled ideas like terminals and screens came later. So, batch has been a big thing for mainframes from the beginning.
Although online workloads eventually became cool, batch never went away. Mainframes may be big, but they’ve rarely been big enough—users have always wanted more. So, running workloads that are not time-critical in the background has always been a smart idea.
Because batch is important, z/OS has acquired sophisticated tools for managing batch jobs and streams. JES (Job Entry Subsystem) classes can limit the number of jobs running at one time, and WLM (Workload Manager) can determine how much CPU a batch job will get. Other vendors have created fancy batch automation tools: we can start a job at a set time, when another job (or jobs) has completed, or when a file is received. If a job fails, these tools let the right people know automatically.
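The dependency-driven triggering these tools provide can be sketched as a toy scheduler: a job runs only once every job it waits on has completed. This is purely an illustration under invented names (the job names and the `run_stream` function are made up for this example); JES and real batch automation products are far more sophisticated.

```python
# Toy batch scheduler: run each job only after all of its
# predecessor jobs have completed. Illustrative only.

def run_stream(jobs, deps):
    """jobs: list of job names; deps: {job: set of jobs it waits on}."""
    completed = []
    pending = set(jobs)
    while pending:
        # Jobs whose predecessors have all completed are ready to run.
        ready = [j for j in pending if deps.get(j, set()) <= set(completed)]
        if not ready:
            raise RuntimeError("circular dependency in job stream")
        for job in sorted(ready):
            completed.append(job)  # "run" the job
            pending.remove(job)
    return completed

# DAILYBKP waits for EXTRACT and SORTJOB; SORTJOB waits for EXTRACT.
order = run_stream(
    ["EXTRACT", "SORTJOB", "DAILYBKP"],
    {"SORTJOB": {"EXTRACT"}, "DAILYBKP": {"EXTRACT", "SORTJOB"}},
)
print(order)  # ['EXTRACT', 'SORTJOB', 'DAILYBKP']
```

Real products add the other triggers mentioned above (clock times, file arrival) and failure notification on top of this basic dependency idea.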
A big advantage of batch is that z/OS systems can run at 100% CPU capacity; online workloads do what they do, and batch uses the rest.
4. Legacy, Not Brand New
Since the 1970s, IBM has promised that today’s application programs will work on tomorrow’s mainframe. And they’ve kept that promise: I’ve seen programs from the 1970s that still work fine. Mainframe sites still rely on those COBOL programs from the 1980s.
This backwards compatibility can make it harder to implement new technologies and features. For example, we may want to use mainframe data from Windows, but this isn’t easy if that data is in VSAM data sets. We may want a new Java application, but it’s an issue if it needs to work with that 1980s COBOL program.
IBM is on this, regularly adding new features to z/OS. Recent innovations include Docker containers for z/OS, Node.js applications and a Python SDK. It’s easy to think of IBM mainframes as dinosaurs, but they’re continually changing. Innovation can be harder on mainframes, but it’s not impossible.
5. Commercial, Not Mathematical
IBM mainframes are not designed for complex calculations. Mainframes have never been popular with scientists. Even today, data mining and other processing of large quantities of data is usually done on another platform.
IBM mainframes are built for transaction data processing: look at data, change the data. They’re perfect for commercial applications in industries like banking and insurance. Because they’ve been doing this for decades, their technical development has been aimed at transaction performance and efficiency.
For example, IBM achieved a rate of 227,000 transactions per second in 2016, and it’s likely higher now. But more importantly, mainframes can process transactions at this rate reliably. The transaction will work; if it doesn’t, the incomplete transaction is backed out to ensure data integrity.
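That backout behaviour is the classic atomic transaction: either every update in the unit of work commits, or none of them do. Here is a minimal sketch using SQLite (the accounts table and amounts are invented for the example; mainframe subsystems like CICS and Db2 implement this far more robustly):

```python
import sqlite3

# Two accounts with 100 each; table and amounts are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 100)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically: both updates land, or both are backed out."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                  (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()
    except Exception:
        conn.rollback()  # back out the incomplete transaction
        raise

try:
    transfer(conn, "A", "B", 500)  # would overdraw A, so it is backed out
except ValueError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'A': 100, 'B': 100}: the failed transfer left no trace
```

The half-finished debit never becomes visible: the rollback restores both balances, which is exactly the integrity guarantee the article describes.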
6. CISC, Not RISC
Your mobile phone is probably running an ARM processor: a reduced instruction set computer (RISC) CPU. RISC CPUs have only a small number of instructions: around 130 for the ARM A32 instruction set. The idea is that complex tasks are done using multiple simple instructions, allowing CPUs to be smaller and simpler.
IBM mainframes have taken another path, using a complex instruction set computer (CISC) processor. The z16 mainframe has almost 1,250 instructions, and new ones are added all the time: the z16 alone added around 30 instructions over the previous z15.
IBM’s idea is to increase speed by doing complex tasks in a single instruction. For example, the MVST (Move String) instruction copies an entire character string to a different area of memory in one operation.
z/OS and its subsystems can use these instructions to get better performance and use less CPU. The SORTL instruction is an example. Introduced with the z15 mainframe, this instruction sorts data using the IBM Integrated Accelerator for Z Sort. This can reduce sort elapsed times by 40% and CPU usage by 60%.
IBM and other vendors benefit from these new instructions. For example, the PL/I, COBOL and C/C++ compilers accept an architecture level option, telling the compiler whether new, faster instructions can be used. This is one of the benefits of upgrading to newer mainframe models: newer instructions become available.
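The idea behind an architecture level can be sketched as a choice between code paths based on what the target hardware is assumed to support. A loose Python analogy follows; the level numbers and the "accelerated" path are invented for this sketch, and real compilers make the choice once at code-generation time, not at run time as shown here.

```python
# Analogy: select a code path based on the "architecture level" we are
# allowed to assume, the way a compiler selects newer instructions.
# Levels and the "hardware sort" are invented for this illustration.

def sort_records(records, arch_level):
    if arch_level >= 13:
        # Pretend level-13 hardware has a sort accelerator (loosely
        # inspired by SORTL on z15 and later): use the fast path.
        return sorted(records)
    # Older "hardware": fall back to a plain insertion sort.
    out = []
    for r in records:
        i = 0
        while i < len(out) and out[i] <= r:
            i += 1
        out.insert(i, r)
    return out

print(sort_records([3, 1, 2], arch_level=13))  # [1, 2, 3] (fast path)
print(sort_records([3, 1, 2], arch_level=10))  # [1, 2, 3] (fallback path)
```

Both paths give the same answer; the architecture level only changes which machinery produces it, which is why a program compiled for an older level still runs correctly on newer hardware.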
A Robust Environment
Mainframes and z/OS have been doing critical work for a long time. This history has shaped what they are today. It’s why sites have so few z/OS systems, don’t use mainframes for complex mathematical processing and still love batch. It explains why mainframe geeks get excited about the latest mainframe processors and why it seems so hard to get z/OS to do something new. But this history also gives us a resilient, efficient, secure system that continues to reliably support mission-critical workloads.