The History of Business Computing Part 1: Counting the Benefits

How did we go from the joyful freedom that personal, commodity computing brought us to unidimensional cynicism about industrial-strength computing, valuing everything by gross cost without considering the nature and amount of what was actually being delivered?
 
For that matter, how did we get to a place where requests for proposals (RFPs) and cost-benefit analyses seem almost intentionally to omit the very requirements that would immediately point to the IBM Z mainframe as the optimal environment for world-class workloads?
 
In this first of a two-part series, we’ll examine the background of how we got where we are, and begin to enumerate some strong examples of what we need to specify so that cost-benefit analyses and RFPs more accurately reflect what is available, along with its relevance and value.

‘The Unknown Computer’  

The world of consumer computing has been the predominant paradigm for most perceptions since at least the beginning of the 1980s, when TIME magazine declared the personal computer “Machine of the Year.” During the four decades since then, two generations have grown up with the idea that a computer was what was on your desk or lap. And while some important innovations such as advanced GUIs have emerged as part of that environment, the foundational characteristics of production-quality computing have been substantially ignored, to the point that they’re not even part of the conversation.
 
Today, cloud computing is the latest way we try to make any such details someone else’s problem. All production and quality considerations are effectively pushed into the realm of the unknown computer, while antivirus and identity protection products are the only pervasive nod to local responsibility.
 
If we were dealing with automotive machinery, this wouldn’t be a problem, because people implicitly understand the difference between personal transportation and industrial-strength equipment. But the relative novelty of computing, and the consumer focus we have grown up with, have eclipsed the considerations that matter for platforms supporting business processing, which must meet far more stringent requirements than a commodity box with no foundational depth or solidity can satisfy.

Better Defining Mainframe Benefits  

How sad and strange, then, that the very strengths of the definitive business computing platform have come to be seen as weaknesses because they force people to think too hard in order to properly appreciate them. The time has come to start telling and reminding people of the mandatory strengths that differentiate world-class business computing from commodity consumer electronics.
 
Take the data that follows in this article and in part two, and make it part of your daily discourse. We have to get the word out about the platform that is foundationally capable of handling the world’s most critical workloads, and the differentiators that are not just cost-benefit winners, but in many cases unique values that are not even aspirational in commodity computing platforms.
 
It is perhaps an admission of how long a journey we have ahead of us in re-establishing the irreplaceable value of IBM Z against commodity alternatives that I must confess the list that follows is merely a representative sample, nowhere near comprehensive.

IBM Z Strengths Spelled Out

To make it easier to digest and retain, I’m dividing these strengths up into the following categories:

  • Platform capacity and raw strengths
  • Unique features and services
    • Security-specific aspects
    • PDS and PDS/E-related capabilities
    • Other goodies
  • Design, principles, architecture
  • Culture, ecosystem, attitudes
  • Role in global economy

You’ll note that this is a sort of upside-down pyramid order, with the foundational aspects following the technical specifics. In part, that’s to reflect the fact that we take that approach for granted with commodity computing platforms, which are historically constructed like inverted pyramids: a light OS and hardware platform at their foundations, followed by decades of piled-on additional features, resulting in what might be described as a sand-pile architecture.
 
To illustrate, consider UNIX, the foundation of all modern POSIX-compliant UNIX-like environments, including Linux and macOS. It was designed to be a light OS, capable of handling one workload at a time (hence the “UNI,” as distinct from the MULTICS architecture on which its authors learned OS concepts). No matter how much hardening and elaboration has been done, that basic design concept has continued to underlie every such OS ever since.
 
IBM Z Capacity
 
One of the first differentiators to note about IBM Z is its sheer, cavernous capacity for data and for numerous concurrent workloads on a single system, even more so on a sysplex. No matter how many low-ceiling boxes you string together, they can’t coordinate and run massive workloads together the way IBM Z can. Of course, it’s typical for a non-Z platform to host one application, or more often part of one application, on a single image. Contrast that with the capacity to run hundreds or thousands of concurrent applications in a single z/OS environment, often sharing data and intercommunicating through means such as cross-memory services and WebSphere MQ.
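To make that intercommunication a little more concrete, here’s a minimal, hypothetical sketch of what the messaging side can look like from an application’s point of view, written in Java against the IBM MQ classes for JMS. The host, port, channel, queue manager and queue names are placeholders I’ve invented for illustration, not anything prescribed by the platform.

    import javax.jms.Destination;
    import javax.jms.JMSContext;
    import javax.jms.JMSException;

    import com.ibm.msg.client.jms.JmsConnectionFactory;
    import com.ibm.msg.client.jms.JmsFactoryFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    // Hypothetical producer: puts one message on a queue hosted by a z/OS queue
    // manager. All connection details below are invented placeholders.
    public class PutOneMessage {
        public static void main(String[] args) throws JMSException {
            JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
            JmsConnectionFactory cf = ff.createConnectionFactory();
            cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "zos.example.com"); // placeholder
            cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);                      // placeholder
            cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "APP.SVRCONN");       // placeholder
            cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM01");        // placeholder
            cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

            try (JMSContext context = cf.createContext()) {
                Destination queue = context.createQueue("queue:///APP.REQUEST.QUEUE");
                context.createProducer().send(queue, "order 12345: ship 2 widgets");
            }
        }
    }

The consuming application, whether CICS, IMS, batch or another Java program on the same system, simply gets the message from that queue; the coordination happens inside the queue manager rather than across a fragile web of point-to-point connections.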
 
Enabling that is a combination of 64-bit addressability, industry-leading virtual memory, and huge amounts of real memory backing it up, not to mention many processors working together in a single coordinated environment. And of course the Workload Manager (WLM) makes it all sing, even at 100% utilization, while other platforms are on their knees once they get over 30% busy.
 
But memory capacity, workload capacity and multiprocessing power aren’t close to the entire story. Data throughput allows the mainframe to handle vast amounts of data—for example, real-time processing of nearly all credit card transactions on Earth—arriving and leaving with trivial response time.
 
One consequence of that profound capacity in a single place is the ability to host massive numbers of concurrent OS images and containers—often running Linux—which only use the CPU when performing tasks, and are otherwise dormant. This allows for a wave-cancellation effect: peaks in one workload’s demand are absorbed by lulls in the others’, so running tasks have massive capacity available without that capacity being wasted when it isn’t needed.
 
Cost Savings
 
Speaking of power and size, one way the mainframe is overwhelmingly underwhelming is in its physical demands. Not only does it use less electrical power per unit of work, and weigh less, it now even takes less space, with the option of using no more than a single raised-floor tile for a z15, a footprint I’ve nicknamed the “Cinderella footprint.”
 
Not only is the mainframe the least expensive platform per unit of raw capacity once you are actually using the capacity you’re paying for, and not only is its full cost knowable in a way no other platform’s is, but, as we’ll see, some of the additional features included at that price point are not even considerations on other platforms.
 
But even that comparison assumes you don’t already have a mainframe and are weighing new workloads on an apples-to-apples basis. The fact is, the incremental cost of adding new workloads to a mainframe environment you already have is microscopic compared to the all-in price of a new production workload on any other platform: acquiring new hardware, installing and configuring it, licensing additional software, and all the other considerations that come with a brand-new set of boxes. None of that is relevant if you’re simply adding more processing to an existing Central Electronic Complex (CEC).
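To illustrate nothing more than the shape of that comparison, here’s a deliberately simplistic Java sketch. Every figure in it is a made-up placeholder rather than a real price, and actual mainframe and distributed cost models are far more nuanced, but it shows why the incremental question is the one worth asking.

    // Deliberately simplistic, purely illustrative comparison. Every number is a
    // placeholder, not a quote or a benchmark; only the shape of the calculation
    // matters: incremental cost on an existing CEC vs. all-in cost of new boxes.
    public class IncrementalCostSketch {
        public static void main(String[] args) {
            // Hypothetical new workload added to an existing mainframe: only the
            // marginal capacity and software charge applies, because the hardware,
            // operating system, tooling and operations are already in place.
            double addedMainframeCharge = 40_000;            // per year, placeholder

            // The same hypothetical workload on a brand-new distributed footprint.
            double newServersAndStorage = 120_000;           // placeholder
            double installAndConfiguration = 30_000;         // placeholder
            double additionalSoftwareLicenses = 80_000;      // placeholder
            double ongoingOperationsAndFacilities = 60_000;  // per year, placeholder

            double allInNewPlatform = newServersAndStorage + installAndConfiguration
                    + additionalSoftwareLicenses + ongoingOperationsAndFacilities;

            System.out.printf("Incremental cost on the existing CEC: %,.0f%n", addedMainframeCharge);
            System.out.printf("All-in cost on a new platform:        %,.0f%n", allInNewPlatform);
        }
    }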

Unique Features and Services

The number of unique abilities that z/OS offers that aren’t even in the aspirational imagination of other platforms is nearly limitless. Here are some definitive differentiators:

Security-Specific Aspects 

I am sorry to say that most mainframe shops are hackable. But I’m happy to affirm that they don’t have to be. IBM’s statement of integrity, which has been around since the early 1970s, affirms that the mainframe is designed to do only what you specifically configure it to do. You just have to have the time, the will, and the expertise to configure it that way. And that includes ensuring your authorized program facility (APF) libraries are tightly secured, so that only genuinely trusted programs can run from them and perform system-level functions.
 
I am also sorry to say that many people and organizations rely on security by obscurity to protect them from the gaps that they should be closing. But I’m happy to recognize that there’s nothing outside of the mainframe that can touch any of the three External Security Managers (ESMs) that run on z/OS: IBM’s Resource Access Control Facility (RACF), Broadcom’s CA Access Control Facility 2 (CA ACF2), and Broadcom’s CA Top Secret Security (CA TSS).
 
While these products don’t address every configuration issue on the mainframe, the policy-based external security they offer for system entry validation (logging on) and resource access control are definitive and unparalleled.
 
And, among many other strengths, they allow for the logging of a wide range of security matters, including logons and logoffs, invalid access attempts, and auditing of sensitive resources and IDs. This can be coordinated with System Management Facilities (SMF) data and even sent off-platform to a security information and event management (SIEM) solution for real-time enterprise security event monitoring.
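To give a feel for what “sent off-platform” can look like, here’s a hypothetical Java sketch that ships a single security event to a SIEM collector as a syslog-style line over TCP. The host, port and event fields are invented for illustration, and in practice this forwarding is usually handled by dedicated SMF- and ESM-aware tooling rather than hand-rolled code.

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.time.OffsetDateTime;
    import java.time.format.DateTimeFormatter;

    // Hypothetical example: push one security event to a SIEM collector as an
    // RFC 5424-style syslog line over TCP. Host, port and field values are
    // invented placeholders.
    public class SiemForwarderSketch {
        public static void main(String[] args) throws Exception {
            String siemHost = "siem.example.com";  // placeholder collector address
            int siemPort = 514;                    // placeholder TCP syslog port

            String timestamp = OffsetDateTime.now()
                    .format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
            // <110> = syslog facility 13 (log audit), severity 6 (informational)
            String event = "<110>1 " + timestamp + " ZOS1 ESM - - - "
                    + "user=JSMITH event=INVALID_LOGON resource=TSO attempts=3";

            try (Socket socket = new Socket(siemHost, siemPort);
                 OutputStream out = socket.getOutputStream()) {
                out.write((event + "\n").getBytes(StandardCharsets.UTF_8));
            }
        }
    }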
 
Stay tuned for Part 2, when we complete this review of definitive mainframe differentiators.