In Part 1 of this article, I set the stage for recognizing that the IBM Z mainframe has many relevant differentiators that need to be specifically accounted for in RFPs and cost-benefit analyses in order to create a fair playing field, one where the implicit strengths of the mainframe are not simply assumed to be part of other platforms.
In Part 2, I’ll conclude with the rest of that representative sample, beginning with the many ways we use our libraries.
PDS and PDS/E Libraries
The first thing the mainframe does differently and better is Partitioned Data Set (PDS) and Partitioned Data Set/Extended (PDS/E) libraries. UNIX/Linux and Windows folks will wave multilevel recursive directory trees with long names in your face, but no matter how much they fiddle with paths and sticky bits, they can’t touch the deep versatility of these powerhouses of processing.
Of course, at first glance, PDS and PDS/E libraries are just one-level-deep directories with member names a maximum of eight characters long. But what z/OS has done with them goes well beyond the basic pathing capabilities of other platforms. The first thing is concatenation: the ability to treat multiple libraries, grouped in a specific order, as if they were a single data set, with the first in the group being the first searched for a given member, and so on down the list until the member is found. (Note that some data sets which are not libraries may also be concatenated together and treated as a single data set.)
The ability, in JCL or dynamically, to concatenate a set of PDS and PDS/E libraries under the same allocated file (i.e. DD) name, not just for executables but for any purpose, such as text members (e.g. JCL, CLIST, REXX, configuration), and to do so for any number of different file names in a single executing step, is uniquely powerful. One fun way to play with this and appreciate it more is, under ISPF, to issue the “TSO ISRDDN” command.
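For example, here is a minimal JCL sketch (the data set names are illustrative) that concatenates three CLIST libraries under a single DD name, searched in the order listed:

  //* Searched top to bottom: the first library containing the member wins
  //SYSPROC  DD DISP=SHR,DSN=USER01.PRIVATE.CLIST
  //         DD DISP=SHR,DSN=TEAM.SHARED.CLIST
  //         DD DISP=SHR,DSN=SYS1.SHARED.CLIST

A member in USER01.PRIVATE.CLIST quietly overrides a same-named member further down the list, which is exactly how personal and test versions get layered over shared production ones.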
In some ways the ultimate concatenations are the PROCLIBs, LINKLIST and LPALIBs, which give system-wide access to JCL PROCs and load modules (i.e. executable programs) that can be run from anywhere, and which may even be kept in memory, in the Link Pack Area (LPA), for any task to run without the need to locate or load them first.
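As a taste of how that is set up, here is a PARMLIB PROGxx sketch (the library, list and module names are hypothetical):

  /* Make OUR.PROD.LOADLIB searchable system-wide */
  LNKLST ADD NAME(LNKLST00) DSN(OUR.PROD.LOADLIB)
  /* Keep one heavily used program resident in the Link Pack Area */
  LPA ADD MODNAME(PAYPGM1) DSNAME(OUR.PROD.LOADLIB)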
Overlapping with those concatenated libraries are authorized libraries: designated data sets, registered with the Authorized Program Facility (APF), that are trusted to contain programs providing system-level functions. By specifying exactly which data sets are allowed to hold such programs, greater security control is maintained.
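A corresponding PROGxx sketch (names again hypothetical) that adds a library to the APF list:

  /* Trust OUR.PROD.AUTHLIB to contain authorized programs */
  APF ADD DSNAME(OUR.PROD.AUTHLIB) VOLUME(PRD001)

The same change can also be made on a running system with the SETPROG APF operator command.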
SMF, GDGs and Other Goodies
Then there’s System Management Facilities (SMF): detailed accounting-type records that can be created to keep track of a wide range of metrics on z/OS, from resource usage, to security activities and events, to basically anything else that happens on the mainframe that you might want to know about in the future. This data drives many of the unique abilities of the mainframe, from chargeback and capacity planning, to security reporting, to storage, network and other management, to automation and performance monitoring. And the system logger, which is in some ways descended from SMF and in other ways became its new optional destination, has extended that ability to allow logging of a wide range of data, optionally consolidating it across a sysplex or even forwarding it to an off-platform Security Information and Event Management (SIEM) solution.
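To give a flavor of how that data gets used, here is a sketch of the classic SMF dump job (the data set names are illustrative) that offloads job-accounting records (type 30) and security records (types 80 to 83) for reporting:

  //SMFDUMP EXEC PGM=IFASMFDP
  //SYSPRINT DD SYSOUT=*
  //DUMPIN   DD DISP=SHR,DSN=SYS1.MAN1
  //DUMPOUT  DD DISP=(NEW,CATLG),DSN=OUR.SMF.DAILY,
  //            UNIT=SYSDA,SPACE=(CYL,(50,50))
  //SYSIN    DD *
    INDD(DUMPIN,OPTIONS(DUMP))
    OUTDD(DUMPOUT,TYPE(30,80:83))
  /*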
Not to be confused with SMP, now SMP/E: System Modification Program/Extended. While not every mainframe product can be installed using this system, it’s the default, and while it’s not exactly a wizard (and has sometimes needed human wizards to make proper use of it), it’s the foundation of most software maintenance on the mainframe and a key aspect of ensuring quality installations.
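While SMP/E jobs can get elaborate, the heart of routine maintenance is short. A minimal sketch, assuming a hypothetical CSI name and target zone:

  //SMPE    EXEC PGM=GIMSMP,PARM='CSI=OUR.GLOBAL.CSI'
  //SMPCNTL  DD *
    SET BOUNDARY(GLOBAL).      /* point at the global zone */
    RECEIVE SYSMODS HOLDDATA.  /* take delivery of fixes and their holds */
    SET BOUNDARY(TGT1).        /* switch to a target zone */
    APPLY PTFS CHECK.          /* trial run: report what would change */
  /*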
And then there’s Generation Data Groups (GDGs): the ability to have multiple generations of a given data set referred to by relative number, or concatenated together as a single data set, just by having the same data set name prefix followed by GxxxxVyy (where x and y are decimal digits) for each generation’s name. You can dynamically allocate, use and even delete generations by substituting “(0)”, “(+1)”, “(-1)” and so on for the last part of the data set name, not to mention specifying the maximum number of generations allowed and what is to happen when that number is exceeded.
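To make that concrete, here is a sketch (the base name, limit and program name are hypothetical) that defines a GDG base with IDCAMS and then references generations by relative number:

  //DEFGDG  EXEC PGM=IDCAMS
  //SYSPRINT DD SYSOUT=*
  //SYSIN    DD *
    DEFINE GDG (NAME(OUR.DAILY.EXTRACT) LIMIT(7) SCRATCH)
  /*
  //* Later, in the nightly job: write a new generation, read back
  //* the one created the night before
  //NIGHTLY EXEC PGM=EXTRACT1
  //NEWGEN   DD DSN=OUR.DAILY.EXTRACT(+1),DISP=(NEW,CATLG),
  //            UNIT=SYSDA,SPACE=(CYL,(10,10))
  //PRIOR    DD DSN=OUR.DAILY.EXTRACT(0),DISP=SHR

Here LIMIT(7) keeps a rolling week of generations, and SCRATCH tells the system to delete the oldest one when an eighth arrives.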
Then there’s seamless performance through memory. One of the great strengths of the mainframe since the earliest days is the ability to store a wide range of data in shared memory that is accessible from every running address space. That even includes the ability to explicitly establish cross-memory communication between two address spaces. Add in a sysplex coupling facility, and additional cross-memory activities can occur between separate z/OS images that may not even be running on the same physical piece of hardware.
And one of the cool things you can do with cross-memory services and with sysplexes is in-memory TCP/IP networking between OS images (including Linux, z/VM and more) using HiperSockets: TCP/IP connections that never leave the Central Electronics Complex (CEC). Not only does that bring massive performance advantages compared to sending data outside of the computer, it greatly enhances security for the same reason.
IBM Z Design, Principles and Architecture
Earlier, I pointed out that the IBM Z mainframe has a massive capacity for workloads and can run at 100% busy without degradation. I also noted the mainframe’s profound throughput. But there’s another dimension at work here: dedicating the main CPUs to business processing by offloading utility tasks, such as data movement, to secondary processors and other devices such as controllers. This architectural advantage allows the full power of IBM Z CPUs to be devoted to workloads rather than constantly interrupted to talk to hardware and shepherd data back and forth.
This centralized perspective on processing extends to the deeper philosophy of centralized computing. Concentrating workloads on a single CEC yields all the above-mentioned performance advantages, plus management advantages. Fewer technologists are required to manage a much larger amount of workload thanks to consistent configuration (and far fewer configuration points); that consistent configuration is an enabling factor in being able to do regular disaster-restore tests; and it brings a wide range of manageability advantages that constantly shifting commodity configurations prevent on distributed and consumer platforms.
Of course, one of the implications of this is the advantage it brings to outsourcers who can manage many customers very efficiently using a relatively small set of configurations.
It likewise brings advantages in keeping sandbox and development/testing environments consistent with the configuration of production environments, reducing the prospects of configuration mismatches that could otherwise lead to invalid testing.
Many of these benefits hearken back to the original design principles that had their roots in the mandate for deep frugality given the paucity and expense of resources on early computers, which were perpetuated by the constantly increasing demands that have continued to keep pace with the capacities of even the largest mainframes. Every bit has to pay its own way, and there’s no room for waste or bloatware.
And yet, there is one spectacular inefficiency that the mainframe was designed to offer from the very beginning: decimal data processing. It allows character-based (zoned) decimal numbers to be processed as numbers, or packed as two decimal digits into a single byte. While that clearly wastes 156 of the 256 possible values a byte can hold (only the hexadecimal values 00 to 99 represent valid digit pairs), it has ensured throughout the decades that financial data never suffered the artifactual loss of value that comes from representing decimal fractions as binary ones. And, like everything else on the mainframe, this technical advantage hasn’t stood still, with modern decimal vector math instructions providing orders-of-magnitude advances in processing speeds.
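A small worked illustration (the values are arbitrary; the encoding is standard packed decimal):

  0.1 (decimal) = 0.0001100110011... (binary, repeating forever),
                  so binary floating point can only ever approximate it
  +1,234,567   -> X'1234567C' in packed decimal: two digits per byte,
                  with the sign (C = positive) in the final half-byte

Every digit of the packed value is exact, which is why a ledger full of them never drifts by a fraction of a cent.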
One other way the IBM Z platform hasn’t stood still has been its capacity to adopt and adapt the best of business-enabling innovations from other platforms. Everything from the latest machine-language instructions to both UNIX System Services and Linux, along with numerous other advances, has ensured that IBM Z lacks nothing of relevance for the most advanced hosting of the world’s most critical workloads.
Mainframe Culture, Ecosystem and Attitudes
Yet, for all these unique technical capacities, the most important aspect of the IBM Z environment is its humanity: the ecosystem, the business technologists, the culture, the history and the disciplines.
In fact, if you want to form a true computing professional, the high road is to give them a full immersion initiation into becoming a mainframer. The attitudes, culture, disciplines, insights and habits they will garner will distinguish them from any colleague who just learned on distributed platforms. And what will they learn?
They’ll learn to plan, test, do change control, have a backup plan as well as an implementation plan and take personal responsibility for any changes they make to the system.
They’ll learn that rebooting (IPLing, or doing an Initial Program Load) isn’t an option when things go sideways, and every measure must be taken to avoid even the possibility that it might become necessary.
They’ll learn their discipline in depth, but also the entire environment, including the other disciplines they’ll interact with, as well as the jargon, history and local culture.
They’ll learn that the value of what they’re doing isn’t measurable in bits, bytes, bells, whistles or sizzle. It’s about serving the business needs of the organization that’s paying the bills.
If they stick around long enough to get involved in the ecosystem, including user groups such as SHARE, CMG and GSE, and email lists such as IBM-MAIN, they’ll come to have a deep insight into how everything fits together in a way that goes far beyond the merely technical.
The IBM Z Platform’s Role in the Global Economy
The entire global economy is integrated through the mainframe. As I alluded to earlier, you can’t even make an online credit card purchase without it likely passing through at least one mainframe. And if there’s any information of record stored about you on a government, banking or insurance computer or the like, it’s probably on IBM Z.
In other words, this is the place where you’ll find the organizations that shoulder the systems of record worldwide.
And when you’re putting together an RFP for a new platform to host one or more workloads, or doing a comparison between platforms to decide on the optimal fit, if you don’t take the above into account, the answer you get cannot be world class.
Time to sharpen those pencils and start demanding the quality, reliability, performance, security and affordability that were always an option for those who didn’t pre-emptively rule it out.