POSIX-tivity and IBM Z

“Standard,” according to en.wiktionary.org, can be used as a noun that means, “Something used as a measure for comparative evaluations; a model.” The history of standards is a deeply interwoven thread through the history of humanity. But the emergence of computing and IT over the past century has been a somewhat special yarn.

On one hand, computing technology was invented to respond to both standard and non-standard requirements, from calculating artillery trajectories and doing accounting to solving novel encryption schemes and predicting other enigmatic outcomes such as the weather.

On the other hand, both the means and the participants in developing computing have run the gamut from pure-play business, through government, military and healthcare, to the variously scrutable and scrupulous manifestations of academia.

As computing standards emerge and change based on the available state of the art in hardware and software, and the evolving requirements based on those growing capacities, one could easily describe the nature of those standards as “virtual,” if perhaps lacking in a certain “virtuality.”

One of the biggest challenges for such standards is whether they should be based on explicit needs or on proven approaches. A good example of the first is directories: The X.500 directory standard was specified long before any platform had the technical capacity to host it, and it took a very long time before the technology advanced sufficiently to make it real. One consequence was the Lightweight Directory Access Protocol (LDAP), a standard for accessing existing directories in a manner consistent with the more complex X.500 standard without requiring those directories to be structured exactly as the more ponderous full standard demanded.

In contrast, POSIX (Portable Operating System Interface) was developed with reference to existing, known technology, and was designed to maintain compatibility between operating systems. Because it was heavily based on UNIX-type environments, it took little account of the nature and innovations specific to the IBM mainframe environment, which had its origins in OS/360, an operating system designed specifically to run on System/360 hardware.

So, in the first case, everyone was excluded until the technology caught up. In the second, the environments most excluded were those that weren’t explicitly part of the design considerations. Yet even POSIX had a strong aspirational element—so much so that, when it was propounded, no operating system, not even any version of UNIX, fully met its requirements, despite the standard being built around the nature of the UNIX operating system.

Teaching Old Dogs New Standards?

As we experienced mainframers know, there’s nothing like a challenge for the IBM mainframe to rise to and prove itself again and again, and POSIX turned out to be one such hurdle. Just as key organizations, particularly government ones, were beginning to demand that all production platforms meet the requirements of POSIX, IBM went one better: It wrote a UNIX-like interface to its premier mainframe operating system environment that met all the POSIX standards while behaving as just another set of services, much like IDCAMS for VSAM files.

Suddenly, the world’s first truly POSIX-compliant environment existed, and it wasn’t even a UNIX operating system as such! Instead, it was an aspect of MVS (Multiple Virtual Storage, the operating system that still underlies today’s z/OS), initially called “OpenEdition MVS” and eventually known as “UNIX System Services” (aka USS) in today’s z/OS operating system environment.

Now, many words could be—and have been—written about the “clunky” personality of USS, which was written from scratch to meet all the POSIX standards without copying anyone else’s code, which was deeply interwoven with MVS, and which used things like the EBCDIC code page. But the fact is, it worked, ensuring that the IBM mainframe would continue to be a viable, accepted participant in the world of large-scale IT. Not only that, but it became the basis of many of the ways the platform ceased to be perceived as an island, reaching out to the rest of the world via TCP/IP and the numerous other UNIX-native capacities the distributed world has come to take for granted.

From Eagles to Penguins

Meanwhile, the rest of the IT world wasn’t standing still. Rather, another pretender to the POSIX throne was emerging: Linux. Now, to be clear, Linux is not a natural fit for POSIX compliance, for the picky reason that it was being developed at the same time as POSIX, so some design decisions were made without authoritative information about what the standard would eventually require. As a result, while most Linux distributions adhere to many aspects of POSIX, they generally are not fully compliant. They are nonetheless capable of doing quality work in a manner compatible with the reasonably stringent IT requirements that POSIX was meant to embody.

By the turn of the millennium, all three of these had fully emerged: POSIX, USS and Linux, and they were about to converge on the IBM mainframe. As aficionados may know, the traditional mascot for the IBM z/OS environment is the eagle, and Linux’s mascot is a penguin. But what would give Linux flight on the IBM mainframe wasn’t a bird at all—its mascot was actually a teddy bear!

Teddy Bears and Virtual Penguins

So, there it was: the mainframe hardware just waiting for someone to compile the Linux source code to run on this “new” platform. But what do you do with all that spare capacity? Even running under LPAR, you would still be limited to a relatively small number of concurrent instances. So why not run a whole flock of them?

Thus, the great, original hypervisor, the source of the term “Virtual Machine,” generally available on the mainframe since 1972, was pressed into service to make good use of the mainframe hardware by running tens, hundreds, thousands, even tens of thousands of concurrent Linux instances. But what of USS? Didn’t that make it redundant?

On the mainframe, redundancy is the name of the game. On one hand, you could now recompile and only minimally modify applications that ran on Linux on other hardware platforms, making the mainframe an instant powerful option for cloud-type services. On the other hand, you could write or rewrite—or recompile with some greater modification—applications that could take advantage of the mainframe’s traditional strengths while still behaving as internet-connected UNIXes.

But if having one kind of hypervisor on the mainframe is good (two if you count LPAR), then maybe having more is better? Enter KVM, a hypervisor that originated as part of the Linux kernel on distributed platforms, hosting virtual machines running workloads such as Linux, and was subsequently ported to IBM Z.

Container Shipping

Then someone had the brilliant idea to pare down these virtual machines to their bare essentials and run them as glorified application address spaces, or “containers.” For some reason, a shipping and ocean theme emerged, with nautical symbolism including a ship’s steering wheel for the Kubernetes (K8s) approach and a whale for the Docker approach.

Now here’s the thing: Originally, one of the main drivers for the creation of IBM’s VM was to host z/OS and its predecessors, possibly alongside other IBM operating system environments such as what is now known as z/VSE. Suddenly, it was z/OS itself that got tagged to host these mini-OS container environments.

Looking back on the journey that made the IBM mainframe the place where POSIX has such a definitive manifestation (in OS environments, a home in hypervisors, and now echoes in container environments that provide similar contexts for OS compatibility), it appears that such density was always destined for the platform built from the beginning to handle world-class business.

In other words: IBM Z became the standard bearer for every kind of business IT virtualization!