Fake It After You Make It: Emulation and the Mainframe

IBM Champion Reg Harbeck talks emulation, its history and its uses on the mainframe

I get a kick out of how many technical terms and concepts predate electronic computing and have become so pervasive in their IT usage that we’ve often forgotten about their analog origins and meanings. The most obvious of these is probably “computer.”

Another one of those terms is “emulate.” Although it might seem a bit eccentric for this word to be used in everyday, non-IT conversation, the concept it represents is pervasive, but almost reversed for human applications. What I mean by that is: When a maturing person wishes to be more like a role model, they are likely to emulate certain aspects of that person’s behavior. We see that embodied in popular slogans ranging from “Dress for the job you want” to “Fake it until you make it.” So, the highest goal of such emulation is to become like an established success, but rarely to displace them in a Barry-Lyndon-esque usurpation.

In the world of IT, on the other hand, emulation is consistently about one technology behaving as if it were another already established one, in order to displace or supersede the old one. Such emulation may itself then eventually be superseded, or it may become the definitive, evolving manifestation of the old system.

Imitation: More Than Just Flattery

Interestingly, there are several dividing lines that one may apply with varying degrees of success to highlight distinctions from emulation—the first being imitation. We see this difference in the original System/360’s ability to emulate the previous IBM 1401 computing platform. In doing so, as we learn from Emerson Pugh’s “Building IBM” (p. 273), IBM was competing with Honeywell’s H-200, which imitated the 1401. Both are examples of simulation, but the competitor’s simulation was unsanctioned, whereas the IBM emulation had the authority of the originator of the emulated technology.

For the next few decades, the System/360 and its successors were simulated by plug compatible manufacturer (PCM) platforms such as Amdahl, Fujitsu and Hitachi (not to mention some less official imitations from behind the Iron Curtain—see ES EVM). It was only when IBM transitioned from their original bipolar hardware mainframe technology to CMOS, coincidentally around the same time that the Iron Curtain fell, that the PCMs also fell by the wayside.

One of the challenges of such emulation, generally overcome with the help of Moore’s Law, was achieving acceptable performance from the simulating system compared to the original, given the additional overhead of pretending to be something else. Another challenge was to offer compellingly superior functionality that fully encompassed the relevant strengths of the simulated system while also offering cost benefits.
A consequent additional challenge was divergence away from the definitive original system in a way that could introduce incompatibility if the original system continued as well. IBM responded to this in a double-edged manner: a radical dedication to compatibility with previous versions of their hardware, combined with a definitive path forward that could be reliably followed as the technology advanced. This was yet another nail in the coffins of the PCM competitors.


Microcode, a lower layer than the machine language architecture documented in IBM’s Principles of Operation manual, was one of the mechanisms that IBM used early on to emulate previous technologies, and to implement new ones. Implicitly, this meant that emulation involved some type of software. However, the more layers of software, the more inefficiency that may be introduced, so a pure software emulation of the mainframe would be unlikely to perform at the same scale as a true mainframe.
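To see where that overhead comes from, consider a minimal sketch of what any pure software emulator does at its core: a fetch-decode-execute loop. The two-instruction machine below is entirely made up for illustration; real emulators of real architectures do the same thing with hundreds of opcodes.

```python
# A toy fetch-decode-execute loop: pure-software emulation of an invented
# two-instruction machine (opcodes here are hypothetical, for illustration only).
def run(program, acc=0):
    """Emulate: opcode 1 = add operand to accumulator, opcode 0 = halt."""
    pc = 0
    while True:
        opcode, operand = program[pc]   # fetch and decode one instruction
        if opcode == 0:                 # HALT: return the accumulator
            return acc
        if opcode == 1:                 # ADD immediate
            acc += operand
        pc += 1                         # advance to the next instruction

# Each emulated instruction costs many host instructions (the fetch, the
# dispatch, the bookkeeping): that multiplier is the overhead described above.
```

Microcode avoids much of this penalty by doing the dispatch below the visible instruction set, which is one reason IBM’s hardware-assisted emulation could perform where pure software struggled.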

Which, as history shows, is fine if you don’t want to run production-sized workloads on the emulated machine. Maybe you just want to use it for development, testing or hobby purposes.

But emulation extends beyond platforms. One example is emulation of hardware devices using software. Another is turning entire platforms into virtualized machines—the stuff of clouds.

Virtualizing Devices

Let’s look at emulated hardware devices first. Two big examples on the mainframe are tape and terminals.

Tape Storage

Tape is one of the original storage media used by computers and was already common before the punch card era wound down. Indeed, as the characteristics of punch cards—such as their fixed 80-character width—slowly migrated to other devices, not so much via emulation as conformity, tapes continued to exhibit a staying power that was at least partly due to the range of different applications and middleware that assumed the existence of tape as a definitive sequential storage and data processing medium.

Strengths of tape include:
1. Cost: A large amount of capacity at a lower price than disk (OK, “DASD,” or “dazz dee” as we mainframers say to differentiate ourselves from other platform professionals)
2. Flexibility: Moving the contents of tape between places to transfer programs and data
3. Energy savings: Tape at rest uses no electrical energy

But the first two have gone by the wayside with the advent of affordable random-access storage and internet data transport.

Weaknesses of tape include:
1. Security: Loss in transit or illegitimate access by unauthorized parties when not tightly controlled
2. Wasted capacity: Massive tapes holding only a tiny amount of data
3. Physical footprint: Libraries of individual, often underutilized tapes must be stored and managed
4. Time: Waiting for individual tapes to be mounted (even when using robotics)
5. Risk: The requirement for many drives and other moving parts increases the likelihood of physical device problems

One of the most popular solutions? Emulation! Virtualizing and stacking tapes, so that data is moved off of DASD and onto actual tape when it truly needs to be at rest and can ideally be placed alongside lots of other data on the same physical tape. Until then, the operating system, middleware and applications may think they’re talking to tape, but they’re actually working with tape emulations that pretend to be tape while reducing mount times to zero, as no actual drive or physical mounting are required, and keeping the data on much more securable random access storage media.
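The essential trick can be sketched in a few lines. The class below is a hypothetical toy, not any vendor’s product: it presents a tape-like sequential interface while keeping volumes as in-memory lookups (in a real virtual tape system, on DASD), so a “mount” is just a dictionary access with no robot arm and no drive.

```python
# Hypothetical sketch of virtual tape: callers see mount/write/read semantics,
# but the "drive" is a dict of in-memory volumes, so mount time is effectively zero.
class VirtualTapeLibrary:
    def __init__(self):
        self.volumes = {}  # volser -> list of records (DASD-backed in real systems)

    def mount(self, volser):
        # No physical mount: just locate (or create) the volume image.
        return self.volumes.setdefault(volser, [])

    def write(self, volser, record):
        # Sequential append, preserving tape-style access patterns.
        self.mount(volser).append(record)

    def read_all(self, volser):
        # Sequential read of the whole volume, as a tape application expects.
        return list(self.mount(volser))
```

The point of the sketch is that applications keep their tape-shaped assumptions intact while the storage underneath becomes random-access and instantly available.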


Terminals

Then there’s the carbon-based peripheral device that introduces so much delay and insecurity to any platform: users. And their terminals.

Time sharing. Seriously! Until users could log on with a terminal and have a concurrent piece of a mainframe alongside everything else that was happening, they just had to content themselves with punch cards. Type up a card deck (don’t forget line numbers in case you drop it and have to sort everything back in order!), submit it for batch processing, wait for the result, fix the errors and repeat.

By 1971, mainframe users could use CICS, TSO and other online systems to interact directly with the mainframe, and 3270-style terminals became a de facto standard for such interaction soon thereafter. But they required a physical terminal, attached directly with a coax cable to devices that led directly to the mainframe. Nothing virtual about it, despite the name of IBM’s VTAM terminal management software (Virtual Telecommunications Access Method), introduced in 1974 at the height of IBM waving the “virtual” flag over anything new that ran on their mainframe.

It was the arrival of IBM PCs that led to true virtualization of the terminals—more specifically, emulation of the hardware using a network card and PC3270 software (and a coax connection) in 1983. It wasn’t until 1988 and RFC1041 that this terminal emulation went fully virtual with the advent of telnet 3270 or TN3270, which allowed TCP/IP-based terminal emulation from an arbitrary IP address using software for the entire emulation. Indeed, this could be seen as one of the original TCP/IP cloud computing forays for the mainframe.
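At the wire level, a TN3270 session begins with ordinary telnet option negotiation: the server asks the client whether it will identify its terminal type, and the client answers with a 3270 model name. The sketch below shows just that first exchange (per RFC 854 and RFC 1091), heavily simplified; a full TN3270 client also negotiates binary transmission and end-of-record, which are omitted here, and the model name is only one common choice.

```python
# Simplified first exchange of TN3270 telnet negotiation (RFC 854/1091).
IAC, WILL, DO, SB, SE = 255, 251, 253, 250, 240   # telnet command bytes
TTYPE, IS, SEND = 24, 0, 1                        # terminal-type option codes
TERMINAL = b"IBM-3278-2"                          # a common 3270 model name

def respond(data: bytes) -> bytes:
    """Answer the server's first two terminal-type messages."""
    if data == bytes([IAC, DO, TTYPE]):
        # Server: "DO terminal-type" -- agree to negotiate it.
        return bytes([IAC, WILL, TTYPE])
    if data == bytes([IAC, SB, TTYPE, SEND, IAC, SE]):
        # Server asks which terminal we are; reply with a 3270 model.
        return bytes([IAC, SB, TTYPE, IS]) + TERMINAL + bytes([IAC, SE])
    return b""
```

Once the terminal type is agreed, everything that follows is pure software pretending, over TCP/IP, to be a coax-attached piece of hardware.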

Over the coming decades, coax-attached 3270-style physical hardware terminals began to be increasingly replaced, first with PC3270 (and Attachmate and others) PC-based coax-attached emulations, then with TN3270 on arbitrary TCP/IP-attached workstations, and now even personal computing devices that include phones and tablets. And all this while, innovation of the 3270 device definition didn’t stagnate. Nor did the standard ways for using it. So, SAA and CUA elaborated standards that covered multiple terminal types, including and beyond those using the 3270 data stream. And it became rarer and rarer to locate hardware devices that had functionality as rich as was considered the norm with software 3270 emulators.

Meanwhile, Unix System Services (USS)—first known as OpenEdition MVS—and then Linux both arrived on the mainframe, bringing with them VT100-style non-mainframe terminal emulation. And they also brought rich graphical interfaces, often served up through web serving software such as WebSphere and Apache. Ironically, one of the many things these web interfaces could offer was simulated 3270 interfaces. I hesitate to call them emulated because they often omitted features that were not directly relevant to the interface purpose at hand.

Modern Emulation

Today, I often encounter virtual Windows workstation systems that I access through my browser, which include TN3270 emulation software such as QWS3270. This is particularly common when I teach mainframe courses over the internet.

But wait…we haven’t yet gone full circle, but we’re about to. Because this brings us back to emulating entire platforms on other platforms!

Or is it simulating? I guess that depends partly on who the source of the emulation software is, compared to who owns the platform that is being…um…simulated.

Because here’s one of the mainframe world’s biggest open secrets: A very old version of IBM’s mainframe operating system and hardware has been available to run as a simulation on PCs for quite some time. Not only is it limited to hobbyists who don’t need mainframe performance or other qualities of service, but it’s not even officially sanctioned. And it has been upgraded in various ways over the years, too. But what people wanted was an officially sanctioned “version” of the mainframe that doesn’t require individual users and small companies to buy an entire mainframe, or even a slice of one through an outsourcer.

But how do you emulate the true strengths of the mainframe such as cavernous capacity, massive data throughput and system-of-record-class security? History tells us that simulation is not sufficient.

Part of the answer to that question was provided even before MVS 3.8j, the last public-domain version of IBM’s premier mainframe operating system. MVS itself became available in 1974; two years prior, in 1972, IBM’s Virtual Machine (VM) hypervisor environment arrived, effectively emulating numerous mainframe images concurrently as if each had an entire mainframe to itself. Was it emulation? Wasn’t it?

Today’s mainframe not only allows for hypervisors such as z/VM and KVM, but also containers with Linux on them running directly under z/OS, the premier IBM mainframe operating system. And with the advent of mainframes as small as rack-mounted systems, and virtual mainframes that are even smaller via hypervisors, one may wonder if emulation has become identical to what it emulated…a true case of “Fake it until you make it?”
