
A Student’s Journey Into Artificial Intelligence | Part 1: The History

In part one of this series on artificial intelligence, Mark Ray discusses some of the technologies and concepts that laid the foundation for what would become AI

The views and information expressed in this article series are those of the author.

I stared at my terminal session for a long time. As part of a hardware refresh, we’d acquired an IBM AC922. The IBM tech had racked the machine, and we hooked it up to the network. I had planned on using it for a Linux project, but when that effort went in a different direction, I was left with an extremely high-powered computer and no purpose for it. So there I sat, my terminal cursor blinking away. After what seemed like hours, I picked up the phone and called IBM—my ex-employer.

The Accidental AI System

The IBM AC922 is one of those Linux-only systems. Based on POWER9 technology, the system as installed in my data center came with 40 P9 SMT-4 CPUs, 256GB of memory, a terabyte’s worth of internal storage and a mix of InfiniBand and Ethernet adapters. Most importantly, it came with NVIDIA Tesla V100 GPUs—each of these powerhouses has 640 Tensor cores, and my system has four of them, interconnected with NVIDIA’s NVLink running at 50Gbps. We will get to the nuts and bolts of this hardware in due course, but suffice it to say this system leaves most vanilla POWER9s in the dust when it comes to raw computing power.

But a supercomputer does you little good unless you actually do something with it. As it stood, my AC922 may as well have served as a nice flower box. Fortunately, Jim from IBM called me back. I explained my situation to him, giving him my sad tale about how it was a shame to have this system just sitting there and how it had all these fiddly bits in it and—

“Hey Mark,” Jim said. I think he sighed. “You know this is one of our artificial intelligence systems, right?”

This was news to me. But I wanted to hear more. Condensing the next several days’ worth of conversations, Jim explained to me how IBM was getting into AI in a big way, not just with its hardware, but with a massive suite of software under the umbrella of “Watson.” The best part was that a lot of the Watson AI software was free, having been placed in the public domain. (As a quick aside, IBM has since sold off its Watson Health division, which was where I initially wanted to take my system, but about five years ago it was a flagship product for the gang from Yorktown.)

I then engaged Jim on a project to breathe some AI life into my AC922, starting me on a journey that has taken me through years of study, a library full of books, more online courses than I can count and a certification from the Massachusetts Institute of Technology. As I write this in early 2023, artificial intelligence (AI) is now my life.

Artificial Intelligence: Its Ancient Beginnings

Mankind has always imbued inanimate objects—especially those with human likeness—with AI. Mythology is filled with stories of statues, golems, gargoyles and a whole host of other stone or bronze creatures that spring to life with minds of their own, usually to wreak havoc on human beings. It’s one of the techniques ancient peoples used to help explain the world around them and formulate theories on why we flesh-and-blood folks act the way we do.

Gradually, over the centuries, as humans evolved, so did their artificial creations. Around the year 1493, Leonardo da Vinci drew his conceptualization of a “computing machine”; what amounted to a mechanical abacus was never actually built, but it was a stake in the ground that seemed to mark a boundary between truly archaic thought, in which demons, dragons and spirits of all types permeated existence, and the burgeoning fields of scientific thought that brought us practitioners like Leonardo, Galileo and Isaac Newton. The years went by, and thinking about computation and the nature of intelligence evolved.

Charles Babbage was an English mathematician, philosopher, inventor and mechanical engineer—what we now call a polymath—who is considered by many to be the Father of the Computer. Babbage lived from 1791 to 1871 and conceptualized many remarkable devices, the best known of which is probably the Difference Engine, a mechanical device intended to compute mathematical tables automatically; it was never completed in his lifetime. Later on, he proposed the Analytical Engine, a programmable machine that would have taken its instructions from punched cards and is widely considered the first design for a general-purpose mechanical computer. Some of Babbage’s inventions were partially assembled after his death by his son, but at the time they were considered mere curiosities and were relegated to history. But his concepts stuck, and more people started thinking about them.

Computing concepts evolved again.

Alan Turing’s Test and the Dartmouth Conference

There is no way to talk about the evolution of AI without talking about Alan Turing. Turing was born in London in 1912, and his contributions to computer science not only likely saved millions of lives during World War II, but may very well have allowed the Allies to win it. It was during the war, at a place called Bletchley Park in England, that Turing and his colleagues broke the Nazi Enigma code, allowing the Allies to intercept and decrypt Germany’s secret military communications, plan counter-offensives against Axis operations and, eventually, emerge victorious. Turing would go on to formulate many of the bedrock principles of computer science, but his most well-known contribution—publicly, at least—was the “Turing Test.”

The Turing Test involves a human evaluator engaging in conversation with both a machine and a human, without knowing which is which. If the evaluator cannot reliably distinguish the machine from the human, the machine is said to have passed the Turing Test. Turing proposed the test, mostly as a thought experiment, in his 1950 paper, "Computing Machinery and Intelligence." The Turing Test is not designed to settle philosophical or theoretical definitions of intelligence, but to provide an operational, practical criterion that can be applied to real machines. With the Turing Test, another stake was driven into the ground, and AI research proceeded with many scientists attempting to build systems that could pass it.

In the summer of 1956, a conference was held on the campus of Dartmouth College. Now known as the “Dartmouth Workshop,” the conference is regarded as the place where AI was founded as a bona fide field of research and development. John McCarthy, a computer and cognitive scientist, coined the term “artificial intelligence” and organized the Dartmouth Workshop to bring together some of the best minds of the time to clarify ideas about thinking machines. McCarthy also developed the Lisp programming language and, interestingly, invented the concept of garbage collection in programming. Working with scientists Marvin Minsky, Claude Shannon and Nathaniel Rochester, McCarthy formulated the proposal for the Workshop, which states in part:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

The full text of the proposal can be downloaded from Stanford University and is well worth a read by anyone interested in the genesis of AI.

To say that the Dartmouth Workshop got a whole lot of scientists, mathematicians and engineers thinking about AI would be an understatement. The Workshop participants went on to teach AI concepts to their students and coworkers, who then went on to teach others. And so AI thought evolved yet again.

Beginning AI: The Perceptron

In the signature field of my corporate email, you’ll find the following formula:

y = sign(Xw + b)

This is a simple form of the “perceptron formula.” Developed by Frank Rosenblatt in 1958, the perceptron is a fundamental building block of many types of artificial neural networks, and it has been widely used in pattern recognition, speech recognition and computer vision, among other applications. Perceptrons are the forerunners of the modern artificial “neurons” that power the neural networks used in many implementations of AI. Rosenblatt implemented his first perceptron in software on an IBM 704, then in a piece of custom hardware called the “Mark 1 Perceptron.” The idea was to use the system to analyze images of boats for the United States Navy, as part of a secret four-year effort to develop the perceptron into a useful tool for photo interpreters. In a press conference, Rosenblatt made inflated statements about the perceptron, which the New York Times summed up:

The perceptron is "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."

It turned out the perceptron was no such thing; it was simply too limited, in both hardware and software, to do much of anything, let alone power a machine that could walk, talk and do all the other things cited in the article. In point of fact, the perceptron so spectacularly failed to live up to its hype that a long period of stagnation set in, during which AI research virtually stopped and funding for AI projects dried up. This was the start of what is now called the first “AI winter.”

But it is said that out of failure sometimes come the greatest successes. In our brains, we do not have just one neuron performing all the work that goes into thinking. Although an exact number is difficult to pin down, humans average very roughly 86 billion (that’s billion, with a “b”) neurons in their brains. Each neuron has a unique structure that allows it to receive and transmit electrical and chemical signals to other neurons, as well as to non-neuronal cells. The point is: A single neuron is pretty much useless. But get a whole lot of them acting as a team, and you can get a Plato, a Newton or an Einstein as a result. And so it is with perceptrons, or their progeny, artificial neurons. It was out of this concept that artificial neural networks were born, along with a whole host of other AI technologies.
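
To make the formula from my email signature a little more concrete, here is a minimal sketch in Python of a single perceptron: the prediction is exactly y = sign(Xw + b), and the weights are adjusted with Rosenblatt-style error correction. The AND-gate data, the learning rate and the number of passes are illustrative choices on my part, not anything taken from Rosenblatt’s Mark 1.

import numpy as np

def predict(X, w, b):
    # y = sign(Xw + b): each sample lands on one side of the learned
    # boundary and is labeled +1 or -1 accordingly.
    return np.where(X @ w + b >= 0, 1, -1)

# Illustrative toy data: an AND gate with labels in {-1, +1} (my choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])

w = np.zeros(2)   # weights, one per input feature
b = 0.0           # bias
lr = 0.1          # learning rate (arbitrary illustrative value)

# Perceptron learning rule: for each misclassified point, nudge the
# weights and bias toward classifying that point correctly.
for _ in range(20):                    # 20 passes is plenty for this tiny dataset
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:     # wrong side of the boundary (or on it)
            w = w + lr * yi * xi
            b = b + lr * yi

print(predict(X, w, b))                # expected: [-1 -1 -1  1]

Run enough passes over the data and the weights settle on a straight line that separates the two classes, and a straight line is all a single perceptron can ever learn. That limitation is exactly why the many-neurons-working-together idea described above became necessary.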

Expert Systems and the First AI Winter   

I managed a computer store in the 1980s. We had fun in those days. Our flagship hardware line was the Apple Macintosh, and we also carried PC clones. I had always been a fan of the movie “2001: A Space Odyssey” (if you’re reading this, you probably are as well), and I wondered when computers would approach the HAL 9000’s AI capabilities. Every day, a different software vendor would send me a package with their latest product, hoping I would put it on my store’s shelves. One day, I got a FedEx delivery with a weighty box inside. The only writing on the box, as I recall, was “LISP 2.0.” There was a note that said the enclosed disks were a first step toward a thinking machine. I was intrigued. I loaded the disks onto a PC’s hard drive (incidentally, in those days, a 20MB hard drive cost upwards of $3,000!) and started plowing through the manual. I was tinkering with my first “expert system.”

In part two, we’ll look at how expert systems helped end the first AI winter and gave rise to the many AI algorithms and frameworks we use today.
 